Feb 9 07:52:56.546214 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 9 07:52:56.546226 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 9 07:52:56.546233 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 07:52:56.546238 kernel: BIOS-provided physical RAM map:
Feb 9 07:52:56.546241 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 9 07:52:56.546245 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 9 07:52:56.546249 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 9 07:52:56.546253 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 9 07:52:56.546257 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 9 07:52:56.546261 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000061f6efff] usable
Feb 9 07:52:56.546265 kernel: BIOS-e820: [mem 0x0000000061f6f000-0x0000000061f6ffff] ACPI NVS
Feb 9 07:52:56.546269 kernel: BIOS-e820: [mem 0x0000000061f70000-0x0000000061f70fff] reserved
Feb 9 07:52:56.546272 kernel: BIOS-e820: [mem 0x0000000061f71000-0x000000006c0c4fff] usable
Feb 9 07:52:56.546276 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved
Feb 9 07:52:56.546281 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable
Feb 9 07:52:56.546286 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS
Feb 9 07:52:56.546290 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved
Feb 9 07:52:56.546294 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable
Feb 9 07:52:56.546298 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved
Feb 9 07:52:56.546302 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 9 07:52:56.546306 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 9 07:52:56.546311 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 9 07:52:56.546315 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 07:52:56.546319 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 9 07:52:56.546323 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable
Feb 9 07:52:56.546327 kernel: NX (Execute Disable) protection: active
Feb 9 07:52:56.546332 kernel: SMBIOS 3.2.1 present.
Feb 9 07:52:56.546336 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 9 07:52:56.546340 kernel: tsc: Detected 3400.000 MHz processor
Feb 9 07:52:56.546344 kernel: tsc: Detected 3399.906 MHz TSC
Feb 9 07:52:56.546348 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 07:52:56.546353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 07:52:56.546357 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000
Feb 9 07:52:56.546361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 07:52:56.546365 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000
Feb 9 07:52:56.546370 kernel: Using GB pages for direct mapping
Feb 9 07:52:56.546375 kernel: ACPI: Early table checksum verification disabled
Feb 9 07:52:56.546379 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 9 07:52:56.546383 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 9 07:52:56.546388 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013)
Feb 9 07:52:56.546394 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 9 07:52:56.546398 kernel: ACPI: FACS 0x000000006D762F80 000040
Feb 9 07:52:56.546404 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013)
Feb 9 07:52:56.546408 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013)
Feb 9 07:52:56.546413 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 9 07:52:56.546418 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 9 07:52:56.546422 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 9 07:52:56.546427 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 9 07:52:56.546431 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 9 07:52:56.546437 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 9 07:52:56.546441 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:52:56.546446 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 9 07:52:56.546450 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 9 07:52:56.546455 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:52:56.546459 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:52:56.546464 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 9 07:52:56.546469 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 9 07:52:56.546473 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:52:56.546479 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 9 07:52:56.546483 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 9 07:52:56.546488 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 9 07:52:56.546492 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 9 07:52:56.546497 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 9 07:52:56.546501 kernel: ACPI: SSDT 0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 9 07:52:56.546506 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xf0a 01072009 AMI 00010013)
Feb 9 07:52:56.546511 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 9 07:52:56.546516 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 9 07:52:56.546521 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 9 07:52:56.546525 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 9 07:52:56.546530 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 9 07:52:56.546535 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733]
Feb 9 07:52:56.546539 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e]
Feb 9 07:52:56.546544 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf]
Feb 9 07:52:56.546551 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863]
Feb 9 07:52:56.546556 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab]
Feb 9 07:52:56.546562 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b]
Feb 9 07:52:56.546566 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b]
Feb 9 07:52:56.546571 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0]
Feb 9 07:52:56.546575 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3]
Feb 9 07:52:56.546580 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd]
Feb 9 07:52:56.546584 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea]
Feb 9 07:52:56.546589 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27]
Feb 9 07:52:56.546593 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5]
Feb 9 07:52:56.546614 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce]
Feb 9 07:52:56.546619 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311]
Feb 9 07:52:56.546624 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab]
Feb 9 07:52:56.546628 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d]
Feb 9 07:52:56.546633 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071]
Feb 9 07:52:56.546637 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab]
Feb 9 07:52:56.546642 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103]
Feb 9 07:52:56.546646 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e]
Feb 9 07:52:56.546650 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17]
Feb 9 07:52:56.546655 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b]
Feb 9 07:52:56.546660 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93]
Feb 9 07:52:56.546665 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26]
Feb 9 07:52:56.546669 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f]
Feb 9 07:52:56.546674 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f]
Feb 9 07:52:56.546678 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf]
Feb 9 07:52:56.546683 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf]
Feb 9 07:52:56.546687 kernel: ACPI: Reserving HEST table memory at [mem 0x6d68ffe0-0x6d69025b]
Feb 9 07:52:56.546692 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1]
Feb 9 07:52:56.546696 kernel: No NUMA configuration found
Feb 9 07:52:56.546701 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff]
Feb 9 07:52:56.546706 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff]
Feb 9 07:52:56.546710 kernel: Zone ranges:
Feb 9 07:52:56.546715 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 07:52:56.546719 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 07:52:56.546724 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 07:52:56.546728 kernel: Movable zone start for each node
Feb 9 07:52:56.546733 kernel: Early memory node ranges
Feb 9 07:52:56.546737 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 9 07:52:56.546743 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 9 07:52:56.546747 kernel: node 0: [mem 0x0000000040400000-0x0000000061f6efff]
Feb 9 07:52:56.546752 kernel: node 0: [mem 0x0000000061f71000-0x000000006c0c4fff]
Feb 9 07:52:56.546756 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff]
Feb 9 07:52:56.546761 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff]
Feb 9 07:52:56.546765 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff]
Feb 9 07:52:56.546770 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff]
Feb 9 07:52:56.546778 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 07:52:56.546783 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 9 07:52:56.546788 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 9 07:52:56.546793 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 9 07:52:56.546798 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 9 07:52:56.546803 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 9 07:52:56.546808 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges
Feb 9 07:52:56.546813 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 9 07:52:56.546818 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 07:52:56.546823 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 07:52:56.546828 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 07:52:56.546833 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 07:52:56.546838 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 07:52:56.546842 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 07:52:56.546847 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 07:52:56.546852 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 07:52:56.546857 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 07:52:56.546862 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 07:52:56.546866 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 07:52:56.546872 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 07:52:56.546877 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 07:52:56.546881 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 07:52:56.546886 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 07:52:56.546891 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 07:52:56.546896 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 9 07:52:56.546901 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 07:52:56.546905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 07:52:56.546910 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 07:52:56.546916 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 07:52:56.546921 kernel: TSC deadline timer available
Feb 9 07:52:56.546926 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 9 07:52:56.546930 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices
Feb 9 07:52:56.546935 kernel: Booting paravirtualized kernel on bare hardware
Feb 9 07:52:56.546940 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 07:52:56.546945 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 9 07:52:56.546950 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 07:52:56.546955 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 07:52:56.546960 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 9 07:52:56.546965 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323
Feb 9 07:52:56.546970 kernel: Policy zone: Normal
Feb 9 07:52:56.546975 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 07:52:56.546980 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 07:52:56.546985 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 9 07:52:56.546990 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 9 07:52:56.546995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 07:52:56.547001 kernel: Memory: 32555728K/33281940K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 725952K reserved, 0K cma-reserved)
Feb 9 07:52:56.547006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 9 07:52:56.547011 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 07:52:56.547015 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 07:52:56.547020 kernel: rcu: Hierarchical RCU implementation.
Feb 9 07:52:56.547025 kernel: rcu: RCU event tracing is enabled.
Feb 9 07:52:56.547030 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 9 07:52:56.547035 kernel: Rude variant of Tasks RCU enabled.
Feb 9 07:52:56.547040 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 07:52:56.547046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 07:52:56.547051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 9 07:52:56.547055 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 9 07:52:56.547060 kernel: random: crng init done
Feb 9 07:52:56.547065 kernel: Console: colour dummy device 80x25
Feb 9 07:52:56.547070 kernel: printk: console [tty0] enabled
Feb 9 07:52:56.547074 kernel: printk: console [ttyS1] enabled
Feb 9 07:52:56.547079 kernel: ACPI: Core revision 20210730
Feb 9 07:52:56.547084 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 9 07:52:56.547090 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 07:52:56.547095 kernel: DMAR: Host address width 39
Feb 9 07:52:56.547100 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 9 07:52:56.547104 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 9 07:52:56.547109 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 9 07:52:56.547114 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 9 07:52:56.547119 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff
Feb 9 07:52:56.547124 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff
Feb 9 07:52:56.547129 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 9 07:52:56.547134 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 9 07:52:56.547139 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 9 07:52:56.547144 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 9 07:52:56.547149 kernel: x2apic enabled
Feb 9 07:52:56.547154 kernel: Switched APIC routing to cluster x2apic.
Feb 9 07:52:56.547158 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 07:52:56.547163 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 9 07:52:56.547168 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 9 07:52:56.547173 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 9 07:52:56.547179 kernel: process: using mwait in idle threads
Feb 9 07:52:56.547183 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 07:52:56.547188 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 07:52:56.547193 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 07:52:56.547198 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 07:52:56.547203 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 07:52:56.547208 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 07:52:56.547213 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 07:52:56.547218 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 07:52:56.547223 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 07:52:56.547228 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 07:52:56.547233 kernel: TAA: Mitigation: TSX disabled
Feb 9 07:52:56.547238 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 9 07:52:56.547243 kernel: SRBDS: Mitigation: Microcode
Feb 9 07:52:56.547247 kernel: GDS: Vulnerable: No microcode
Feb 9 07:52:56.547252 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 07:52:56.547257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 07:52:56.547262 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 07:52:56.547267 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 9 07:52:56.547272 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 9 07:52:56.547277 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 07:52:56.547282 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 9 07:52:56.547287 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 9 07:52:56.547291 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 9 07:52:56.547296 kernel: Freeing SMP alternatives memory: 32K
Feb 9 07:52:56.547301 kernel: pid_max: default: 32768 minimum: 301
Feb 9 07:52:56.547306 kernel: LSM: Security Framework initializing
Feb 9 07:52:56.547311 kernel: SELinux: Initializing.
Feb 9 07:52:56.547316 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 07:52:56.547321 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 07:52:56.547326 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 9 07:52:56.547331 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 07:52:56.547336 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 9 07:52:56.547341 kernel: ... version:                4
Feb 9 07:52:56.547345 kernel: ... bit width:              48
Feb 9 07:52:56.547351 kernel: ... generic registers:      4
Feb 9 07:52:56.547356 kernel: ... value mask:             0000ffffffffffff
Feb 9 07:52:56.547361 kernel: ... max period:             00007fffffffffff
Feb 9 07:52:56.547366 kernel: ... fixed-purpose events:   3
Feb 9 07:52:56.547370 kernel: ... event mask:             000000070000000f
Feb 9 07:52:56.547375 kernel: signal: max sigframe size: 2032
Feb 9 07:52:56.547380 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 07:52:56.547385 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 9 07:52:56.547390 kernel: smp: Bringing up secondary CPUs ...
Feb 9 07:52:56.547395 kernel: x86: Booting SMP configuration:
Feb 9 07:52:56.547400 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 9 07:52:56.547405 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 07:52:56.547410 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 9 07:52:56.547415 kernel: smp: Brought up 1 node, 16 CPUs
Feb 9 07:52:56.547420 kernel: smpboot: Max logical packages: 1
Feb 9 07:52:56.547425 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 9 07:52:56.547429 kernel: devtmpfs: initialized
Feb 9 07:52:56.547434 kernel: x86/mm: Memory block size: 128MB
Feb 9 07:52:56.547440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x61f6f000-0x61f6ffff] (4096 bytes)
Feb 9 07:52:56.547445 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes)
Feb 9 07:52:56.547450 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 07:52:56.547454 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 9 07:52:56.547459 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 07:52:56.547464 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 07:52:56.547469 kernel: audit: initializing netlink subsys (disabled)
Feb 9 07:52:56.547474 kernel: audit: type=2000 audit(1707465171.110:1): state=initialized audit_enabled=0 res=1
Feb 9 07:52:56.547478 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 07:52:56.547484 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 07:52:56.547489 kernel: cpuidle: using governor menu
Feb 9 07:52:56.547494 kernel: ACPI: bus type PCI registered
Feb 9 07:52:56.547498 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 07:52:56.547503 kernel: dca service started, version 1.12.1
Feb 9 07:52:56.547508 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 9 07:52:56.547513 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 9 07:52:56.547518 kernel: PCI: Using configuration type 1 for base access
Feb 9 07:52:56.547522 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 9 07:52:56.547528 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 07:52:56.547533 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 07:52:56.547538 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 07:52:56.547542 kernel: ACPI: Added _OSI(Module Device)
Feb 9 07:52:56.547547 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 07:52:56.547554 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 07:52:56.547578 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 07:52:56.547582 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 07:52:56.547587 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 07:52:56.547593 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 07:52:56.547598 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 9 07:52:56.547603 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547621 kernel: ACPI: SSDT 0xFFFF903840215D00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 9 07:52:56.547626 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 9 07:52:56.547631 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547636 kernel: ACPI: SSDT 0xFFFF903841CE9400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 9 07:52:56.547641 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547645 kernel: ACPI: SSDT 0xFFFF903841C5F800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 9 07:52:56.547651 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547656 kernel: ACPI: SSDT 0xFFFF903841C59800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 9 07:52:56.547660 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547665 kernel: ACPI: SSDT 0xFFFF90384014A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 9 07:52:56.547670 kernel: ACPI: Dynamic OEM Table Load:
Feb 9 07:52:56.547675 kernel: ACPI: SSDT 0xFFFF903841CEF800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 9 07:52:56.547679 kernel: ACPI: Interpreter enabled
Feb 9 07:52:56.547684 kernel: ACPI: PM: (supports S0 S5)
Feb 9 07:52:56.547689 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 07:52:56.547694 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 9 07:52:56.547699 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 9 07:52:56.547704 kernel: HEST: Table parsing has been initialized.
Feb 9 07:52:56.547709 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 9 07:52:56.547714 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 07:52:56.547719 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 9 07:52:56.547724 kernel: ACPI: PM: Power Resource [USBC]
Feb 9 07:52:56.547728 kernel: ACPI: PM: Power Resource [V0PR]
Feb 9 07:52:56.547733 kernel: ACPI: PM: Power Resource [V1PR]
Feb 9 07:52:56.547738 kernel: ACPI: PM: Power Resource [V2PR]
Feb 9 07:52:56.547743 kernel: ACPI: PM: Power Resource [WRST]
Feb 9 07:52:56.547748 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 9 07:52:56.547753 kernel: ACPI: PM: Power Resource [FN00]
Feb 9 07:52:56.547758 kernel: ACPI: PM: Power Resource [FN01]
Feb 9 07:52:56.547763 kernel: ACPI: PM: Power Resource [FN02]
Feb 9 07:52:56.547767 kernel: ACPI: PM: Power Resource [FN03]
Feb 9 07:52:56.547772 kernel: ACPI: PM: Power Resource [FN04]
Feb 9 07:52:56.547777 kernel: ACPI: PM: Power Resource [PIN]
Feb 9 07:52:56.547782 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 9 07:52:56.547843 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 07:52:56.547887 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 9 07:52:56.547927 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 9 07:52:56.547934 kernel: PCI host bridge to bus 0000:00
Feb 9 07:52:56.547976 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 07:52:56.548012 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 07:52:56.548051 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 07:52:56.548087 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window]
Feb 9 07:52:56.548122 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 9 07:52:56.548157 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 9 07:52:56.548205 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 9 07:52:56.548252 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 9 07:52:56.548295 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.548342 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 9 07:52:56.548383 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.548428 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 9 07:52:56.548469 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit]
Feb 9 07:52:56.548510 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 9 07:52:56.548552 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 9 07:52:56.548636 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 9 07:52:56.548676 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit]
Feb 9 07:52:56.548720 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 9 07:52:56.548761 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit]
Feb 9 07:52:56.548805 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 9 07:52:56.548847 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit]
Feb 9 07:52:56.548888 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 9 07:52:56.548935 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 9 07:52:56.548974 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit]
Feb 9 07:52:56.549015 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit]
Feb 9 07:52:56.549057 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 9 07:52:56.549097 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:52:56.549140 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 9 07:52:56.549182 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:52:56.549225 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 9 07:52:56.549266 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit]
Feb 9 07:52:56.549308 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 9 07:52:56.549358 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 9 07:52:56.549400 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit]
Feb 9 07:52:56.549443 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 9 07:52:56.549486 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 9 07:52:56.549527 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit]
Feb 9 07:52:56.549603 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 9 07:52:56.549650 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 9 07:52:56.549690 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff]
Feb 9 07:52:56.549730 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff]
Feb 9 07:52:56.549772 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 9 07:52:56.549811 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 9 07:52:56.549852 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 9 07:52:56.549892 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff]
Feb 9 07:52:56.549932 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 9 07:52:56.549977 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 9 07:52:56.550021 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.550068 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 9 07:52:56.550109 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.550154 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 9 07:52:56.550197 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.550241 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 9 07:52:56.550281 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.550327 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 9 07:52:56.550368 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.550412 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 9 07:52:56.550454 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 9 07:52:56.550501 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 9 07:52:56.550546 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 9 07:52:56.550619 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit]
Feb 9 07:52:56.550661 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 9 07:52:56.550704 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 9 07:52:56.550745 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 9 07:52:56.550787 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 9 07:52:56.550834 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 9 07:52:56.550876 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 9 07:52:56.550917 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref]
Feb 9 07:52:56.550959 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 9 07:52:56.551000 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 07:52:56.551043 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 07:52:56.551092 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 9 07:52:56.551136 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 9 07:52:56.551178 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref]
Feb 9 07:52:56.551220 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 9 07:52:56.551261 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 9 07:52:56.551303 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 9 07:52:56.551344 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 9 07:52:56.551388 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff]
Feb 9 07:52:56.551428 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 9 07:52:56.551468 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 9 07:52:56.551515 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 9 07:52:56.551582 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff]
Feb 9 07:52:56.551644 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 9 07:52:56.551685 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff]
Feb 9 07:52:56.551729 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 9 07:52:56.551769 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 9 07:52:56.551810 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 9 07:52:56.551850 kernel: pci 0000:00:1b.4:
bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 07:52:56.551898 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 9 07:52:56.551940 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 9 07:52:56.551982 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 07:52:56.552024 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 9 07:52:56.552067 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 9 07:52:56.552108 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 07:52:56.552149 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 07:52:56.552190 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 07:52:56.552230 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 07:52:56.552276 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 07:52:56.552375 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 9 07:52:56.552418 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 07:52:56.552462 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 07:52:56.552503 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 07:52:56.552544 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 07:52:56.552628 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.552674 kernel: pci_bus 0000:08: extended config space not accessible Feb 9 07:52:56.552722 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 07:52:56.552768 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 9 07:52:56.552813 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 9 07:52:56.552858 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 07:52:56.552901 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 07:52:56.552946 kernel: pci 0000:08:00.0: supports D1 D2 Feb 9 07:52:56.552990 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot 
D3cold Feb 9 07:52:56.553034 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 07:52:56.553077 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 07:52:56.553121 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.553129 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 07:52:56.553134 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 07:52:56.553139 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 07:52:56.553145 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 07:52:56.553150 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 07:52:56.553155 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 07:52:56.553160 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 07:52:56.553165 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 07:52:56.553172 kernel: iommu: Default domain type: Translated Feb 9 07:52:56.553177 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 07:52:56.553220 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 9 07:52:56.553265 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 07:52:56.553309 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 9 07:52:56.553316 kernel: vgaarb: loaded Feb 9 07:52:56.553321 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 07:52:56.553327 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 07:52:56.553332 kernel: PTP clock support registered Feb 9 07:52:56.553338 kernel: PCI: Using ACPI for IRQ routing Feb 9 07:52:56.553343 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 07:52:56.553348 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 07:52:56.553354 kernel: e820: reserve RAM buffer [mem 0x61f6f000-0x63ffffff] Feb 9 07:52:56.553359 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 9 07:52:56.553364 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 9 07:52:56.553369 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 9 07:52:56.553374 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 07:52:56.553379 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 9 07:52:56.553385 kernel: clocksource: Switched to clocksource tsc-early Feb 9 07:52:56.553390 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 07:52:56.553395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 07:52:56.553401 kernel: pnp: PnP ACPI init Feb 9 07:52:56.553444 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 07:52:56.553487 kernel: pnp 00:02: [dma 0 disabled] Feb 9 07:52:56.553529 kernel: pnp 00:03: [dma 0 disabled] Feb 9 07:52:56.553592 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 07:52:56.553649 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 07:52:56.553690 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 07:52:56.553729 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 07:52:56.553766 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 07:52:56.553803 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 07:52:56.553841 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 07:52:56.553877 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved 
Feb 9 07:52:56.553913 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 07:52:56.553950 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 07:52:56.553986 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 07:52:56.554026 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 07:52:56.554062 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 07:52:56.554100 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 07:52:56.554136 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 07:52:56.554172 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 07:52:56.554208 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 07:52:56.554244 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 07:52:56.554286 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 07:52:56.554294 kernel: pnp: PnP ACPI: found 10 devices Feb 9 07:52:56.554299 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 07:52:56.554305 kernel: NET: Registered PF_INET protocol family Feb 9 07:52:56.554311 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 07:52:56.554316 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 07:52:56.554321 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 07:52:56.554326 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 07:52:56.554331 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 07:52:56.554337 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 07:52:56.554342 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 07:52:56.554348 kernel: UDP-Lite hash table 
entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 07:52:56.554353 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 07:52:56.554358 kernel: NET: Registered PF_XDP protocol family Feb 9 07:52:56.554399 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 9 07:52:56.554440 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 9 07:52:56.554481 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 9 07:52:56.554524 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 07:52:56.554595 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 07:52:56.554658 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 07:52:56.554702 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 07:52:56.554744 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 07:52:56.554786 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 07:52:56.554827 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 9 07:52:56.554870 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 07:52:56.554911 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 07:52:56.554953 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 07:52:56.554994 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 07:52:56.555036 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 07:52:56.555077 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 07:52:56.555117 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 07:52:56.555159 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 07:52:56.555200 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 07:52:56.555246 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 07:52:56.555288 kernel: pci 0000:07:00.0: bridge window [io 
0x3000-0x3fff] Feb 9 07:52:56.555330 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.555371 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 07:52:56.555413 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 07:52:56.555454 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.555491 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 07:52:56.555527 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 07:52:56.555592 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 07:52:56.555649 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 07:52:56.555684 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 9 07:52:56.555719 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 07:52:56.555762 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 9 07:52:56.555800 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 07:52:56.555843 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 9 07:52:56.555883 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 9 07:52:56.555924 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 9 07:52:56.555961 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 9 07:52:56.556003 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 07:52:56.556041 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.556080 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 9 07:52:56.556122 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 07:52:56.556130 kernel: PCI: CLS 64 bytes, default 64 Feb 9 07:52:56.556135 kernel: DMAR: No ATSR found Feb 9 07:52:56.556140 kernel: DMAR: No SATC found Feb 9 07:52:56.556146 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 9 
07:52:56.556151 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 9 07:52:56.556156 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 9 07:52:56.556162 kernel: DMAR: IOMMU feature pasid inconsistent Feb 9 07:52:56.556167 kernel: DMAR: IOMMU feature eafs inconsistent Feb 9 07:52:56.556172 kernel: DMAR: IOMMU feature prs inconsistent Feb 9 07:52:56.556178 kernel: DMAR: IOMMU feature nest inconsistent Feb 9 07:52:56.556183 kernel: DMAR: IOMMU feature mts inconsistent Feb 9 07:52:56.556188 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 9 07:52:56.556194 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 9 07:52:56.556199 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 07:52:56.556204 kernel: DMAR: dmar1: Using Queued invalidation Feb 9 07:52:56.556245 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 07:52:56.556287 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 07:52:56.556330 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 9 07:52:56.556371 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 9 07:52:56.556411 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 9 07:52:56.556451 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 9 07:52:56.556491 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 9 07:52:56.556531 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 9 07:52:56.556599 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 9 07:52:56.556658 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 9 07:52:56.556700 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 9 07:52:56.556742 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 9 07:52:56.556783 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 9 07:52:56.556823 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 9 07:52:56.556862 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 9 07:52:56.556904 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 9 07:52:56.556944 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 9 
07:52:56.556984 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 9 07:52:56.557024 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 9 07:52:56.557066 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 9 07:52:56.557106 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 9 07:52:56.557148 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 9 07:52:56.557188 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 9 07:52:56.557229 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 9 07:52:56.557271 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 9 07:52:56.557314 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 07:52:56.557357 kernel: pci 0000:05:00.0: Adding to iommu group 17 Feb 9 07:52:56.557401 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 9 07:52:56.557446 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 9 07:52:56.557453 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 07:52:56.557459 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 07:52:56.557464 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 9 07:52:56.557469 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 9 07:52:56.557475 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 07:52:56.557480 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 07:52:56.557486 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 07:52:56.557491 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 9 07:52:56.557536 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 07:52:56.557544 kernel: Initialise system trusted keyrings Feb 9 07:52:56.557551 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 07:52:56.557556 kernel: Key type asymmetric registered Feb 9 07:52:56.557582 kernel: Asymmetric key parser 'x509' registered Feb 9 07:52:56.557587 kernel: Block layer 
SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 07:52:56.557594 kernel: io scheduler mq-deadline registered Feb 9 07:52:56.557600 kernel: io scheduler kyber registered Feb 9 07:52:56.557624 kernel: io scheduler bfq registered Feb 9 07:52:56.557666 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 9 07:52:56.557708 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 9 07:52:56.557749 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 9 07:52:56.557790 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 9 07:52:56.557830 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 9 07:52:56.557874 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 9 07:52:56.557915 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 9 07:52:56.557960 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 07:52:56.557968 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 07:52:56.557973 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 07:52:56.557978 kernel: pstore: Registered erst as persistent store backend Feb 9 07:52:56.557984 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 07:52:56.557989 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 07:52:56.557995 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 07:52:56.558000 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 07:52:56.558041 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 07:52:56.558049 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 07:52:56.558086 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 07:52:56.558123 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 07:52:56.558160 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T07:52:55 UTC (1707465175) Feb 9 07:52:56.558196 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 07:52:56.558205 kernel: fail to initialize ptp_kvm Feb 9 07:52:56.558210 kernel: intel_pstate: Intel P-state driver initializing Feb 9 07:52:56.558215 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 07:52:56.558221 kernel: intel_pstate: HWP enabled Feb 9 07:52:56.558226 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 07:52:56.558231 kernel: vesafb: scrolling: redraw Feb 9 07:52:56.558236 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 07:52:56.558241 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x000000006920125c, using 768k, total 768k Feb 9 07:52:56.558247 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 07:52:56.558253 kernel: fb0: VESA VGA frame buffer device Feb 9 07:52:56.558258 kernel: NET: Registered PF_INET6 protocol family Feb 9 07:52:56.558263 kernel: Segment Routing with IPv6 Feb 9 07:52:56.558268 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 07:52:56.558273 kernel: NET: Registered PF_PACKET protocol family Feb 9 07:52:56.558279 kernel: Key type dns_resolver registered Feb 9 07:52:56.558284 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 07:52:56.558289 kernel: microcode: Microcode Update Driver: v2.2. 
Feb 9 07:52:56.558295 kernel: IPI shorthand broadcast: enabled Feb 9 07:52:56.558300 kernel: sched_clock: Marking stable (2315614948, 1353728048)->(4616535094, -947192098) Feb 9 07:52:56.558305 kernel: registered taskstats version 1 Feb 9 07:52:56.558310 kernel: Loading compiled-in X.509 certificates Feb 9 07:52:56.558315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 07:52:56.558321 kernel: Key type .fscrypt registered Feb 9 07:52:56.558326 kernel: Key type fscrypt-provisioning registered Feb 9 07:52:56.558331 kernel: pstore: Using crash dump compression: deflate Feb 9 07:52:56.558336 kernel: ima: Allocated hash algorithm: sha1 Feb 9 07:52:56.558342 kernel: ima: No architecture policies found Feb 9 07:52:56.558347 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 07:52:56.558352 kernel: Write protecting the kernel read-only data: 28672k Feb 9 07:52:56.558357 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 07:52:56.558362 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 07:52:56.558368 kernel: Run /init as init process Feb 9 07:52:56.558373 kernel: with arguments: Feb 9 07:52:56.558378 kernel: /init Feb 9 07:52:56.558383 kernel: with environment: Feb 9 07:52:56.558389 kernel: HOME=/ Feb 9 07:52:56.558394 kernel: TERM=linux Feb 9 07:52:56.558399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 07:52:56.558405 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 07:52:56.558412 systemd[1]: Detected architecture x86-64. Feb 9 07:52:56.558417 systemd[1]: Running in initrd. 
Feb 9 07:52:56.558423 systemd[1]: No hostname configured, using default hostname. Feb 9 07:52:56.558428 systemd[1]: Hostname set to . Feb 9 07:52:56.558434 systemd[1]: Initializing machine ID from random generator. Feb 9 07:52:56.558440 systemd[1]: Queued start job for default target initrd.target. Feb 9 07:52:56.558445 systemd[1]: Started systemd-ask-password-console.path. Feb 9 07:52:56.558451 systemd[1]: Reached target cryptsetup.target. Feb 9 07:52:56.558456 systemd[1]: Reached target paths.target. Feb 9 07:52:56.558461 systemd[1]: Reached target slices.target. Feb 9 07:52:56.558466 systemd[1]: Reached target swap.target. Feb 9 07:52:56.558472 systemd[1]: Reached target timers.target. Feb 9 07:52:56.558478 systemd[1]: Listening on iscsid.socket. Feb 9 07:52:56.558483 systemd[1]: Listening on iscsiuio.socket. Feb 9 07:52:56.558489 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 07:52:56.558494 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 07:52:56.558500 systemd[1]: Listening on systemd-journald.socket. Feb 9 07:52:56.558505 systemd[1]: Listening on systemd-networkd.socket. Feb 9 07:52:56.558510 kernel: tsc: Refined TSC clocksource calibration: 3408.018 MHz Feb 9 07:52:56.558516 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe4e33bb, max_idle_ns: 440795249257 ns Feb 9 07:52:56.558522 kernel: clocksource: Switched to clocksource tsc Feb 9 07:52:56.558527 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 07:52:56.558532 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 07:52:56.558538 systemd[1]: Reached target sockets.target. Feb 9 07:52:56.558543 systemd[1]: Starting kmod-static-nodes.service... Feb 9 07:52:56.558551 systemd[1]: Finished network-cleanup.service. Feb 9 07:52:56.558578 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 07:52:56.558584 systemd[1]: Starting systemd-journald.service... Feb 9 07:52:56.558590 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 07:52:56.558617 systemd-journald[268]: Journal started Feb 9 07:52:56.558641 systemd-journald[268]: Runtime Journal (/run/log/journal/fd80c226985b4669b830d30032769700) is 8.0M, max 636.8M, 628.8M free. Feb 9 07:52:56.561061 systemd-modules-load[269]: Inserted module 'overlay' Feb 9 07:52:56.567000 audit: BPF prog-id=6 op=LOAD Feb 9 07:52:56.585552 kernel: audit: type=1334 audit(1707465176.567:2): prog-id=6 op=LOAD Feb 9 07:52:56.585565 systemd[1]: Starting systemd-resolved.service... Feb 9 07:52:56.633553 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 07:52:56.633585 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 07:52:56.665553 kernel: Bridge firewalling registered Feb 9 07:52:56.665583 systemd[1]: Started systemd-journald.service. Feb 9 07:52:56.680176 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 9 07:52:56.728428 kernel: audit: type=1130 audit(1707465176.688:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.685818 systemd-resolved[271]: Positive Trust Anchors: Feb 9 07:52:56.803422 kernel: SCSI subsystem initialized Feb 9 07:52:56.803437 kernel: audit: type=1130 audit(1707465176.740:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.803445 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 9 07:52:56.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.685824 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 07:52:56.902903 kernel: device-mapper: uevent: version 1.0.3 Feb 9 07:52:56.902914 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 07:52:56.902922 kernel: audit: type=1130 audit(1707465176.860:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.685843 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 07:52:57.001760 kernel: audit: type=1130 audit(1707465176.912:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:52:56.687394 systemd-resolved[271]: Defaulting to hostname 'linux'. Feb 9 07:52:57.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.688787 systemd[1]: Started systemd-resolved.service. Feb 9 07:52:57.114763 kernel: audit: type=1130 audit(1707465177.009:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.114777 kernel: audit: type=1130 audit(1707465177.062:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:56.740737 systemd[1]: Finished kmod-static-nodes.service. Feb 9 07:52:56.860799 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 07:52:56.903360 systemd-modules-load[269]: Inserted module 'dm_multipath' Feb 9 07:52:56.912831 systemd[1]: Finished systemd-modules-load.service. Feb 9 07:52:57.009935 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 07:52:57.062803 systemd[1]: Reached target nss-lookup.target. Feb 9 07:52:57.124264 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 07:52:57.131340 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:52:57.131769 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 07:52:57.134799 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 07:52:57.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.135532 systemd[1]: Finished systemd-sysctl.service. Feb 9 07:52:57.253770 kernel: audit: type=1130 audit(1707465177.134:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.253785 kernel: audit: type=1130 audit(1707465177.197:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.197885 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 07:52:57.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.263133 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 07:52:57.285658 dracut-cmdline[294]: dracut-dracut-053 Feb 9 07:52:57.285658 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 07:52:57.285658 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 07:52:57.353636 kernel: Loading iSCSI transport class v2.0-870. Feb 9 07:52:57.353653 kernel: iscsi: registered transport (tcp) Feb 9 07:52:57.403285 kernel: iscsi: registered transport (qla4xxx) Feb 9 07:52:57.403302 kernel: QLogic iSCSI HBA Driver Feb 9 07:52:57.419554 systemd[1]: Finished dracut-cmdline.service. Feb 9 07:52:57.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:57.429235 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 07:52:57.484614 kernel: raid6: avx2x4 gen() 48785 MB/s Feb 9 07:52:57.519613 kernel: raid6: avx2x4 xor() 22311 MB/s Feb 9 07:52:57.554613 kernel: raid6: avx2x2 gen() 54682 MB/s Feb 9 07:52:57.589580 kernel: raid6: avx2x2 xor() 32731 MB/s Feb 9 07:52:57.624608 kernel: raid6: avx2x1 gen() 45990 MB/s Feb 9 07:52:57.658584 kernel: raid6: avx2x1 xor() 28461 MB/s Feb 9 07:52:57.692612 kernel: raid6: sse2x4 gen() 21770 MB/s Feb 9 07:52:57.726612 kernel: raid6: sse2x4 xor() 11967 MB/s Feb 9 07:52:57.760612 kernel: raid6: sse2x2 gen() 22104 MB/s Feb 9 07:52:57.794587 kernel: raid6: sse2x2 xor() 13680 MB/s Feb 9 07:52:57.828612 kernel: raid6: sse2x1 gen() 18587 MB/s Feb 9 07:52:57.880231 kernel: raid6: sse2x1 xor() 9109 MB/s Feb 9 07:52:57.880246 kernel: raid6: using algorithm avx2x2 gen() 54682 MB/s Feb 9 07:52:57.880254 kernel: raid6: .... xor() 32731 MB/s, rmw enabled Feb 9 07:52:57.898328 kernel: raid6: using avx2x2 recovery algorithm Feb 9 07:52:57.944565 kernel: xor: automatically using best checksumming function avx Feb 9 07:52:58.022583 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 07:52:58.027212 systemd[1]: Finished dracut-pre-udev.service. Feb 9 07:52:58.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:58.035000 audit: BPF prog-id=7 op=LOAD Feb 9 07:52:58.035000 audit: BPF prog-id=8 op=LOAD Feb 9 07:52:58.036507 systemd[1]: Starting systemd-udevd.service... Feb 9 07:52:58.044207 systemd-udevd[474]: Using default interface naming scheme 'v252'. Feb 9 07:52:58.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:58.050715 systemd[1]: Started systemd-udevd.service. 
Feb 9 07:52:58.090668 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Feb 9 07:52:58.067163 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 07:52:58.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:58.096005 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 07:52:58.108670 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 07:52:58.158948 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 07:52:58.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:52:58.182557 kernel: ACPI: bus type USB registered Feb 9 07:52:58.182593 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 07:52:58.182606 kernel: libata version 3.00 loaded. Feb 9 07:52:58.214539 kernel: usbcore: registered new interface driver usbfs Feb 9 07:52:58.233554 kernel: usbcore: registered new interface driver hub Feb 9 07:52:58.233577 kernel: usbcore: registered new device driver usb Feb 9 07:52:58.302428 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 9 07:52:58.302455 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 9 07:52:58.309560 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 07:52:58.309602 kernel: ahci 0000:00:17.0: version 3.0 Feb 9 07:52:58.309713 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 07:52:58.309791 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 9 07:52:58.310555 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 9 07:52:58.311582 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 9 07:52:58.311666 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 9 07:52:58.311738 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 9 07:52:58.311805 kernel: hub 1-0:1.0: USB hub found Feb 9 07:52:58.311879 kernel: hub 1-0:1.0: 16 ports detected Feb 9 07:52:58.311942 kernel: hub 2-0:1.0: USB hub found Feb 9 07:52:58.312013 kernel: hub 2-0:1.0: 10 ports detected Feb 9 07:52:58.312093 kernel: usb: port power management may be unreliable Feb 9 07:52:58.380180 kernel: pps pps0: new PPS source ptp0 Feb 9 07:52:58.380376 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 9 07:52:58.380516 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 9 07:52:58.414595 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 9 07:52:58.414802 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 07:52:58.447555 kernel: AES CTR mode by8 optimization enabled Feb 9 07:52:58.474367 kernel: scsi host0: ahci Feb 9 07:52:58.474414 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:66 Feb 9 07:52:58.474513 kernel: scsi host1: ahci Feb 9 07:52:58.501821 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 9 07:52:58.514344 kernel: scsi host2: ahci Feb 9 07:52:58.514368 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 9 07:52:58.575612 kernel: pps pps1: new PPS source ptp1 Feb 9 07:52:58.575687 kernel: scsi host3: ahci Feb 9 07:52:58.575702 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 9 07:52:58.575716 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 9 07:52:58.597939 kernel: scsi host4: ahci Feb 9 07:52:58.612375 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 9 07:52:58.612445 kernel: scsi host5: ahci Feb 9 07:52:58.622368 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:67 Feb 9 07:52:58.645038 kernel: scsi host6: ahci Feb 9 07:52:58.660226 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 9 07:52:58.660592 kernel: scsi host7: ahci Feb 9 07:52:58.682416 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 9 07:52:58.682499 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 135 Feb 9 07:52:58.720542 kernel: hub 1-14:1.0: USB hub found Feb 9 07:52:58.720626 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 135 Feb 9 07:52:58.720635 kernel: hub 1-14:1.0: 4 ports detected Feb 9 07:52:58.747101 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 135 Feb 9 07:52:58.912875 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 135 Feb 9 07:52:58.912892 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 135 Feb 9 07:52:58.912900 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 135 Feb 9 07:52:58.946597 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 135 Feb 9 07:52:58.946627 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 135 Feb 9 07:52:59.015917 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 9 07:52:59.015993 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 07:52:59.084607 kernel: usb 
1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 9 07:52:59.214623 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 07:52:59.266585 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.266602 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 07:52:59.281555 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 07:52:59.281640 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.315553 kernel: port_module: 8 callbacks suppressed Feb 9 07:52:59.315571 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Feb 9 07:52:59.315648 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.347580 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 07:52:59.347656 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 9 07:52:59.404591 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.420585 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 07:52:59.437587 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.453599 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 9 07:52:59.470602 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 9 07:52:59.520211 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 07:52:59.520229 kernel: ata2.00: Features: NCQ-prio Feb 9 07:52:59.520237 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 9 07:52:59.550945 kernel: ata1.00: Features: NCQ-prio Feb 9 07:52:59.570583 kernel: ata2.00: configured for UDMA/133 Feb 9 07:52:59.570599 kernel: ata1.00: configured for UDMA/133 Feb 9 07:52:59.584613 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 07:52:59.584684 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 
07:52:59.621553 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 9 07:52:59.621626 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 9 07:52:59.655874 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 9 07:52:59.695556 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 9 07:52:59.695647 kernel: usbcore: registered new interface driver usbhid Feb 9 07:52:59.725351 kernel: usbhid: USB HID core driver Feb 9 07:52:59.761555 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 9 07:52:59.778554 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 9 07:52:59.778678 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:52:59.794088 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:52:59.809303 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 07:52:59.809385 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 9 07:52:59.814614 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 9 07:52:59.814700 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 9 07:52:59.814710 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 9 07:52:59.845234 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Feb 9 07:52:59.845309 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Feb 9 07:52:59.878052 kernel: sd 1:0:0:0: [sda] Write Protect is off Feb 9 07:52:59.911793 kernel: sd 0:0:0:0: [sdb] Write Protect is off Feb 9 07:52:59.947066 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 9 07:52:59.957600 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 9 07:52:59.961599 
kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 9 07:52:59.964483 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 9 07:52:59.964563 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 07:52:59.981999 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 9 07:52:59.983552 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 07:53:00.135184 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:53:00.151823 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:53:00.168338 kernel: ata2.00: Enabling discard_zeroes_data Feb 9 07:53:00.168350 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Feb 9 07:53:00.179613 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 07:53:00.179629 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 9 07:53:00.239200 kernel: GPT:9289727 != 937703087 Feb 9 07:53:00.239215 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 07:53:00.256755 kernel: GPT:9289727 != 937703087 Feb 9 07:53:00.271690 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 07:53:00.288158 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 07:53:00.304568 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:53:00.337554 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Feb 9 07:53:00.357556 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Feb 9 07:53:00.367678 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 07:53:00.430788 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (530) Feb 9 07:53:00.430805 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Feb 9 07:53:00.412808 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Feb 9 07:53:00.415547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 07:53:00.443659 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 07:53:00.468038 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 07:53:00.535676 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:53:00.535701 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 07:53:00.486408 systemd[1]: Starting disk-uuid.service... Feb 9 07:53:00.553652 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:53:00.553713 disk-uuid[693]: Primary Header is updated. Feb 9 07:53:00.553713 disk-uuid[693]: Secondary Entries is updated. Feb 9 07:53:00.553713 disk-uuid[693]: Secondary Header is updated. Feb 9 07:53:00.594624 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 07:53:01.561216 kernel: ata1.00: Enabling discard_zeroes_data Feb 9 07:53:01.579528 disk-uuid[695]: The operation has completed successfully. Feb 9 07:53:01.587749 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Feb 9 07:53:01.617319 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 07:53:01.711150 kernel: audit: type=1130 audit(1707465181.624:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:01.711169 kernel: audit: type=1131 audit(1707465181.624:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:01.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:01.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:01.617377 systemd[1]: Finished disk-uuid.service. Feb 9 07:53:01.740637 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 07:53:01.625236 systemd[1]: Starting verity-setup.service... Feb 9 07:53:01.794846 systemd[1]: Found device dev-mapper-usr.device. Feb 9 07:53:01.807000 systemd[1]: Mounting sysusr-usr.mount... Feb 9 07:53:01.819168 systemd[1]: Finished verity-setup.service. Feb 9 07:53:01.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:01.886555 kernel: audit: type=1130 audit(1707465181.834:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:01.940596 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 07:53:01.940810 systemd[1]: Mounted sysusr-usr.mount. Feb 9 07:53:01.948829 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 07:53:02.036573 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 07:53:02.036589 kernel: BTRFS info (device sdb6): using free space tree Feb 9 07:53:02.036596 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 07:53:02.036603 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 07:53:01.949216 systemd[1]: Starting ignition-setup.service... Feb 9 07:53:01.969957 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 07:53:02.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.044987 systemd[1]: Finished ignition-setup.service. 
Feb 9 07:53:02.159780 kernel: audit: type=1130 audit(1707465182.061:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.159869 kernel: audit: type=1130 audit(1707465182.114:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.061890 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 07:53:02.189181 kernel: audit: type=1334 audit(1707465182.168:24): prog-id=9 op=LOAD Feb 9 07:53:02.168000 audit: BPF prog-id=9 op=LOAD Feb 9 07:53:02.115203 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 07:53:02.169353 systemd[1]: Starting systemd-networkd.service... Feb 9 07:53:02.258575 kernel: audit: type=1130 audit(1707465182.212:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.233790 ignition[869]: Ignition 2.14.0 Feb 9 07:53:02.202781 systemd-networkd[880]: lo: Link UP Feb 9 07:53:02.233794 ignition[869]: Stage: fetch-offline Feb 9 07:53:02.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:02.202783 systemd-networkd[880]: lo: Gained carrier Feb 9 07:53:02.355627 kernel: audit: type=1130 audit(1707465182.293:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.233821 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:02.398693 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 07:53:02.398777 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 9 07:53:02.203063 systemd-networkd[880]: Enumeration completed Feb 9 07:53:02.233835 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:02.203138 systemd[1]: Started systemd-networkd.service. Feb 9 07:53:02.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.236492 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:02.440750 iscsid[907]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 07:53:02.440750 iscsid[907]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 07:53:02.440750 iscsid[907]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 07:53:02.440750 iscsid[907]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 07:53:02.440750 iscsid[907]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 07:53:02.440750 iscsid[907]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 07:53:02.440750 iscsid[907]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 07:53:02.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:02.203726 systemd-networkd[880]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:53:02.236559 ignition[869]: parsed url from cmdline: "" Feb 9 07:53:02.212650 systemd[1]: Reached target network.target. Feb 9 07:53:02.634670 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 07:53:02.236561 ignition[869]: no config URL provided Feb 9 07:53:02.261065 unknown[869]: fetched base config from "system" Feb 9 07:53:02.236564 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 07:53:02.261069 unknown[869]: fetched user config from "system" Feb 9 07:53:02.241807 ignition[869]: parsing config with SHA512: 6813d97c2697c340f9b999ea6bdd8ec7643e57a8b976f2e5e4b29d325e6b87b5c01336cd130f9d1c2ed502258e0ab63b3bee112cff41a900085d7772a3c7ca74 Feb 9 07:53:02.267226 systemd[1]: Starting iscsiuio.service... Feb 9 07:53:02.261717 ignition[869]: fetch-offline: fetch-offline passed Feb 9 07:53:02.279821 systemd[1]: Started iscsiuio.service. 
Feb 9 07:53:02.261720 ignition[869]: POST message to Packet Timeline Feb 9 07:53:02.293870 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 07:53:02.261725 ignition[869]: POST Status error: resource requires networking Feb 9 07:53:02.348678 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 07:53:02.261757 ignition[869]: Ignition finished successfully Feb 9 07:53:02.349149 systemd[1]: Starting ignition-kargs.service... Feb 9 07:53:02.353816 ignition[897]: Ignition 2.14.0 Feb 9 07:53:02.368828 systemd[1]: Starting iscsid.service... Feb 9 07:53:02.353819 ignition[897]: Stage: kargs Feb 9 07:53:02.383322 systemd-networkd[880]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:53:02.353874 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:02.411721 systemd[1]: Started iscsid.service. Feb 9 07:53:02.353883 ignition[897]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:02.426082 systemd[1]: Starting dracut-initqueue.service... Feb 9 07:53:02.355189 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:02.440719 systemd[1]: Finished dracut-initqueue.service. Feb 9 07:53:02.356686 ignition[897]: kargs: kargs passed Feb 9 07:53:02.455684 systemd[1]: Reached target remote-fs-pre.target. Feb 9 07:53:02.356689 ignition[897]: POST message to Packet Timeline Feb 9 07:53:02.499689 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 07:53:02.356699 ignition[897]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:53:02.499734 systemd[1]: Reached target remote-fs.target. 
Feb 9 07:53:02.358760 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59101->[::1]:53: read: connection refused Feb 9 07:53:02.528191 systemd[1]: Starting dracut-pre-mount.service... Feb 9 07:53:02.559207 ignition[897]: GET https://metadata.packet.net/metadata: attempt #2 Feb 9 07:53:02.549828 systemd[1]: Finished dracut-pre-mount.service. Feb 9 07:53:02.559576 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44579->[::1]:53: read: connection refused Feb 9 07:53:02.628489 systemd-networkd[880]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:53:02.656776 systemd-networkd[880]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 07:53:02.685387 systemd-networkd[880]: enp2s0f1np1: Link UP Feb 9 07:53:02.685643 systemd-networkd[880]: enp2s0f1np1: Gained carrier Feb 9 07:53:02.697984 systemd-networkd[880]: enp2s0f0np0: Link UP Feb 9 07:53:02.960225 ignition[897]: GET https://metadata.packet.net/metadata: attempt #3 Feb 9 07:53:02.698318 systemd-networkd[880]: eno2: Link UP Feb 9 07:53:02.961283 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42344->[::1]:53: read: connection refused Feb 9 07:53:02.698641 systemd-networkd[880]: eno1: Link UP Feb 9 07:53:03.442775 systemd-networkd[880]: enp2s0f0np0: Gained carrier Feb 9 07:53:03.452660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 9 07:53:03.492749 systemd-networkd[880]: enp2s0f0np0: DHCPv4 address 139.178.90.113/31, gateway 139.178.90.112 acquired from 145.40.83.140 Feb 9 07:53:03.678028 systemd-networkd[880]: enp2s0f1np1: Gained IPv6LL Feb 9 07:53:03.761777 ignition[897]: GET https://metadata.packet.net/metadata: attempt #4 Feb 9 07:53:03.763214 ignition[897]: GET error: Get 
"https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52656->[::1]:53: read: connection refused Feb 9 07:53:05.214056 systemd-networkd[880]: enp2s0f0np0: Gained IPv6LL Feb 9 07:53:05.364865 ignition[897]: GET https://metadata.packet.net/metadata: attempt #5 Feb 9 07:53:05.366149 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52711->[::1]:53: read: connection refused Feb 9 07:53:08.569581 ignition[897]: GET https://metadata.packet.net/metadata: attempt #6 Feb 9 07:53:08.610285 ignition[897]: GET result: OK Feb 9 07:53:08.823937 ignition[897]: Ignition finished successfully Feb 9 07:53:08.835147 systemd[1]: Finished ignition-kargs.service. Feb 9 07:53:08.919966 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 07:53:08.919982 kernel: audit: type=1130 audit(1707465188.850:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:08.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:08.859909 ignition[926]: Ignition 2.14.0 Feb 9 07:53:08.852869 systemd[1]: Starting ignition-disks.service... 
Feb 9 07:53:08.859912 ignition[926]: Stage: disks Feb 9 07:53:08.859979 ignition[926]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:08.859990 ignition[926]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:08.861582 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:08.863485 ignition[926]: disks: disks passed Feb 9 07:53:08.863488 ignition[926]: POST message to Packet Timeline Feb 9 07:53:08.863498 ignition[926]: GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:53:08.886473 ignition[926]: GET result: OK Feb 9 07:53:09.075081 ignition[926]: Ignition finished successfully Feb 9 07:53:09.078356 systemd[1]: Finished ignition-disks.service. Feb 9 07:53:09.143572 kernel: audit: type=1130 audit(1707465189.091:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.092215 systemd[1]: Reached target initrd-root-device.target. Feb 9 07:53:09.151787 systemd[1]: Reached target local-fs-pre.target. Feb 9 07:53:09.165779 systemd[1]: Reached target local-fs.target. Feb 9 07:53:09.179804 systemd[1]: Reached target sysinit.target. Feb 9 07:53:09.195753 systemd[1]: Reached target basic.target. Feb 9 07:53:09.209515 systemd[1]: Starting systemd-fsck-root.service... Feb 9 07:53:09.229456 systemd-fsck[943]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 07:53:09.241041 systemd[1]: Finished systemd-fsck-root.service. 
Feb 9 07:53:09.327947 kernel: audit: type=1130 audit(1707465189.249:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.327962 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 07:53:09.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.255276 systemd[1]: Mounting sysroot.mount... Feb 9 07:53:09.335204 systemd[1]: Mounted sysroot.mount. Feb 9 07:53:09.349824 systemd[1]: Reached target initrd-root-fs.target. Feb 9 07:53:09.358394 systemd[1]: Mounting sysroot-usr.mount... Feb 9 07:53:09.373365 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 07:53:09.385204 systemd[1]: Starting flatcar-static-network.service... Feb 9 07:53:09.407675 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 07:53:09.407711 systemd[1]: Reached target ignition-diskful.target. Feb 9 07:53:09.427755 systemd[1]: Mounted sysroot-usr.mount. Feb 9 07:53:09.451078 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 07:53:09.584695 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (956) Feb 9 07:53:09.584716 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 07:53:09.584724 kernel: BTRFS info (device sdb6): using free space tree Feb 9 07:53:09.584732 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 07:53:09.584739 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 07:53:09.584801 coreos-metadata[951]: Feb 09 07:53:09.548 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 07:53:09.656192 kernel: audit: type=1130 audit(1707465189.602:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.656232 coreos-metadata[950]: Feb 09 07:53:09.548 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 07:53:09.656232 coreos-metadata[950]: Feb 09 07:53:09.591 INFO Fetch successful Feb 9 07:53:09.656232 coreos-metadata[950]: Feb 09 07:53:09.608 INFO wrote hostname ci-3510.3.2-a-d9875e643b to /sysroot/etc/hostname Feb 9 07:53:09.747286 kernel: audit: type=1130 audit(1707465189.693:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.465685 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 07:53:09.867529 kernel: audit: type=1130 audit(1707465189.755:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.867568 kernel: audit: type=1131 audit(1707465189.755:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.867718 coreos-metadata[951]: Feb 09 07:53:09.592 INFO Fetch successful Feb 9 07:53:09.549616 systemd[1]: Finished initrd-setup-root.service. Feb 9 07:53:09.897679 initrd-setup-root[961]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 07:53:09.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.603717 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 07:53:09.969776 kernel: audit: type=1130 audit(1707465189.905:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:09.969840 initrd-setup-root[969]: cut: /sysroot/etc/group: No such file or directory Feb 9 07:53:09.664874 systemd[1]: Finished flatcar-metadata-hostname.service. 
Feb 9 07:53:09.987798 initrd-setup-root[977]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 07:53:09.693824 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 9 07:53:10.007724 ignition[1026]: INFO : Ignition 2.14.0 Feb 9 07:53:10.007724 ignition[1026]: INFO : Stage: mount Feb 9 07:53:10.007724 ignition[1026]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:10.007724 ignition[1026]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:10.007724 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:10.007724 ignition[1026]: INFO : mount: mount passed Feb 9 07:53:10.007724 ignition[1026]: INFO : POST message to Packet Timeline Feb 9 07:53:10.007724 ignition[1026]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:53:10.007724 ignition[1026]: INFO : GET result: OK Feb 9 07:53:10.095807 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 07:53:09.693866 systemd[1]: Finished flatcar-static-network.service. Feb 9 07:53:09.756157 systemd[1]: Starting ignition-mount.service... Feb 9 07:53:09.875098 systemd[1]: Starting sysroot-boot.service... Feb 9 07:53:09.889970 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 07:53:09.890016 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 07:53:09.892156 systemd[1]: Finished sysroot-boot.service. Feb 9 07:53:10.329888 ignition[1026]: INFO : Ignition finished successfully Feb 9 07:53:10.332600 systemd[1]: Finished ignition-mount.service. Feb 9 07:53:10.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:10.349747 systemd[1]: Starting ignition-files.service... Feb 9 07:53:10.420657 kernel: audit: type=1130 audit(1707465190.347:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:10.414478 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 07:53:10.477187 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1039) Feb 9 07:53:10.477202 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Feb 9 07:53:10.477210 kernel: BTRFS info (device sdb6): using free space tree Feb 9 07:53:10.500215 kernel: BTRFS info (device sdb6): has skinny extents Feb 9 07:53:10.548553 kernel: BTRFS info (device sdb6): enabling ssd optimizations Feb 9 07:53:10.549982 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 07:53:10.566679 ignition[1058]: INFO : Ignition 2.14.0 Feb 9 07:53:10.566679 ignition[1058]: INFO : Stage: files Feb 9 07:53:10.566679 ignition[1058]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:10.566679 ignition[1058]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:10.566679 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:10.570852 unknown[1058]: wrote ssh authorized keys file for user: core Feb 9 07:53:10.632787 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Feb 9 07:53:10.632787 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 07:53:10.632787 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 07:53:10.632787 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 
07:53:10.632787 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 07:53:10.632787 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 07:53:10.632787 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 07:53:10.632787 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 07:53:11.065666 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 07:53:11.115551 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 07:53:11.132774 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 07:53:11.132774 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 07:53:11.645337 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 07:53:11.724114 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 07:53:11.724114 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 07:53:11.767768 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 07:53:11.767768 ignition[1058]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 07:53:12.117196 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 07:53:12.166859 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 07:53:12.191791 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 07:53:12.191791 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 07:53:12.191791 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 07:53:12.242668 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 07:53:12.541183 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 07:53:12.541183 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 07:53:12.541183 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 07:53:12.605785 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 07:53:12.605785 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 07:53:13.028668 ignition[1058]: DEBUG : files: 
createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 07:53:13.053785 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 07:53:13.053785 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 07:53:13.053785 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 07:53:13.084767 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 07:53:13.232982 ignition[1058]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 07:53:13.232982 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 07:53:13.273781 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 07:53:13.273781 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 07:53:13.273781 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 07:53:13.273781 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 07:53:13.617914 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 07:53:13.643158 ignition[1058]: INFO : 
files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 07:53:13.643158 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 07:53:13.690785 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1079) Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 07:53:13.690847 ignition[1058]: INFO : files: 
createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3010194541" Feb 9 07:53:13.690847 ignition[1058]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3010194541": device or resource busy Feb 9 07:53:13.690847 ignition[1058]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3010194541", trying btrfs: device or resource busy Feb 9 07:53:13.690847 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3010194541" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3010194541" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3010194541" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3010194541" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(15): [started] processing unit "packet-phone-home.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service" Feb 9 
07:53:13.945891 ignition[1058]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(18): [started] processing unit "prepare-critools.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(18): [finished] processing unit "prepare-critools.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(1a): [started] processing unit "prepare-helm.service" Feb 9 07:53:13.945891 ignition[1058]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 07:53:14.723491 kernel: audit: type=1130 audit(1707465193.953:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723511 kernel: audit: type=1130 audit(1707465194.069:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:14.723522 kernel: audit: type=1130 audit(1707465194.135:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723535 kernel: audit: type=1131 audit(1707465194.135:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723543 kernel: audit: type=1130 audit(1707465194.298:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723555 kernel: audit: type=1131 audit(1707465194.298:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723563 kernel: audit: type=1130 audit(1707465194.490:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.723571 kernel: audit: type=1131 audit(1707465194.655:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:13.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:14.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:13.935287 systemd[1]: Finished ignition-files.service. 
Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1d): [started] setting preset to enabled for "packet-phone-home.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1d): [finished] setting preset to enabled for "packet-phone-home.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 07:53:14.737771 ignition[1058]: INFO : files: files passed Feb 9 07:53:14.737771 ignition[1058]: INFO : POST message 
to Packet Timeline Feb 9 07:53:14.737771 ignition[1058]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:53:14.737771 ignition[1058]: INFO : GET result: OK Feb 9 07:53:14.737771 ignition[1058]: INFO : Ignition finished successfully Feb 9 07:53:15.227837 kernel: audit: type=1131 audit(1707465194.964:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.227920 kernel: audit: type=1131 audit(1707465195.064:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:13.960051 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 07:53:15.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:14.019812 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 07:53:15.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.273811 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 07:53:14.020119 systemd[1]: Starting ignition-quench.service... Feb 9 07:53:15.309677 iscsid[907]: iscsid shutting down. Feb 9 07:53:14.043874 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 07:53:15.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.069945 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 07:53:15.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.070017 systemd[1]: Finished ignition-quench.service. Feb 9 07:53:14.135790 systemd[1]: Reached target ignition-complete.target. Feb 9 07:53:15.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.261380 systemd[1]: Starting initrd-parse-etc.service... Feb 9 07:53:15.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.286559 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 9 07:53:15.411951 ignition[1107]: INFO : Ignition 2.14.0 Feb 9 07:53:15.411951 ignition[1107]: INFO : Stage: umount Feb 9 07:53:15.411951 ignition[1107]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 07:53:15.411951 ignition[1107]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 07:53:15.411951 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 07:53:15.411951 ignition[1107]: INFO : umount: umount passed Feb 9 07:53:15.411951 ignition[1107]: INFO : POST message to Packet Timeline Feb 9 07:53:15.411951 ignition[1107]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 07:53:15.411951 ignition[1107]: INFO : GET result: OK Feb 9 07:53:15.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.286608 systemd[1]: Finished initrd-parse-etc.service. Feb 9 07:53:15.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:15.569038 ignition[1107]: INFO : Ignition finished successfully Feb 9 07:53:15.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.298856 systemd[1]: Reached target initrd-fs.target. Feb 9 07:53:15.584000 audit: BPF prog-id=6 op=UNLOAD Feb 9 07:53:14.422762 systemd[1]: Reached target initrd.target. Feb 9 07:53:15.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.422820 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 07:53:15.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.423172 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 07:53:15.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.465354 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 07:53:15.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.492768 systemd[1]: Starting initrd-cleanup.service... Feb 9 07:53:14.559717 systemd[1]: Stopped target nss-lookup.target. Feb 9 07:53:15.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:14.592915 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 07:53:15.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.610021 systemd[1]: Stopped target timers.target. Feb 9 07:53:15.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.636012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 07:53:14.636223 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 07:53:15.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.656501 systemd[1]: Stopped target initrd.target. Feb 9 07:53:14.730914 systemd[1]: Stopped target basic.target. Feb 9 07:53:14.744930 systemd[1]: Stopped target ignition-complete.target. Feb 9 07:53:14.762984 systemd[1]: Stopped target ignition-diskful.target. Feb 9 07:53:15.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.789934 systemd[1]: Stopped target initrd-root-device.target. Feb 9 07:53:15.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.813026 systemd[1]: Stopped target remote-fs.target. Feb 9 07:53:15.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:14.836223 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 07:53:14.857269 systemd[1]: Stopped target sysinit.target. Feb 9 07:53:15.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.878257 systemd[1]: Stopped target local-fs.target. Feb 9 07:53:15.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.901222 systemd[1]: Stopped target local-fs-pre.target. Feb 9 07:53:15.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.924224 systemd[1]: Stopped target swap.target. Feb 9 07:53:15.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:14.944141 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 07:53:14.944510 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 07:53:14.965465 systemd[1]: Stopped target cryptsetup.target. Feb 9 07:53:15.041903 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 07:53:15.041986 systemd[1]: Stopped dracut-initqueue.service. Feb 9 07:53:15.064979 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 07:53:15.065051 systemd[1]: Stopped ignition-fetch-offline.service. 
Feb 9 07:53:15.133968 systemd[1]: Stopped target paths.target. Feb 9 07:53:15.149977 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 07:53:15.153770 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 07:53:15.163008 systemd[1]: Stopped target slices.target. Feb 9 07:53:15.178989 systemd[1]: Stopped target sockets.target. Feb 9 07:53:15.203009 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 07:53:15.203187 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 07:53:16.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:15.220261 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 07:53:15.220547 systemd[1]: Stopped ignition-files.service. Feb 9 07:53:15.237296 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 07:53:15.237684 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 07:53:15.259454 systemd[1]: Stopping ignition-mount.service... Feb 9 07:53:15.280860 systemd[1]: Stopping iscsid.service... Feb 9 07:53:15.302334 systemd[1]: Stopping sysroot-boot.service... Feb 9 07:53:15.317746 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 07:53:15.318046 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 07:53:15.334360 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 07:53:15.334701 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 07:53:15.357585 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 07:53:15.359391 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 07:53:15.359700 systemd[1]: Stopped iscsid.service. Feb 9 07:53:15.373165 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 07:53:15.373367 systemd[1]: Stopped sysroot-boot.service. 
Feb 9 07:53:15.388926 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 07:53:15.389150 systemd[1]: Closed iscsid.socket. Feb 9 07:53:15.402981 systemd[1]: Stopping iscsiuio.service... Feb 9 07:53:15.419271 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 07:53:15.419478 systemd[1]: Stopped iscsiuio.service. Feb 9 07:53:15.433266 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 07:53:15.433457 systemd[1]: Finished initrd-cleanup.service. Feb 9 07:53:15.449508 systemd[1]: Stopped target network.target. Feb 9 07:53:15.466803 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 07:53:15.466898 systemd[1]: Closed iscsiuio.socket. Feb 9 07:53:15.494120 systemd[1]: Stopping systemd-networkd.service... Feb 9 07:53:16.149576 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 9 07:53:15.499684 systemd-networkd[880]: enp2s0f1np1: DHCPv6 lease lost Feb 9 07:53:15.505111 systemd[1]: Stopping systemd-resolved.service... Feb 9 07:53:15.509675 systemd-networkd[880]: enp2s0f0np0: DHCPv6 lease lost Feb 9 07:53:15.525358 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 07:53:16.149000 audit: BPF prog-id=9 op=UNLOAD Feb 9 07:53:15.525594 systemd[1]: Stopped systemd-resolved.service. Feb 9 07:53:15.534118 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 07:53:15.534411 systemd[1]: Stopped systemd-networkd.service. Feb 9 07:53:15.561897 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 07:53:15.561954 systemd[1]: Stopped ignition-mount.service. Feb 9 07:53:15.577091 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 07:53:15.577178 systemd[1]: Closed systemd-networkd.socket. Feb 9 07:53:15.592824 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 07:53:15.592940 systemd[1]: Stopped ignition-disks.service. Feb 9 07:53:15.607856 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Feb 9 07:53:15.607968 systemd[1]: Stopped ignition-kargs.service. Feb 9 07:53:15.622857 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 07:53:15.622968 systemd[1]: Stopped ignition-setup.service. Feb 9 07:53:15.641938 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 07:53:15.642074 systemd[1]: Stopped initrd-setup-root.service. Feb 9 07:53:15.660495 systemd[1]: Stopping network-cleanup.service... Feb 9 07:53:15.672785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 07:53:15.672945 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 07:53:15.687929 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 07:53:15.688056 systemd[1]: Stopped systemd-sysctl.service. Feb 9 07:53:15.703234 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 07:53:15.703366 systemd[1]: Stopped systemd-modules-load.service. Feb 9 07:53:15.719162 systemd[1]: Stopping systemd-udevd.service... Feb 9 07:53:15.736200 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 07:53:15.737504 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 07:53:15.737808 systemd[1]: Stopped systemd-udevd.service. Feb 9 07:53:15.751179 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 07:53:15.751299 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 07:53:15.764933 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 07:53:15.765022 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 07:53:15.780637 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 07:53:15.780682 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 07:53:15.802834 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 07:53:15.802881 systemd[1]: Stopped dracut-cmdline.service. Feb 9 07:53:15.819777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 9 07:53:15.819835 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 07:53:15.837226 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 07:53:15.850737 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 07:53:15.850769 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 07:53:15.867038 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 07:53:15.867145 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 07:53:15.881775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 07:53:15.881886 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 07:53:15.899685 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 07:53:15.901143 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 07:53:15.901401 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 07:53:16.023730 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 07:53:16.023949 systemd[1]: Stopped network-cleanup.service. Feb 9 07:53:16.035108 systemd[1]: Reached target initrd-switch-root.target. Feb 9 07:53:16.053444 systemd[1]: Starting initrd-switch-root.service... Feb 9 07:53:16.090198 systemd[1]: Switching root. Feb 9 07:53:16.151561 systemd-journald[268]: Journal stopped Feb 9 07:53:20.093460 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 07:53:20.093473 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 07:53:20.093481 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 07:53:20.093486 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 07:53:20.093491 kernel: SELinux: policy capability open_perms=1 Feb 9 07:53:20.093496 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 07:53:20.093502 kernel: SELinux: policy capability always_check_network=0 Feb 9 07:53:20.093507 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 07:53:20.093513 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 07:53:20.093519 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 07:53:20.093524 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 07:53:20.093531 systemd[1]: Successfully loaded SELinux policy in 321.125ms. Feb 9 07:53:20.093537 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.072ms. Feb 9 07:53:20.093544 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 07:53:20.093553 systemd[1]: Detected architecture x86-64. Feb 9 07:53:20.093559 systemd[1]: Detected first boot. Feb 9 07:53:20.093588 systemd[1]: Hostname set to . Feb 9 07:53:20.093594 systemd[1]: Initializing machine ID from random generator. Feb 9 07:53:20.093617 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 07:53:20.093636 systemd[1]: Populated /etc with preset unit settings. Feb 9 07:53:20.093642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 07:53:20.093650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 07:53:20.093656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 07:53:20.093662 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 07:53:20.093668 systemd[1]: Stopped initrd-switch-root.service. Feb 9 07:53:20.093674 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 07:53:20.093680 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 07:53:20.093687 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 07:53:20.093693 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 07:53:20.093699 systemd[1]: Created slice system-getty.slice. Feb 9 07:53:20.093705 systemd[1]: Created slice system-modprobe.slice. Feb 9 07:53:20.093711 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 07:53:20.093717 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 07:53:20.093723 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 07:53:20.093728 systemd[1]: Created slice user.slice. Feb 9 07:53:20.093734 systemd[1]: Started systemd-ask-password-console.path. Feb 9 07:53:20.093741 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 07:53:20.093747 systemd[1]: Set up automount boot.automount. Feb 9 07:53:20.093753 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 07:53:20.093759 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 07:53:20.093766 systemd[1]: Stopped target initrd-fs.target. Feb 9 07:53:20.093772 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 07:53:20.093778 systemd[1]: Reached target integritysetup.target. 
Feb 9 07:53:20.093785 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 07:53:20.093791 systemd[1]: Reached target remote-fs.target. Feb 9 07:53:20.093798 systemd[1]: Reached target slices.target. Feb 9 07:53:20.093804 systemd[1]: Reached target swap.target. Feb 9 07:53:20.093810 systemd[1]: Reached target torcx.target. Feb 9 07:53:20.093816 systemd[1]: Reached target veritysetup.target. Feb 9 07:53:20.093822 systemd[1]: Listening on systemd-coredump.socket. Feb 9 07:53:20.093829 systemd[1]: Listening on systemd-initctl.socket. Feb 9 07:53:20.093835 systemd[1]: Listening on systemd-networkd.socket. Feb 9 07:53:20.093842 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 07:53:20.093849 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 07:53:20.093855 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 07:53:20.093861 systemd[1]: Mounting dev-hugepages.mount... Feb 9 07:53:20.093867 systemd[1]: Mounting dev-mqueue.mount... Feb 9 07:53:20.093874 systemd[1]: Mounting media.mount... Feb 9 07:53:20.093881 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 07:53:20.093887 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 07:53:20.093893 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 07:53:20.093900 systemd[1]: Mounting tmp.mount... Feb 9 07:53:20.093906 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 07:53:20.093912 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 07:53:20.093918 systemd[1]: Starting kmod-static-nodes.service... Feb 9 07:53:20.093924 systemd[1]: Starting modprobe@configfs.service... Feb 9 07:53:20.093931 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 07:53:20.093938 systemd[1]: Starting modprobe@drm.service... Feb 9 07:53:20.093944 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 07:53:20.093950 systemd[1]: Starting modprobe@fuse.service... 
Feb 9 07:53:20.093957 kernel: fuse: init (API version 7.34) Feb 9 07:53:20.093962 systemd[1]: Starting modprobe@loop.service... Feb 9 07:53:20.093969 kernel: loop: module loaded Feb 9 07:53:20.093975 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 07:53:20.093982 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 07:53:20.093989 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 07:53:20.093995 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 07:53:20.094001 kernel: kauditd_printk_skb: 60 callbacks suppressed Feb 9 07:53:20.094007 kernel: audit: type=1131 audit(1707465199.736:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.094013 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 07:53:20.094019 kernel: audit: type=1131 audit(1707465199.823:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.094025 systemd[1]: Stopped systemd-journald.service. Feb 9 07:53:20.094032 kernel: audit: type=1130 audit(1707465199.887:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.094038 kernel: audit: type=1131 audit(1707465199.887:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:20.094044 kernel: audit: type=1334 audit(1707465199.972:107): prog-id=15 op=LOAD Feb 9 07:53:20.094050 kernel: audit: type=1334 audit(1707465199.990:108): prog-id=16 op=LOAD Feb 9 07:53:20.094056 kernel: audit: type=1334 audit(1707465200.009:109): prog-id=17 op=LOAD Feb 9 07:53:20.094062 kernel: audit: type=1334 audit(1707465200.027:110): prog-id=13 op=UNLOAD Feb 9 07:53:20.094068 systemd[1]: Starting systemd-journald.service... Feb 9 07:53:20.094074 kernel: audit: type=1334 audit(1707465200.027:111): prog-id=14 op=UNLOAD Feb 9 07:53:20.094081 systemd[1]: Starting systemd-modules-load.service... Feb 9 07:53:20.094087 kernel: audit: type=1305 audit(1707465200.090:112): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 07:53:20.094095 systemd-journald[1257]: Journal started Feb 9 07:53:20.094118 systemd-journald[1257]: Runtime Journal (/run/log/journal/d4ea54eddabb4c5b86ca09d1a58426bd) is 8.0M, max 636.8M, 628.8M free. 
Feb 9 07:53:16.571000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 07:53:16.865000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 07:53:16.868000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 07:53:16.868000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 07:53:16.868000 audit: BPF prog-id=10 op=LOAD Feb 9 07:53:16.868000 audit: BPF prog-id=10 op=UNLOAD Feb 9 07:53:16.868000 audit: BPF prog-id=11 op=LOAD Feb 9 07:53:16.868000 audit: BPF prog-id=11 op=UNLOAD Feb 9 07:53:16.937000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 07:53:16.937000 audit[1147]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:53:16.937000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 07:53:16.964000 audit[1147]: AVC avc: denied 
{ associate } for pid=1147 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 07:53:16.964000 audit[1147]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b9 a2=1ed a3=0 items=2 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:53:16.964000 audit: CWD cwd="/" Feb 9 07:53:16.964000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:16.964000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:16.964000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 07:53:18.481000 audit: BPF prog-id=12 op=LOAD Feb 9 07:53:18.481000 audit: BPF prog-id=3 op=UNLOAD Feb 9 07:53:18.482000 audit: BPF prog-id=13 op=LOAD Feb 9 07:53:18.482000 audit: BPF prog-id=14 op=LOAD Feb 9 07:53:18.482000 audit: BPF prog-id=4 op=UNLOAD Feb 9 07:53:18.482000 audit: BPF prog-id=5 op=UNLOAD Feb 9 07:53:18.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:18.531000 audit: BPF prog-id=12 op=UNLOAD Feb 9 07:53:18.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:18.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:19.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:19.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:19.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:19.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:19.972000 audit: BPF prog-id=15 op=LOAD Feb 9 07:53:19.990000 audit: BPF prog-id=16 op=LOAD Feb 9 07:53:20.009000 audit: BPF prog-id=17 op=LOAD Feb 9 07:53:20.027000 audit: BPF prog-id=13 op=UNLOAD Feb 9 07:53:20.027000 audit: BPF prog-id=14 op=UNLOAD Feb 9 07:53:20.090000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 07:53:16.936580 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 07:53:18.480600 systemd[1]: Queued start job for default target multi-user.target. Feb 9 07:53:16.937106 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 07:53:18.483270 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 9 07:53:16.937118 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 07:53:16.937138 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 07:53:16.937144 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 07:53:16.937161 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 07:53:16.937168 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 07:53:16.937275 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 07:53:16.937298 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 07:53:16.937305 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 07:53:16.937719 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 07:53:16.937741 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 07:53:16.937752 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 07:53:16.937760 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 07:53:16.937769 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 07:53:16.937777 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 07:53:18.123169 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 07:53:18.123309 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 07:53:18.123365 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 
07:53:18.123452 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 07:53:18.123483 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 07:53:18.123515 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-02-09T07:53:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 07:53:20.090000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffeb3f2df10 a2=4000 a3=7ffeb3f2dfac items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:53:20.090000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 07:53:20.171762 systemd[1]: Starting systemd-network-generator.service... Feb 9 07:53:20.198582 systemd[1]: Starting systemd-remount-fs.service... Feb 9 07:53:20.225606 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 07:53:20.268934 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 07:53:20.268957 systemd[1]: Stopped verity-setup.service. Feb 9 07:53:20.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 07:53:20.313596 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 07:53:20.333719 systemd[1]: Started systemd-journald.service. Feb 9 07:53:20.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.342062 systemd[1]: Mounted dev-hugepages.mount. Feb 9 07:53:20.348794 systemd[1]: Mounted dev-mqueue.mount. Feb 9 07:53:20.356778 systemd[1]: Mounted media.mount. Feb 9 07:53:20.363803 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 07:53:20.372782 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 07:53:20.381783 systemd[1]: Mounted tmp.mount. Feb 9 07:53:20.388848 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 07:53:20.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.396865 systemd[1]: Finished kmod-static-nodes.service. Feb 9 07:53:20.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.404907 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 07:53:20.405010 systemd[1]: Finished modprobe@configfs.service. Feb 9 07:53:20.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:20.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.413930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 07:53:20.414043 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 07:53:20.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.423017 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 07:53:20.423173 systemd[1]: Finished modprobe@drm.service. Feb 9 07:53:20.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.432202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 07:53:20.432437 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 07:53:20.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:20.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.441345 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 07:53:20.441658 systemd[1]: Finished modprobe@fuse.service. Feb 9 07:53:20.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.450341 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 07:53:20.450663 systemd[1]: Finished modprobe@loop.service. Feb 9 07:53:20.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.459386 systemd[1]: Finished systemd-modules-load.service. Feb 9 07:53:20.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.468337 systemd[1]: Finished systemd-network-generator.service. 
Feb 9 07:53:20.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.477314 systemd[1]: Finished systemd-remount-fs.service. Feb 9 07:53:20.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.486335 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 07:53:20.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.495890 systemd[1]: Reached target network-pre.target. Feb 9 07:53:20.507418 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 07:53:20.518257 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 07:53:20.524789 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 07:53:20.525729 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 07:53:20.533171 systemd[1]: Starting systemd-journal-flush.service... Feb 9 07:53:20.536454 systemd-journald[1257]: Time spent on flushing to /var/log/journal/d4ea54eddabb4c5b86ca09d1a58426bd is 14.994ms for 1647 entries. Feb 9 07:53:20.536454 systemd-journald[1257]: System Journal (/var/log/journal/d4ea54eddabb4c5b86ca09d1a58426bd) is 8.0M, max 195.6M, 187.6M free. Feb 9 07:53:20.584280 systemd-journald[1257]: Received client request to flush runtime journal. Feb 9 07:53:20.550663 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 07:53:20.551168 systemd[1]: Starting systemd-random-seed.service... 
Feb 9 07:53:20.565694 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 07:53:20.566194 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:53:20.573155 systemd[1]: Starting systemd-sysusers.service... Feb 9 07:53:20.580185 systemd[1]: Starting systemd-udev-settle.service... Feb 9 07:53:20.587671 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 07:53:20.595738 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 07:53:20.603753 systemd[1]: Finished systemd-journal-flush.service. Feb 9 07:53:20.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.611799 systemd[1]: Finished systemd-random-seed.service. Feb 9 07:53:20.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.619774 systemd[1]: Finished systemd-sysctl.service. Feb 9 07:53:20.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.627745 systemd[1]: Finished systemd-sysusers.service. Feb 9 07:53:20.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.636706 systemd[1]: Reached target first-boot-complete.target. Feb 9 07:53:20.645293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 07:53:20.654532 udevadm[1273]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 07:53:20.665150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 07:53:20.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.824945 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 07:53:20.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.834000 audit: BPF prog-id=18 op=LOAD Feb 9 07:53:20.834000 audit: BPF prog-id=19 op=LOAD Feb 9 07:53:20.834000 audit: BPF prog-id=7 op=UNLOAD Feb 9 07:53:20.834000 audit: BPF prog-id=8 op=UNLOAD Feb 9 07:53:20.835843 systemd[1]: Starting systemd-udevd.service... Feb 9 07:53:20.847952 systemd-udevd[1276]: Using default interface naming scheme 'v252'. Feb 9 07:53:20.868878 systemd[1]: Started systemd-udevd.service. Feb 9 07:53:20.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:20.878891 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 9 07:53:20.879000 audit: BPF prog-id=20 op=LOAD Feb 9 07:53:20.880086 systemd[1]: Starting systemd-networkd.service... 
Feb 9 07:53:20.903000 audit: BPF prog-id=21 op=LOAD Feb 9 07:53:20.922176 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 07:53:20.922259 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 07:53:20.922284 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 07:53:20.921000 audit: BPF prog-id=22 op=LOAD Feb 9 07:53:20.943579 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1280) Feb 9 07:53:20.968556 kernel: ACPI: button: Power Button [PWRF] Feb 9 07:53:20.985000 audit: BPF prog-id=23 op=LOAD Feb 9 07:53:20.986557 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 07:53:20.986879 systemd[1]: Starting systemd-userdbd.service... Feb 9 07:53:20.913000 audit[1351]: AVC avc: denied { confidentiality } for pid=1351 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 07:53:20.913000 audit[1351]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f18aff9e010 a1=4d8bc a2=7f18b1c56bc5 a3=5 items=42 ppid=1276 pid=1351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 07:53:21.023665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 07:53:20.913000 audit: CWD cwd="/" Feb 9 07:53:20.913000 audit: PATH item=0 name=(null) inode=1039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=1 name=(null) inode=12237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=2 name=(null) inode=12237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=3 name=(null) inode=12238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=4 name=(null) inode=12237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=5 name=(null) inode=12239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=6 name=(null) inode=12237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=7 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=8 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 
audit: PATH item=9 name=(null) inode=12241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=10 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=11 name=(null) inode=12242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=12 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=13 name=(null) inode=12243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=14 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=15 name=(null) inode=12244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=16 name=(null) inode=12240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=17 name=(null) inode=12245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=18 name=(null) inode=12237 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=19 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=20 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=21 name=(null) inode=12247 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=22 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=23 name=(null) inode=12248 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=24 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=25 name=(null) inode=12249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=26 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=27 name=(null) inode=12250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=28 name=(null) inode=12246 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=29 name=(null) inode=12251 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=30 name=(null) inode=12237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=31 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=32 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=33 name=(null) inode=12253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=34 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=35 name=(null) inode=12254 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=36 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=37 name=(null) inode=12255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=38 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=39 name=(null) inode=12256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=40 name=(null) inode=12252 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PATH item=41 name=(null) inode=12257 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 07:53:20.913000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 07:53:21.041618 kernel: IPMI message handler: version 39.2 Feb 9 07:53:21.061565 kernel: ipmi device interface Feb 9 07:53:21.083570 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 07:53:21.083786 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 07:53:21.086144 systemd[1]: Started systemd-userdbd.service. Feb 9 07:53:21.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 07:53:21.153943 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 07:53:21.154079 kernel: ipmi_si: IPMI System Interface driver Feb 9 07:53:21.154093 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 07:53:21.154159 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 07:53:21.213327 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 07:53:21.213415 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 07:53:21.253503 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 07:53:21.253526 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 07:53:21.294387 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 07:53:21.318570 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 07:53:21.359272 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 07:53:21.359500 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 07:53:21.359520 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 07:53:21.406555 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 9 07:53:21.406656 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 07:53:21.460039 systemd-networkd[1315]: bond0: netdev ready Feb 9 07:53:21.462241 systemd-networkd[1315]: lo: Link UP Feb 9 07:53:21.462243 systemd-networkd[1315]: lo: Gained carrier Feb 9 07:53:21.462715 systemd-networkd[1315]: Enumeration completed Feb 9 07:53:21.462786 systemd[1]: Started systemd-networkd.service. Feb 9 07:53:21.463008 systemd-networkd[1315]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 07:53:21.469266 systemd-networkd[1315]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e9.network. 
Feb 9 07:53:21.495612 kernel: intel_rapl_common: Found RAPL domain package Feb 9 07:53:21.495640 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 9 07:53:21.495736 kernel: intel_rapl_common: Found RAPL domain core Feb 9 07:53:21.513554 kernel: intel_rapl_common: Found RAPL domain uncore Feb 9 07:53:21.513577 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 07:53:21.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:21.598589 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 07:53:21.616553 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 07:53:21.619833 systemd[1]: Finished systemd-udev-settle.service. Feb 9 07:53:21.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:21.628281 systemd[1]: Starting lvm2-activation-early.service... Feb 9 07:53:21.643478 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 07:53:21.669984 systemd[1]: Finished lvm2-activation-early.service. Feb 9 07:53:21.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:21.678688 systemd[1]: Reached target cryptsetup.target. Feb 9 07:53:21.688210 systemd[1]: Starting lvm2-activation.service... Feb 9 07:53:21.690322 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 07:53:21.718992 systemd[1]: Finished lvm2-activation.service. 
Feb 9 07:53:21.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:21.727692 systemd[1]: Reached target local-fs-pre.target. Feb 9 07:53:21.735649 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 07:53:21.735664 systemd[1]: Reached target local-fs.target. Feb 9 07:53:21.743639 systemd[1]: Reached target machines.target. Feb 9 07:53:21.752228 systemd[1]: Starting ldconfig.service... Feb 9 07:53:21.758999 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 07:53:21.759021 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 07:53:21.759515 systemd[1]: Starting systemd-boot-update.service... Feb 9 07:53:21.767053 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 07:53:21.777177 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 07:53:21.777299 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 07:53:21.777352 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 07:53:21.777849 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 07:53:21.778106 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Feb 9 07:53:21.778737 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 07:53:21.796505 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 07:53:21.798111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 9 07:53:21.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 07:53:21.800680 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 07:53:21.806345 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 07:53:22.064559 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 07:53:22.090554 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 9 07:53:22.090588 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 07:53:22.111614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 07:53:22.129781 systemd-networkd[1315]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network. Feb 9 07:53:22.249187 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 07:53:22.287601 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 07:53:22.312616 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 9 07:53:22.313809 systemd-networkd[1315]: bond0: Link UP Feb 9 07:53:22.314007 systemd-networkd[1315]: enp2s0f1np1: Link UP Feb 9 07:53:22.314145 systemd-networkd[1315]: enp2s0f1np1: Gained carrier Feb 9 07:53:22.315166 systemd-networkd[1315]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a1:e8.network. Feb 9 07:53:22.351717 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 07:53:22.351745 kernel: bond0: active interface up! 
Feb 9 07:53:22.372911 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Feb 9 07:53:22.385125 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 07:53:22.385540 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 07:53:22.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:53:22.407669 systemd-fsck[1391]: fsck.fat 4.2 (2021-01-31)
Feb 9 07:53:22.407669 systemd-fsck[1391]: /dev/sdb1: 789 files, 115332/258078 clusters
Feb 9 07:53:22.408529 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 07:53:22.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:53:22.418419 systemd[1]: Mounting boot.mount...
Feb 9 07:53:22.429721 systemd[1]: Mounted boot.mount.
Feb 9 07:53:22.447413 systemd[1]: Finished systemd-boot-update.service.
Feb 9 07:53:22.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:53:22.458593 systemd-networkd[1315]: bond0: Gained carrier
Feb 9 07:53:22.458743 systemd-networkd[1315]: enp2s0f0np0: Link UP
Feb 9 07:53:22.458918 systemd-networkd[1315]: enp2s0f0np0: Gained carrier
Feb 9 07:53:22.479344 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 07:53:22.498558 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Feb 9 07:53:22.498601 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave
Feb 9 07:53:22.498854 systemd-networkd[1315]: enp2s0f1np1: Link DOWN
Feb 9 07:53:22.498856 systemd-networkd[1315]: enp2s0f1np1: Lost carrier
Feb 9 07:53:22.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 07:53:22.525416 systemd[1]: Starting audit-rules.service...
Feb 9 07:53:22.533161 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 07:53:22.542208 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 07:53:22.543000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 07:53:22.543000 audit[1411]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff65f7b60 a2=420 a3=0 items=0 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 07:53:22.543000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 07:53:22.544250 augenrules[1411]: No rules
Feb 9 07:53:22.551531 systemd[1]: Starting systemd-resolved.service...
Feb 9 07:53:22.559477 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 07:53:22.567101 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 07:53:22.573871 systemd[1]: Finished audit-rules.service.
Feb 9 07:53:22.580745 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 07:53:22.588754 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 07:53:22.600323 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 07:53:22.602602 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 07:53:22.609717 systemd[1]: Finished ldconfig.service.
Feb 9 07:53:22.618349 systemd[1]: Starting systemd-update-done.service...
Feb 9 07:53:22.625665 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 07:53:22.625906 systemd[1]: Finished systemd-update-done.service.
Feb 9 07:53:22.627359 systemd-resolved[1416]: Positive Trust Anchors:
Feb 9 07:53:22.627364 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 07:53:22.627383 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 07:53:22.631377 systemd-resolved[1416]: Using system hostname 'ci-3510.3.2-a-d9875e643b'.
Feb 9 07:53:22.634648 systemd[1]: Started systemd-timesyncd.service.
Feb 9 07:53:22.643790 systemd[1]: Reached target time-set.target.
Feb 9 07:53:22.693589 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 07:53:22.714593 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1
Feb 9 07:53:22.714622 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Feb 9 07:53:22.715204 systemd-networkd[1315]: enp2s0f1np1: Link UP
Feb 9 07:53:22.715370 systemd-networkd[1315]: enp2s0f1np1: Gained carrier
Feb 9 07:53:22.716142 systemd[1]: Started systemd-resolved.service.
Feb 9 07:53:22.748653 systemd[1]: Reached target network.target.
Feb 9 07:53:22.753588 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Feb 9 07:53:22.761632 systemd[1]: Reached target nss-lookup.target.
Feb 9 07:53:22.769659 systemd[1]: Reached target sysinit.target.
Feb 9 07:53:22.777676 systemd[1]: Started motdgen.path.
Feb 9 07:53:22.784652 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 07:53:22.794688 systemd[1]: Started logrotate.timer.
Feb 9 07:53:22.801675 systemd[1]: Started mdadm.timer.
Feb 9 07:53:22.808620 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 07:53:22.816626 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 07:53:22.816640 systemd[1]: Reached target paths.target.
Feb 9 07:53:22.823616 systemd[1]: Reached target timers.target.
Feb 9 07:53:22.830750 systemd[1]: Listening on dbus.socket.
Feb 9 07:53:22.838162 systemd[1]: Starting docker.socket...
Feb 9 07:53:22.846033 systemd[1]: Listening on sshd.socket.
Feb 9 07:53:22.852708 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 07:53:22.852940 systemd[1]: Listening on docker.socket.
Feb 9 07:53:22.859685 systemd[1]: Reached target sockets.target.
Feb 9 07:53:22.867644 systemd[1]: Reached target basic.target.
Feb 9 07:53:22.874661 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 07:53:22.874681 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 07:53:22.875141 systemd[1]: Starting containerd.service...
Feb 9 07:53:22.882055 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 07:53:22.891082 systemd[1]: Starting coreos-metadata.service...
Feb 9 07:53:22.898094 systemd[1]: Starting dbus.service...
Feb 9 07:53:22.904112 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 07:53:22.909267 jq[1432]: false
Feb 9 07:53:22.910912 coreos-metadata[1425]: Feb 09 07:53:22.910 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:53:22.911347 systemd[1]: Starting extend-filesystems.service...
Feb 9 07:53:22.916837 dbus-daemon[1431]: [system] SELinux support is enabled
Feb 9 07:53:22.917682 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 07:53:22.918356 systemd[1]: Starting motdgen.service...
Feb 9 07:53:22.920484 extend-filesystems[1434]: Found sda
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb1
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb2
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb3
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found usr
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb4
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb6
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb7
Feb 9 07:53:22.938150 extend-filesystems[1434]: Found sdb9
Feb 9 07:53:22.938150 extend-filesystems[1434]: Checking size of /dev/sdb9
Feb 9 07:53:22.938150 extend-filesystems[1434]: Resized partition /dev/sdb9
Feb 9 07:53:23.068595 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks
Feb 9 07:53:23.068729 coreos-metadata[1428]: Feb 09 07:53:22.921 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 07:53:22.925394 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 07:53:23.068885 extend-filesystems[1450]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 07:53:22.957378 systemd[1]: Starting prepare-critools.service...
Feb 9 07:53:22.972141 systemd[1]: Starting prepare-helm.service...
Feb 9 07:53:22.991084 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 07:53:23.011086 systemd[1]: Starting sshd-keygen.service...
Feb 9 07:53:23.030814 systemd[1]: Starting systemd-logind.service...
Feb 9 07:53:23.047622 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 07:53:23.048124 systemd[1]: Starting tcsd.service...
Feb 9 07:53:23.054175 systemd-logind[1462]: Watching system buttons on /dev/input/event3 (Power Button)
Feb 9 07:53:23.084282 jq[1465]: true
Feb 9 07:53:23.054184 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 9 07:53:23.054193 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Feb 9 07:53:23.054306 systemd-logind[1462]: New seat seat0.
Feb 9 07:53:23.060977 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 07:53:23.061327 systemd[1]: Starting update-engine.service...
Feb 9 07:53:23.076206 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 07:53:23.091949 systemd[1]: Started dbus.service.
Feb 9 07:53:23.100543 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 07:53:23.100633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 07:53:23.100798 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 07:53:23.100877 systemd[1]: Finished motdgen.service.
Feb 9 07:53:23.106787 update_engine[1464]: I0209 07:53:23.106287 1464 main.cc:92] Flatcar Update Engine starting
Feb 9 07:53:23.108898 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 07:53:23.108978 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 07:53:23.110082 update_engine[1464]: I0209 07:53:23.110070 1464 update_check_scheduler.cc:74] Next update check in 7m38s
Feb 9 07:53:23.113440 tar[1467]: ./
Feb 9 07:53:23.113440 tar[1467]: ./macvlan
Feb 9 07:53:23.119237 jq[1473]: true
Feb 9 07:53:23.120029 dbus-daemon[1431]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 07:53:23.121097 tar[1469]: linux-amd64/helm
Feb 9 07:53:23.122420 tar[1468]: crictl
Feb 9 07:53:23.124541 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Feb 9 07:53:23.124677 systemd[1]: Condition check resulted in tcsd.service being skipped.
Feb 9 07:53:23.125541 systemd[1]: Started update-engine.service.
Feb 9 07:53:23.130344 env[1474]: time="2024-02-09T07:53:23.130317029Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 07:53:23.135568 tar[1467]: ./static
Feb 9 07:53:23.137680 systemd[1]: Started systemd-logind.service.
Feb 9 07:53:23.138823 env[1474]: time="2024-02-09T07:53:23.138806668Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 07:53:23.140094 env[1474]: time="2024-02-09T07:53:23.140081008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.140706 env[1474]: time="2024-02-09T07:53:23.140688723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:53:23.140706 env[1474]: time="2024-02-09T07:53:23.140703746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142380 env[1474]: time="2024-02-09T07:53:23.142366484Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142380 env[1474]: time="2024-02-09T07:53:23.142378979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142441 env[1474]: time="2024-02-09T07:53:23.142387309Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 07:53:23.142441 env[1474]: time="2024-02-09T07:53:23.142393093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142441 env[1474]: time="2024-02-09T07:53:23.142435624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142574 env[1474]: time="2024-02-09T07:53:23.142564266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142642 env[1474]: time="2024-02-09T07:53:23.142631531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 07:53:23.142671 env[1474]: time="2024-02-09T07:53:23.142642354Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 07:53:23.142671 env[1474]: time="2024-02-09T07:53:23.142667322Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 07:53:23.142726 env[1474]: time="2024-02-09T07:53:23.142675321Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 07:53:23.147194 bash[1502]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 07:53:23.147230 systemd[1]: Started locksmithd.service.
Feb 9 07:53:23.153424 env[1474]: time="2024-02-09T07:53:23.153407886Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 07:53:23.153465 env[1474]: time="2024-02-09T07:53:23.153428373Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 07:53:23.153465 env[1474]: time="2024-02-09T07:53:23.153437083Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 07:53:23.153465 env[1474]: time="2024-02-09T07:53:23.153457524Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153471647Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153484329Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153492711Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153501841Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153509082Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153526 env[1474]: time="2024-02-09T07:53:23.153516714Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153624 env[1474]: time="2024-02-09T07:53:23.153528784Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153624 env[1474]: time="2024-02-09T07:53:23.153536681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 07:53:23.153624 env[1474]: time="2024-02-09T07:53:23.153600291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 07:53:23.153675 env[1474]: time="2024-02-09T07:53:23.153652848Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 07:53:23.153727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 07:53:23.153803 env[1474]: time="2024-02-09T07:53:23.153795181Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 07:53:23.153823 env[1474]: time="2024-02-09T07:53:23.153815840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153840 env[1474]: time="2024-02-09T07:53:23.153830508Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 07:53:23.153855 systemd[1]: Reached target system-config.target.
Feb 9 07:53:23.153891 env[1474]: time="2024-02-09T07:53:23.153861856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153891 env[1474]: time="2024-02-09T07:53:23.153870119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153891 env[1474]: time="2024-02-09T07:53:23.153877019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153891 env[1474]: time="2024-02-09T07:53:23.153883122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153891 env[1474]: time="2024-02-09T07:53:23.153889525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153968 env[1474]: time="2024-02-09T07:53:23.153897134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153968 env[1474]: time="2024-02-09T07:53:23.153903672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153968 env[1474]: time="2024-02-09T07:53:23.153909829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.153968 env[1474]: time="2024-02-09T07:53:23.153916700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.153976962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.153985406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.153991798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.153999090Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.154013313Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 07:53:23.154036 env[1474]: time="2024-02-09T07:53:23.154023295Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 07:53:23.154130 env[1474]: time="2024-02-09T07:53:23.154038385Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 07:53:23.154130 env[1474]: time="2024-02-09T07:53:23.154068841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 07:53:23.154248 env[1474]: time="2024-02-09T07:53:23.154210617Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154256485Z" level=info msg="Connect containerd service"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154275708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154590928Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154703009Z" level=info msg="Start subscribing containerd event"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154734223Z" level=info msg="Start recovering state"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154742785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.154931348Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.155010005Z" level=info msg="containerd successfully booted in 0.025092s"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.155021957Z" level=info msg="Start event monitor"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.155060670Z" level=info msg="Start snapshots syncer"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.155085471Z" level=info msg="Start cni network conf syncer for default"
Feb 9 07:53:23.156896 env[1474]: time="2024-02-09T07:53:23.155099324Z" level=info msg="Start streaming server"
Feb 9 07:53:23.160618 tar[1467]: ./vlan
Feb 9 07:53:23.161694 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 07:53:23.161772 systemd[1]: Reached target user-config.target.
Feb 9 07:53:23.171309 systemd[1]: Started containerd.service.
Feb 9 07:53:23.177843 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 07:53:23.180754 tar[1467]: ./portmap
Feb 9 07:53:23.199759 tar[1467]: ./host-local
Feb 9 07:53:23.209768 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 07:53:23.216483 tar[1467]: ./vrf
Feb 9 07:53:23.234602 tar[1467]: ./bridge
Feb 9 07:53:23.256254 tar[1467]: ./tuning
Feb 9 07:53:23.273556 tar[1467]: ./firewall
Feb 9 07:53:23.295946 tar[1467]: ./host-device
Feb 9 07:53:23.315489 tar[1467]: ./sbr
Feb 9 07:53:23.325611 systemd-networkd[1315]: bond0: Gained IPv6LL
Feb 9 07:53:23.333376 tar[1467]: ./loopback
Feb 9 07:53:23.350344 tar[1467]: ./dhcp
Feb 9 07:53:23.379158 systemd[1]: Finished prepare-critools.service.
Feb 9 07:53:23.381519 tar[1469]: linux-amd64/LICENSE
Feb 9 07:53:23.381519 tar[1469]: linux-amd64/README.md
Feb 9 07:53:23.390039 systemd[1]: Finished prepare-helm.service.
Feb 9 07:53:23.399982 tar[1467]: ./ptp
Feb 9 07:53:23.421086 tar[1467]: ./ipvlan
Feb 9 07:53:23.441399 tar[1467]: ./bandwidth
Feb 9 07:53:23.454593 kernel: EXT4-fs (sdb9): resized filesystem to 116605649
Feb 9 07:53:23.481109 extend-filesystems[1450]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required
Feb 9 07:53:23.481109 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 56
Feb 9 07:53:23.481109 extend-filesystems[1450]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long.
Feb 9 07:53:23.517664 extend-filesystems[1434]: Resized filesystem in /dev/sdb9
Feb 9 07:53:23.481518 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 07:53:23.532778 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 07:53:23.481611 systemd[1]: Finished extend-filesystems.service.
Feb 9 07:53:23.512455 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 07:53:23.518619 systemd[1]: Finished sshd-keygen.service.
Feb 9 07:53:23.540533 systemd[1]: Starting issuegen.service...
Feb 9 07:53:23.547933 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 07:53:23.548002 systemd[1]: Finished issuegen.service.
Feb 9 07:53:23.555324 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 07:53:23.563914 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 07:53:23.572417 systemd[1]: Started getty@tty1.service.
Feb 9 07:53:23.580302 systemd[1]: Started serial-getty@ttyS1.service.
Feb 9 07:53:23.589790 systemd[1]: Reached target getty.target.
Feb 9 07:53:25.013618 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Feb 9 07:53:28.599183 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 07:53:28.606234 login[1532]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 07:53:28.607128 systemd-logind[1462]: New session 1 of user core.
Feb 9 07:53:28.608089 systemd[1]: Created slice user-500.slice.
Feb 9 07:53:28.608789 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 07:53:28.610024 systemd-logind[1462]: New session 2 of user core.
Feb 9 07:53:28.614057 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 07:53:28.614878 systemd[1]: Starting user@500.service...
Feb 9 07:53:28.616759 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:28.703451 systemd[1538]: Queued start job for default target default.target.
Feb 9 07:53:28.703689 systemd[1538]: Reached target paths.target.
Feb 9 07:53:28.703701 systemd[1538]: Reached target sockets.target.
Feb 9 07:53:28.703709 systemd[1538]: Reached target timers.target.
Feb 9 07:53:28.703716 systemd[1538]: Reached target basic.target.
Feb 9 07:53:28.703736 systemd[1538]: Reached target default.target.
Feb 9 07:53:28.703750 systemd[1538]: Startup finished in 84ms.
Feb 9 07:53:28.703775 systemd[1]: Started user@500.service.
Feb 9 07:53:28.704510 systemd[1]: Started session-1.scope.
Feb 9 07:53:28.704982 systemd[1]: Started session-2.scope.
Feb 9 07:53:28.766298 coreos-metadata[1428]: Feb 09 07:53:28.766 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Feb 9 07:53:28.766587 coreos-metadata[1425]: Feb 09 07:53:28.766 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Feb 9 07:53:29.766797 coreos-metadata[1428]: Feb 09 07:53:29.766 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 9 07:53:29.767617 coreos-metadata[1425]: Feb 09 07:53:29.766 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Feb 9 07:53:30.388599 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2
Feb 9 07:53:30.395590 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Feb 9 07:53:30.840698 coreos-metadata[1425]: Feb 09 07:53:30.840 INFO Fetch successful
Feb 9 07:53:30.841604 coreos-metadata[1428]: Feb 09 07:53:30.840 INFO Fetch successful
Feb 9 07:53:30.866725 systemd[1]: Finished coreos-metadata.service.
Feb 9 07:53:30.867447 unknown[1425]: wrote ssh authorized keys file for user: core
Feb 9 07:53:30.867694 systemd[1]: Started packet-phone-home.service.
Feb 9 07:53:30.873044 curl[1560]: % Total % Received % Xferd Average Speed Time Time Time Current
Feb 9 07:53:30.873193 curl[1560]: Dload Upload Total Spent Left Speed
Feb 9 07:53:30.877524 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 07:53:30.877778 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 9 07:53:30.878025 systemd[1]: Reached target multi-user.target.
Feb 9 07:53:30.878739 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 07:53:30.882579 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 07:53:30.882694 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 07:53:30.882779 systemd[1]: Startup finished in 2.482s (kernel) + 20.398s (initrd) + 14.655s (userspace) = 37.537s.
Feb 9 07:53:31.077902 curl[1560]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Feb 9 07:53:31.077902 curl[1560]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Feb 9 07:53:31.080338 systemd[1]: packet-phone-home.service: Deactivated successfully.
Feb 9 07:53:31.227826 systemd-timesyncd[1417]: Contacted time server 209.51.161.238:123 (0.flatcar.pool.ntp.org).
Feb 9 07:53:31.227969 systemd-timesyncd[1417]: Initial clock synchronization to Fri 2024-02-09 07:53:31.483071 UTC.
Feb 9 07:53:31.763554 systemd[1]: Created slice system-sshd.slice.
Feb 9 07:53:31.764213 systemd[1]: Started sshd@0-139.178.90.113:22-147.75.109.163:41566.service.
Feb 9 07:53:31.833360 sshd[1564]: Accepted publickey for core from 147.75.109.163 port 41566 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:31.834154 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:31.836772 systemd-logind[1462]: New session 3 of user core.
Feb 9 07:53:31.837619 systemd[1]: Started session-3.scope.
Feb 9 07:53:31.888451 systemd[1]: Started sshd@1-139.178.90.113:22-147.75.109.163:41572.service.
Feb 9 07:53:31.920492 sshd[1569]: Accepted publickey for core from 147.75.109.163 port 41572 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:31.921176 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:31.923493 systemd-logind[1462]: New session 4 of user core.
Feb 9 07:53:31.924183 systemd[1]: Started session-4.scope.
Feb 9 07:53:31.977144 sshd[1569]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:31.978594 systemd[1]: sshd@1-139.178.90.113:22-147.75.109.163:41572.service: Deactivated successfully.
Feb 9 07:53:31.978910 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 07:53:31.979290 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit.
Feb 9 07:53:31.979773 systemd[1]: Started sshd@2-139.178.90.113:22-147.75.109.163:41578.service.
Feb 9 07:53:31.980186 systemd-logind[1462]: Removed session 4.
Feb 9 07:53:32.012985 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 41578 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:32.013882 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:32.017000 systemd-logind[1462]: New session 5 of user core.
Feb 9 07:53:32.017685 systemd[1]: Started session-5.scope.
Feb 9 07:53:32.070705 sshd[1575]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:32.072269 systemd[1]: sshd@2-139.178.90.113:22-147.75.109.163:41578.service: Deactivated successfully.
Feb 9 07:53:32.072594 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 07:53:32.072994 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit.
Feb 9 07:53:32.073484 systemd[1]: Started sshd@3-139.178.90.113:22-147.75.109.163:41580.service.
Feb 9 07:53:32.073871 systemd-logind[1462]: Removed session 5.
Feb 9 07:53:32.106990 sshd[1581]: Accepted publickey for core from 147.75.109.163 port 41580 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:32.107892 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:32.111002 systemd-logind[1462]: New session 6 of user core.
Feb 9 07:53:32.111642 systemd[1]: Started session-6.scope.
Feb 9 07:53:32.178353 sshd[1581]: pam_unix(sshd:session): session closed for user core
Feb 9 07:53:32.185065 systemd[1]: sshd@3-139.178.90.113:22-147.75.109.163:41580.service: Deactivated successfully.
Feb 9 07:53:32.185954 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 07:53:32.186330 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit.
Feb 9 07:53:32.186812 systemd[1]: Started sshd@4-139.178.90.113:22-147.75.109.163:41592.service.
Feb 9 07:53:32.187222 systemd-logind[1462]: Removed session 6.
Feb 9 07:53:32.219949 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 41592 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 07:53:32.220721 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 07:53:32.223252 systemd-logind[1462]: New session 7 of user core.
Feb 9 07:53:32.224051 systemd[1]: Started session-7.scope.
Feb 9 07:53:32.308774 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 07:53:32.309384 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 07:53:35.942033 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 07:53:35.946333 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 07:53:35.946625 systemd[1]: Reached target network-online.target.
Feb 9 07:53:35.947414 systemd[1]: Starting docker.service...
Feb 9 07:53:35.968266 env[1612]: time="2024-02-09T07:53:35.968216321Z" level=info msg="Starting up"
Feb 9 07:53:35.968984 env[1612]: time="2024-02-09T07:53:35.968943032Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 07:53:35.968984 env[1612]: time="2024-02-09T07:53:35.968951732Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 07:53:35.968984 env[1612]: time="2024-02-09T07:53:35.968961901Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 07:53:35.968984 env[1612]: time="2024-02-09T07:53:35.968968346Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 07:53:35.969982 env[1612]: time="2024-02-09T07:53:35.969935417Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 07:53:35.969982 env[1612]: time="2024-02-09T07:53:35.969945740Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 07:53:35.969982 env[1612]: time="2024-02-09T07:53:35.969980367Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 07:53:35.970119 env[1612]: time="2024-02-09T07:53:35.969986939Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 07:53:36.003065 env[1612]: time="2024-02-09T07:53:36.003027967Z" level=info msg="Loading containers: start."
Feb 9 07:53:36.151637 kernel: Initializing XFRM netlink socket
Feb 9 07:53:36.193055 env[1612]: time="2024-02-09T07:53:36.193011426Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 07:53:36.314314 systemd-networkd[1315]: docker0: Link UP
Feb 9 07:53:36.331994 env[1612]: time="2024-02-09T07:53:36.331902540Z" level=info msg="Loading containers: done."
Feb 9 07:53:36.349887 env[1612]: time="2024-02-09T07:53:36.349790017Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 07:53:36.350199 env[1612]: time="2024-02-09T07:53:36.350135577Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 07:53:36.350443 env[1612]: time="2024-02-09T07:53:36.350377909Z" level=info msg="Daemon has completed initialization"
Feb 9 07:53:36.351726 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck741257570-merged.mount: Deactivated successfully.
Feb 9 07:53:36.358225 systemd[1]: Started docker.service.
Feb 9 07:53:36.360349 env[1612]: time="2024-02-09T07:53:36.360299563Z" level=info msg="API listen on /run/docker.sock"
Feb 9 07:53:36.369821 systemd[1]: Reloading.
Feb 9 07:53:36.431915 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2024-02-09T07:53:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 07:53:36.431955 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2024-02-09T07:53:36Z" level=info msg="torcx already run"
Feb 9 07:53:36.532729 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 07:53:36.532742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 07:53:36.549483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 07:53:36.603305 systemd[1]: Started kubelet.service.
Feb 9 07:53:36.629351 kubelet[1823]: E0209 07:53:36.629317 1823 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 07:53:36.630704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 07:53:36.630771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 07:53:37.257062 env[1474]: time="2024-02-09T07:53:37.256914393Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\""
Feb 9 07:53:37.894674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858673768.mount: Deactivated successfully.
Feb 9 07:53:39.246890 env[1474]: time="2024-02-09T07:53:39.246818866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:39.247477 env[1474]: time="2024-02-09T07:53:39.247439188Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:39.248978 env[1474]: time="2024-02-09T07:53:39.248942331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:39.249903 env[1474]: time="2024-02-09T07:53:39.249864096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:39.250377 env[1474]: time="2024-02-09T07:53:39.250336186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 9 07:53:39.255969 env[1474]: time="2024-02-09T07:53:39.255949772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 07:53:40.898017 env[1474]: time="2024-02-09T07:53:40.897945996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:40.898611 env[1474]: time="2024-02-09T07:53:40.898582321Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:40.899655 env[1474]: time="2024-02-09T07:53:40.899641892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:40.901649 env[1474]: time="2024-02-09T07:53:40.901620553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:40.902022 env[1474]: time="2024-02-09T07:53:40.902008717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 07:53:40.907873 env[1474]: time="2024-02-09T07:53:40.907855026Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 07:53:41.913068 env[1474]: time="2024-02-09T07:53:41.913006051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:41.913757 env[1474]: time="2024-02-09T07:53:41.913714478Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:41.914828 env[1474]: time="2024-02-09T07:53:41.914766018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:41.915753 env[1474]: time="2024-02-09T07:53:41.915713282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:41.916214 env[1474]: time="2024-02-09T07:53:41.916179488Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 07:53:41.923436 env[1474]: time="2024-02-09T07:53:41.923368565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 07:53:42.876024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103602360.mount: Deactivated successfully.
Feb 9 07:53:43.167328 env[1474]: time="2024-02-09T07:53:43.167226309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.167876 env[1474]: time="2024-02-09T07:53:43.167842440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.168488 env[1474]: time="2024-02-09T07:53:43.168477676Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.169201 env[1474]: time="2024-02-09T07:53:43.169174188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.169521 env[1474]: time="2024-02-09T07:53:43.169506141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 07:53:43.174913 env[1474]: time="2024-02-09T07:53:43.174893526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 07:53:43.771837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642429165.mount: Deactivated successfully.
Feb 9 07:53:43.773216 env[1474]: time="2024-02-09T07:53:43.773175810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.773848 env[1474]: time="2024-02-09T07:53:43.773787990Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.774474 env[1474]: time="2024-02-09T07:53:43.774462095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.775305 env[1474]: time="2024-02-09T07:53:43.775291436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:43.775656 env[1474]: time="2024-02-09T07:53:43.775622884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 07:53:43.781501 env[1474]: time="2024-02-09T07:53:43.781481478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 07:53:44.542914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650929169.mount: Deactivated successfully.
Feb 9 07:53:46.825386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 07:53:46.825524 systemd[1]: Stopped kubelet.service.
Feb 9 07:53:46.826346 systemd[1]: Started kubelet.service.
Feb 9 07:53:46.850408 kubelet[1913]: E0209 07:53:46.850315 1913 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 07:53:46.852530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 07:53:46.852644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 07:53:47.364466 env[1474]: time="2024-02-09T07:53:47.364416662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:47.365089 env[1474]: time="2024-02-09T07:53:47.365049820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:47.365968 env[1474]: time="2024-02-09T07:53:47.365927568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:47.366958 env[1474]: time="2024-02-09T07:53:47.366911794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:47.367731 env[1474]: time="2024-02-09T07:53:47.367696682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 07:53:47.376351 env[1474]: time="2024-02-09T07:53:47.376320466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 07:53:47.953772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338919428.mount: Deactivated successfully.
Feb 9 07:53:48.375428 env[1474]: time="2024-02-09T07:53:48.375402817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:48.376041 env[1474]: time="2024-02-09T07:53:48.376011901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:48.376907 env[1474]: time="2024-02-09T07:53:48.376895596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:48.378102 env[1474]: time="2024-02-09T07:53:48.378072385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:53:48.378402 env[1474]: time="2024-02-09T07:53:48.378360491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 07:53:49.735275 systemd[1]: Stopped kubelet.service.
Feb 9 07:53:49.743133 systemd[1]: Reloading.
Feb 9 07:53:49.778024 /usr/lib/systemd/system-generators/torcx-generator[2068]: time="2024-02-09T07:53:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 07:53:49.778042 /usr/lib/systemd/system-generators/torcx-generator[2068]: time="2024-02-09T07:53:49Z" level=info msg="torcx already run"
Feb 9 07:53:49.833211 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 07:53:49.833219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 07:53:49.845445 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 07:53:49.900412 systemd[1]: Started kubelet.service.
Feb 9 07:53:49.921988 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 07:53:49.921988 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 07:53:49.921988 kubelet[2128]: I0209 07:53:49.921978 2128 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 07:53:49.922731 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 07:53:49.922731 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 07:53:50.030655 kubelet[2128]: I0209 07:53:50.030591 2128 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 07:53:50.030655 kubelet[2128]: I0209 07:53:50.030601 2128 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 07:53:50.030714 kubelet[2128]: I0209 07:53:50.030701 2128 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 07:53:50.032034 kubelet[2128]: I0209 07:53:50.032016 2128 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 07:53:50.032339 kubelet[2128]: E0209 07:53:50.032330 2128 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.90.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.051201 kubelet[2128]: I0209 07:53:50.051192 2128 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 07:53:50.051325 kubelet[2128]: I0209 07:53:50.051289 2128 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 07:53:50.051361 kubelet[2128]: I0209 07:53:50.051327 2128 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 07:53:50.051361 kubelet[2128]: I0209 07:53:50.051336 2128 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 07:53:50.051361 kubelet[2128]: I0209 07:53:50.051343 2128 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 07:53:50.051455 kubelet[2128]: I0209 07:53:50.051386 2128 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 07:53:50.052942 kubelet[2128]: I0209 07:53:50.052932 2128 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 07:53:50.052988 kubelet[2128]: I0209 07:53:50.052947 2128 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 07:53:50.052988 kubelet[2128]: I0209 07:53:50.052964 2128 kubelet.go:297] "Adding apiserver pod source"
Feb 9 07:53:50.052988 kubelet[2128]: I0209 07:53:50.052978 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 07:53:50.053461 kubelet[2128]: W0209 07:53:50.053430 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d9875e643b&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.053461 kubelet[2128]: I0209 07:53:50.053449 2128 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 07:53:50.053541 kubelet[2128]: E0209 07:53:50.053485 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d9875e643b&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.053541 kubelet[2128]: W0209 07:53:50.053496 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.053541 kubelet[2128]: E0209 07:53:50.053529 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.053774 kubelet[2128]: W0209 07:53:50.053758 2128 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 07:53:50.054233 kubelet[2128]: I0209 07:53:50.054223 2128 server.go:1186] "Started kubelet"
Feb 9 07:53:50.054278 kubelet[2128]: I0209 07:53:50.054269 2128 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 07:53:50.054432 kubelet[2128]: E0209 07:53:50.054424 2128 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 07:53:50.054468 kubelet[2128]: E0209 07:53:50.054435 2128 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 07:53:50.054468 kubelet[2128]: E0209 07:53:50.054414 2128 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-d9875e643b.17b2229487b64224", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-d9875e643b", UID:"ci-3510.3.2-a-d9875e643b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-d9875e643b"}, FirstTimestamp:time.Date(2024, time.February, 9, 7, 53, 50, 54212132, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 7, 53, 50, 54212132, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://139.178.90.113:6443/api/v1/namespaces/default/events": dial tcp 139.178.90.113:6443: connect: connection refused'(may retry after sleeping)
Feb 9 07:53:50.054756 kubelet[2128]: I0209 07:53:50.054750 2128 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 07:53:50.064088 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 07:53:50.064122 kubelet[2128]: I0209 07:53:50.064091 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 07:53:50.064202 kubelet[2128]: I0209 07:53:50.064168 2128 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 07:53:50.064246 kubelet[2128]: I0209 07:53:50.064204 2128 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 07:53:50.064335 kubelet[2128]: E0209 07:53:50.064321 2128 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d9875e643b?timeout=10s": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.064378 kubelet[2128]: W0209 07:53:50.064329 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.90.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.064378 kubelet[2128]: E0209 07:53:50.064352 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.90.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.083454 kubelet[2128]: I0209 07:53:50.083440 2128 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 07:53:50.094547 kubelet[2128]: I0209 07:53:50.094508 2128 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 07:53:50.094547 kubelet[2128]: I0209 07:53:50.094521 2128 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 07:53:50.094547 kubelet[2128]: I0209 07:53:50.094532 2128 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 07:53:50.094646 kubelet[2128]: E0209 07:53:50.094566 2128 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 07:53:50.094751 kubelet[2128]: W0209 07:53:50.094736 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.094793 kubelet[2128]: E0209 07:53:50.094758 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.101818 kubelet[2128]: I0209 07:53:50.101806 2128 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 07:53:50.101818 kubelet[2128]: I0209 07:53:50.101816 2128 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 07:53:50.101885 kubelet[2128]: I0209 07:53:50.101824 2128 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 07:53:50.102607 kubelet[2128]: I0209 07:53:50.102600 2128 policy_none.go:49] "None policy: Start"
Feb 9 07:53:50.102826 kubelet[2128]: I0209 07:53:50.102817 2128 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 07:53:50.102871 kubelet[2128]: I0209 07:53:50.102830 2128 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 07:53:50.105470 systemd[1]: Created slice kubepods.slice.
Feb 9 07:53:50.107904 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 07:53:50.133376 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 07:53:50.134631 kubelet[2128]: I0209 07:53:50.134611 2128 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 07:53:50.134829 kubelet[2128]: I0209 07:53:50.134812 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 07:53:50.135243 kubelet[2128]: E0209 07:53:50.135222 2128 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-d9875e643b\" not found"
Feb 9 07:53:50.168058 kubelet[2128]: I0209 07:53:50.168005 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d9875e643b"
Feb 9 07:53:50.168832 kubelet[2128]: E0209 07:53:50.168747 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-d9875e643b"
Feb 9 07:53:50.195117 kubelet[2128]: I0209 07:53:50.195019 2128 topology_manager.go:210] "Topology Admit Handler"
Feb 9 07:53:50.198728 kubelet[2128]: I0209 07:53:50.198647 2128 topology_manager.go:210] "Topology Admit Handler"
Feb 9 07:53:50.202016 kubelet[2128]: I0209 07:53:50.201968 2128 topology_manager.go:210] "Topology Admit Handler"
Feb 9 07:53:50.202591 kubelet[2128]: I0209 07:53:50.202519 2128 status_manager.go:698] "Failed to get status for pod" podUID=05a4533a3438fdcb9206e7889ed2d4ed pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" err="Get \"https://139.178.90.113:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-d9875e643b\": dial tcp 139.178.90.113:6443: connect: connection refused"
Feb 9 07:53:50.206346 kubelet[2128]: I0209 07:53:50.206262 2128 status_manager.go:698] "Failed to get status for pod" podUID=2b8f0d28b919f4f93b2729d07401a4e4 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" err="Get \"https://139.178.90.113:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-d9875e643b\": dial tcp 139.178.90.113:6443: connect: connection refused"
Feb 9 07:53:50.209565 kubelet[2128]: I0209 07:53:50.209502 2128 status_manager.go:698] "Failed to get status for pod" podUID=75382b07cf6b7e51ba52f8d4c702e4c9 pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" err="Get \"https://139.178.90.113:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-d9875e643b\": dial tcp 139.178.90.113:6443: connect: connection refused"
Feb 9 07:53:50.214242 systemd[1]: Created slice kubepods-burstable-pod05a4533a3438fdcb9206e7889ed2d4ed.slice.
Feb 9 07:53:50.249896 systemd[1]: Created slice kubepods-burstable-pod2b8f0d28b919f4f93b2729d07401a4e4.slice.
Feb 9 07:53:50.265634 kubelet[2128]: E0209 07:53:50.265538 2128 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d9875e643b?timeout=10s": dial tcp 139.178.90.113:6443: connect: connection refused
Feb 9 07:53:50.271243 systemd[1]: Created slice kubepods-burstable-pod75382b07cf6b7e51ba52f8d4c702e4c9.slice.
Feb 9 07:53:50.365397 kubelet[2128]: I0209 07:53:50.365294 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.365725 kubelet[2128]: I0209 07:53:50.365425 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.365725 kubelet[2128]: I0209 07:53:50.365519 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.365725 kubelet[2128]: I0209 07:53:50.365638 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.366048 kubelet[2128]: I0209 07:53:50.365739 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.366048 kubelet[2128]: I0209 07:53:50.365846 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.366048 kubelet[2128]: I0209 07:53:50.365908 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75382b07cf6b7e51ba52f8d4c702e4c9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d9875e643b\" (UID: \"75382b07cf6b7e51ba52f8d4c702e4c9\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.366048 kubelet[2128]: I0209 07:53:50.365964 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.366399 kubelet[2128]: I0209 07:53:50.366077 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.372869 kubelet[2128]: I0209 07:53:50.372786 2128 kubelet_node_status.go:70] "Attempting to 
register node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.373454 kubelet[2128]: E0209 07:53:50.373379 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.544814 env[1474]: time="2024-02-09T07:53:50.544673914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d9875e643b,Uid:05a4533a3438fdcb9206e7889ed2d4ed,Namespace:kube-system,Attempt:0,}" Feb 9 07:53:50.567310 env[1474]: time="2024-02-09T07:53:50.567177004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d9875e643b,Uid:2b8f0d28b919f4f93b2729d07401a4e4,Namespace:kube-system,Attempt:0,}" Feb 9 07:53:50.576414 env[1474]: time="2024-02-09T07:53:50.576305022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d9875e643b,Uid:75382b07cf6b7e51ba52f8d4c702e4c9,Namespace:kube-system,Attempt:0,}" Feb 9 07:53:50.667134 kubelet[2128]: E0209 07:53:50.666912 2128 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://139.178.90.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-d9875e643b?timeout=10s": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:50.777742 kubelet[2128]: I0209 07:53:50.777650 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.778317 kubelet[2128]: E0209 07:53:50.778282 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.90.113:6443/api/v1/nodes\": dial tcp 139.178.90.113:6443: connect: connection refused" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:50.879853 kubelet[2128]: W0209 07:53:50.879696 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:50.879853 kubelet[2128]: E0209 07:53:50.879818 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.90.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:51.078775 kubelet[2128]: W0209 07:53:51.078635 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d9875e643b&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:51.078775 kubelet[2128]: E0209 07:53:51.078755 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.90.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-d9875e643b&limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:51.114515 kubelet[2128]: W0209 07:53:51.114398 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:51.114515 kubelet[2128]: E0209 07:53:51.114488 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.90.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.90.113:6443: connect: connection refused Feb 9 07:53:51.155165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002575532.mount: Deactivated successfully. 
Feb 9 07:53:51.156697 env[1474]: time="2024-02-09T07:53:51.156646366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.157777 env[1474]: time="2024-02-09T07:53:51.157720625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.158663 env[1474]: time="2024-02-09T07:53:51.158606577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.159311 env[1474]: time="2024-02-09T07:53:51.159271718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.159734 env[1474]: time="2024-02-09T07:53:51.159699069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.160469 env[1474]: time="2024-02-09T07:53:51.160434573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.161783 env[1474]: time="2024-02-09T07:53:51.161731759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.162961 env[1474]: time="2024-02-09T07:53:51.162910386Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.163776 env[1474]: time="2024-02-09T07:53:51.163736393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.164172 env[1474]: time="2024-02-09T07:53:51.164131087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.164507 env[1474]: time="2024-02-09T07:53:51.164472392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.166399 env[1474]: time="2024-02-09T07:53:51.166340540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:53:51.168290 env[1474]: time="2024-02-09T07:53:51.168260546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:53:51.168290 env[1474]: time="2024-02-09T07:53:51.168282139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:53:51.168384 env[1474]: time="2024-02-09T07:53:51.168289158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:53:51.168384 env[1474]: time="2024-02-09T07:53:51.168349547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5276e31e6a1a942bc10c5bfbdb5f8fd98b9fe7e678c13eb276c1a7c2c100dc8a pid=2215 runtime=io.containerd.runc.v2 Feb 9 07:53:51.173632 env[1474]: time="2024-02-09T07:53:51.173596313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:53:51.173632 env[1474]: time="2024-02-09T07:53:51.173618675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:53:51.173730 env[1474]: time="2024-02-09T07:53:51.173629479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:53:51.173730 env[1474]: time="2024-02-09T07:53:51.173693665Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a200bc69b49c39098652d29303421d9b9b3011f6a5a40338ab837878848ee0bf pid=2242 runtime=io.containerd.runc.v2 Feb 9 07:53:51.174098 env[1474]: time="2024-02-09T07:53:51.174077304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:53:51.174098 env[1474]: time="2024-02-09T07:53:51.174093791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:53:51.174161 env[1474]: time="2024-02-09T07:53:51.174100814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:53:51.174181 env[1474]: time="2024-02-09T07:53:51.174170162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe027d6436fb03e2f34a031f6941756d99aa376bdd31be43ec7d6cd54a5aff9e pid=2249 runtime=io.containerd.runc.v2 Feb 9 07:53:51.187068 systemd[1]: Started cri-containerd-5276e31e6a1a942bc10c5bfbdb5f8fd98b9fe7e678c13eb276c1a7c2c100dc8a.scope. Feb 9 07:53:51.192549 systemd[1]: Started cri-containerd-a200bc69b49c39098652d29303421d9b9b3011f6a5a40338ab837878848ee0bf.scope. Feb 9 07:53:51.193384 systemd[1]: Started cri-containerd-fe027d6436fb03e2f34a031f6941756d99aa376bdd31be43ec7d6cd54a5aff9e.scope. Feb 9 07:53:51.215564 env[1474]: time="2024-02-09T07:53:51.215532919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-d9875e643b,Uid:2b8f0d28b919f4f93b2729d07401a4e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a200bc69b49c39098652d29303421d9b9b3011f6a5a40338ab837878848ee0bf\"" Feb 9 07:53:51.215564 env[1474]: time="2024-02-09T07:53:51.215533461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-d9875e643b,Uid:75382b07cf6b7e51ba52f8d4c702e4c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe027d6436fb03e2f34a031f6941756d99aa376bdd31be43ec7d6cd54a5aff9e\"" Feb 9 07:53:51.217306 env[1474]: time="2024-02-09T07:53:51.217293805Z" level=info msg="CreateContainer within sandbox \"fe027d6436fb03e2f34a031f6941756d99aa376bdd31be43ec7d6cd54a5aff9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 07:53:51.217361 env[1474]: time="2024-02-09T07:53:51.217348784Z" level=info msg="CreateContainer within sandbox \"a200bc69b49c39098652d29303421d9b9b3011f6a5a40338ab837878848ee0bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 07:53:51.222900 env[1474]: time="2024-02-09T07:53:51.222884481Z" level=info 
msg="CreateContainer within sandbox \"fe027d6436fb03e2f34a031f6941756d99aa376bdd31be43ec7d6cd54a5aff9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0fec6197a727344845290b058a4e4a947631a122542e8507e8da93bf5478a9cd\"" Feb 9 07:53:51.223102 env[1474]: time="2024-02-09T07:53:51.223088645Z" level=info msg="StartContainer for \"0fec6197a727344845290b058a4e4a947631a122542e8507e8da93bf5478a9cd\"" Feb 9 07:53:51.223697 env[1474]: time="2024-02-09T07:53:51.223677630Z" level=info msg="CreateContainer within sandbox \"a200bc69b49c39098652d29303421d9b9b3011f6a5a40338ab837878848ee0bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e511f9b174dfdc6c16e08c6710d144b6b73847d8b25dd2dced0176666b29517b\"" Feb 9 07:53:51.223788 env[1474]: time="2024-02-09T07:53:51.223773053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-d9875e643b,Uid:05a4533a3438fdcb9206e7889ed2d4ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"5276e31e6a1a942bc10c5bfbdb5f8fd98b9fe7e678c13eb276c1a7c2c100dc8a\"" Feb 9 07:53:51.223837 env[1474]: time="2024-02-09T07:53:51.223822020Z" level=info msg="StartContainer for \"e511f9b174dfdc6c16e08c6710d144b6b73847d8b25dd2dced0176666b29517b\"" Feb 9 07:53:51.224751 env[1474]: time="2024-02-09T07:53:51.224737896Z" level=info msg="CreateContainer within sandbox \"5276e31e6a1a942bc10c5bfbdb5f8fd98b9fe7e678c13eb276c1a7c2c100dc8a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 07:53:51.228928 env[1474]: time="2024-02-09T07:53:51.228861912Z" level=info msg="CreateContainer within sandbox \"5276e31e6a1a942bc10c5bfbdb5f8fd98b9fe7e678c13eb276c1a7c2c100dc8a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f3bc9a32f88b3b456c5f0eda42507fc117823685a32d7e3913be1be938ca418\"" Feb 9 07:53:51.229140 env[1474]: time="2024-02-09T07:53:51.229125375Z" level=info msg="StartContainer for 
\"3f3bc9a32f88b3b456c5f0eda42507fc117823685a32d7e3913be1be938ca418\"" Feb 9 07:53:51.231315 systemd[1]: Started cri-containerd-e511f9b174dfdc6c16e08c6710d144b6b73847d8b25dd2dced0176666b29517b.scope. Feb 9 07:53:51.243180 systemd[1]: Started cri-containerd-0fec6197a727344845290b058a4e4a947631a122542e8507e8da93bf5478a9cd.scope. Feb 9 07:53:51.250296 systemd[1]: Started cri-containerd-3f3bc9a32f88b3b456c5f0eda42507fc117823685a32d7e3913be1be938ca418.scope. Feb 9 07:53:51.267500 env[1474]: time="2024-02-09T07:53:51.267475245Z" level=info msg="StartContainer for \"0fec6197a727344845290b058a4e4a947631a122542e8507e8da93bf5478a9cd\" returns successfully" Feb 9 07:53:51.267620 env[1474]: time="2024-02-09T07:53:51.267603674Z" level=info msg="StartContainer for \"e511f9b174dfdc6c16e08c6710d144b6b73847d8b25dd2dced0176666b29517b\" returns successfully" Feb 9 07:53:51.274828 env[1474]: time="2024-02-09T07:53:51.274774167Z" level=info msg="StartContainer for \"3f3bc9a32f88b3b456c5f0eda42507fc117823685a32d7e3913be1be938ca418\" returns successfully" Feb 9 07:53:51.579476 kubelet[2128]: I0209 07:53:51.579465 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:52.185733 kubelet[2128]: E0209 07:53:52.185712 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-d9875e643b\" not found" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:52.285401 kubelet[2128]: I0209 07:53:52.285317 2128 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:52.310356 kubelet[2128]: E0209 07:53:52.310294 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.411341 kubelet[2128]: E0209 07:53:52.411248 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.512434 kubelet[2128]: 
E0209 07:53:52.512214 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.613255 kubelet[2128]: E0209 07:53:52.613143 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.714065 kubelet[2128]: E0209 07:53:52.714011 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.815111 kubelet[2128]: E0209 07:53:52.814912 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:52.915805 kubelet[2128]: E0209 07:53:52.915698 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:53.016641 kubelet[2128]: E0209 07:53:53.016523 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-d9875e643b\" not found" Feb 9 07:53:54.054763 kubelet[2128]: I0209 07:53:54.054651 2128 apiserver.go:52] "Watching apiserver" Feb 9 07:53:54.265724 kubelet[2128]: I0209 07:53:54.265623 2128 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 07:53:54.292011 kubelet[2128]: I0209 07:53:54.291908 2128 reconciler.go:41] "Reconciler: start to sync state" Feb 9 07:53:54.462593 kubelet[2128]: E0209 07:53:54.462473 2128 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:54.661817 kubelet[2128]: E0209 07:53:54.661718 2128 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:55.565491 systemd[1]: Reloading. 
Feb 9 07:53:55.633943 /usr/lib/systemd/system-generators/torcx-generator[2496]: time="2024-02-09T07:53:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 07:53:55.633977 /usr/lib/systemd/system-generators/torcx-generator[2496]: time="2024-02-09T07:53:55Z" level=info msg="torcx already run" Feb 9 07:53:55.715570 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 07:53:55.715581 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 07:53:55.730999 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 07:53:55.798351 systemd[1]: Stopping kubelet.service... Feb 9 07:53:55.798485 kubelet[2128]: I0209 07:53:55.798353 2128 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 07:53:55.817316 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 07:53:55.817783 systemd[1]: Stopped kubelet.service. Feb 9 07:53:55.821710 systemd[1]: Started kubelet.service. Feb 9 07:53:55.907434 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 07:53:55.907434 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 07:53:55.907788 kubelet[2555]: I0209 07:53:55.907487 2555 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 07:53:55.908825 kubelet[2555]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 07:53:55.908825 kubelet[2555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 07:53:55.911789 kubelet[2555]: I0209 07:53:55.911761 2555 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 07:53:55.911941 kubelet[2555]: I0209 07:53:55.911922 2555 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 07:53:55.912357 kubelet[2555]: I0209 07:53:55.912341 2555 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 07:53:55.913386 kubelet[2555]: I0209 07:53:55.913342 2555 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 07:53:55.913879 kubelet[2555]: I0209 07:53:55.913864 2555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 07:53:55.931101 sudo[2581]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 07:53:55.931242 sudo[2581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 07:53:55.931655 kubelet[2555]: I0209 07:53:55.931619 2555 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 07:53:55.931767 kubelet[2555]: I0209 07:53:55.931731 2555 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 07:53:55.931819 kubelet[2555]: I0209 07:53:55.931781 2555 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 07:53:55.931819 kubelet[2555]: I0209 07:53:55.931797 2555 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 07:53:55.931819 kubelet[2555]: I0209 07:53:55.931809 2555 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 07:53:55.931937 kubelet[2555]: I0209 07:53:55.931836 2555 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 07:53:55.933753 kubelet[2555]: I0209 07:53:55.933714 2555 kubelet.go:398] "Attempting to sync node with API server" Feb 9 07:53:55.933753 kubelet[2555]: I0209 07:53:55.933728 2555 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 07:53:55.933753 kubelet[2555]: I0209 07:53:55.933742 2555 kubelet.go:297] "Adding apiserver pod source" Feb 9 07:53:55.933753 kubelet[2555]: I0209 07:53:55.933754 2555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 07:53:55.934225 kubelet[2555]: I0209 07:53:55.934212 2555 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 07:53:55.934592 kubelet[2555]: I0209 07:53:55.934578 2555 server.go:1186] "Started kubelet" Feb 9 07:53:55.934656 kubelet[2555]: I0209 07:53:55.934625 2555 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 07:53:55.934936 kubelet[2555]: E0209 07:53:55.934924 2555 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 07:53:55.934987 kubelet[2555]: E0209 07:53:55.934942 2555 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 07:53:55.936543 kubelet[2555]: I0209 07:53:55.936533 2555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 07:53:55.936616 kubelet[2555]: I0209 07:53:55.936584 2555 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 07:53:55.936658 kubelet[2555]: I0209 07:53:55.936614 2555 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 07:53:55.936739 kubelet[2555]: I0209 07:53:55.936726 2555 server.go:451] "Adding debug handlers to kubelet server" Feb 9 07:53:55.950733 kubelet[2555]: I0209 07:53:55.950715 2555 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 07:53:55.957593 kubelet[2555]: I0209 07:53:55.957554 2555 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 07:53:55.957593 kubelet[2555]: I0209 07:53:55.957568 2555 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 07:53:55.957593 kubelet[2555]: I0209 07:53:55.957583 2555 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 07:53:55.957714 kubelet[2555]: E0209 07:53:55.957618 2555 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 07:53:55.957855 kubelet[2555]: I0209 07:53:55.957842 2555 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 07:53:55.957855 kubelet[2555]: I0209 07:53:55.957851 2555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 07:53:55.957927 kubelet[2555]: I0209 07:53:55.957861 2555 state_mem.go:36] "Initialized new in-memory state store" Feb 9 07:53:55.957956 kubelet[2555]: I0209 07:53:55.957951 2555 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 07:53:55.957979 kubelet[2555]: I0209 07:53:55.957959 2555 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 07:53:55.957979 kubelet[2555]: 
I0209 07:53:55.957965 2555 policy_none.go:49] "None policy: Start" Feb 9 07:53:55.958260 kubelet[2555]: I0209 07:53:55.958251 2555 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 07:53:55.958260 kubelet[2555]: I0209 07:53:55.958261 2555 state_mem.go:35] "Initializing new in-memory state store" Feb 9 07:53:55.958348 kubelet[2555]: I0209 07:53:55.958332 2555 state_mem.go:75] "Updated machine memory state" Feb 9 07:53:55.960320 kubelet[2555]: I0209 07:53:55.960310 2555 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 07:53:55.960434 kubelet[2555]: I0209 07:53:55.960428 2555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 07:53:56.038524 kubelet[2555]: I0209 07:53:56.038511 2555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.043348 kubelet[2555]: I0209 07:53:56.043337 2555 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.043400 kubelet[2555]: I0209 07:53:56.043375 2555 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.058178 kubelet[2555]: I0209 07:53:56.058133 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:53:56.058178 kubelet[2555]: I0209 07:53:56.058178 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:53:56.058277 kubelet[2555]: I0209 07:53:56.058197 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:53:56.061802 kubelet[2555]: E0209 07:53:56.061744 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.138568 kubelet[2555]: E0209 07:53:56.138546 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" already exists" 
pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238253 kubelet[2555]: I0209 07:53:56.238201 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238253 kubelet[2555]: I0209 07:53:56.238231 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238253 kubelet[2555]: I0209 07:53:56.238246 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238387 kubelet[2555]: I0209 07:53:56.238266 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238387 kubelet[2555]: I0209 07:53:56.238279 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/75382b07cf6b7e51ba52f8d4c702e4c9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-d9875e643b\" (UID: \"75382b07cf6b7e51ba52f8d4c702e4c9\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238387 kubelet[2555]: I0209 07:53:56.238289 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238387 kubelet[2555]: I0209 07:53:56.238323 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05a4533a3438fdcb9206e7889ed2d4ed-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" (UID: \"05a4533a3438fdcb9206e7889ed2d4ed\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238387 kubelet[2555]: I0209 07:53:56.238351 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.238477 kubelet[2555]: I0209 07:53:56.238370 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b8f0d28b919f4f93b2729d07401a4e4-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" (UID: \"2b8f0d28b919f4f93b2729d07401a4e4\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.268758 sudo[2581]: pam_unix(sudo:session): session closed 
for user root Feb 9 07:53:56.338402 kubelet[2555]: E0209 07:53:56.338355 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:56.934618 kubelet[2555]: I0209 07:53:56.934534 2555 apiserver.go:52] "Watching apiserver" Feb 9 07:53:57.037448 kubelet[2555]: I0209 07:53:57.037419 2555 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 07:53:57.043554 kubelet[2555]: I0209 07:53:57.043532 2555 reconciler.go:41] "Reconciler: start to sync state" Feb 9 07:53:57.154415 sudo[1591]: pam_unix(sudo:session): session closed for user root Feb 9 07:53:57.157239 sshd[1588]: pam_unix(sshd:session): session closed for user core Feb 9 07:53:57.162779 systemd[1]: sshd@4-139.178.90.113:22-147.75.109.163:41592.service: Deactivated successfully. Feb 9 07:53:57.164478 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 07:53:57.164888 systemd[1]: session-7.scope: Consumed 2.476s CPU time. Feb 9 07:53:57.166196 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Feb 9 07:53:57.168422 systemd-logind[1462]: Removed session 7. 
Feb 9 07:53:57.343040 kubelet[2555]: E0209 07:53:57.342938 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:57.542905 kubelet[2555]: E0209 07:53:57.542805 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:57.743694 kubelet[2555]: E0209 07:53:57.743471 2555 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-d9875e643b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" Feb 9 07:53:58.346538 kubelet[2555]: I0209 07:53:58.346476 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-d9875e643b" podStartSLOduration=5.346336152 pod.CreationTimestamp="2024-02-09 07:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:53:58.346284059 +0000 UTC m=+2.517596007" watchObservedRunningTime="2024-02-09 07:53:58.346336152 +0000 UTC m=+2.517648086" Feb 9 07:53:58.347509 kubelet[2555]: I0209 07:53:58.346756 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-d9875e643b" podStartSLOduration=5.346691002 pod.CreationTimestamp="2024-02-09 07:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:53:57.953778184 +0000 UTC m=+2.125090075" watchObservedRunningTime="2024-02-09 07:53:58.346691002 +0000 UTC m=+2.518003000" Feb 9 07:53:58.749941 kubelet[2555]: I0209 07:53:58.749761 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-d9875e643b" podStartSLOduration=5.749669991 pod.CreationTimestamp="2024-02-09 07:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:53:58.749667265 +0000 UTC m=+2.920979153" watchObservedRunningTime="2024-02-09 07:53:58.749669991 +0000 UTC m=+2.920981859" Feb 9 07:54:08.356830 update_engine[1464]: I0209 07:54:08.356725 1464 update_attempter.cc:509] Updating boot flags... Feb 9 07:54:08.915620 kubelet[2555]: I0209 07:54:08.915576 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:54:08.916059 kubelet[2555]: I0209 07:54:08.915780 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:54:08.921867 systemd[1]: Created slice kubepods-besteffort-podc621dd3c_d455_4698_bd6d_8d355774d718.slice. Feb 9 07:54:08.932471 kubelet[2555]: I0209 07:54:08.932446 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 07:54:08.949954 systemd[1]: Created slice kubepods-burstable-pod2c922fd4_2685_4c9c_b9ea_0a0c75a91457.slice. Feb 9 07:54:08.951966 systemd[1]: Created slice kubepods-besteffort-pod0bdea72d_3afb_4099_8ccc_d7557aa5e795.slice. Feb 9 07:54:08.976188 kubelet[2555]: I0209 07:54:08.976137 2555 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 07:54:08.976815 env[1474]: time="2024-02-09T07:54:08.976744699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 07:54:08.977489 kubelet[2555]: I0209 07:54:08.977175 2555 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 07:54:09.019944 kubelet[2555]: I0209 07:54:09.019879 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-config-path\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.020295 kubelet[2555]: I0209 07:54:09.019982 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-cgroup\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.020295 kubelet[2555]: I0209 07:54:09.020087 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggp9f\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-kube-api-access-ggp9f\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.020295 kubelet[2555]: I0209 07:54:09.020206 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-run\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.020690 kubelet[2555]: I0209 07:54:09.020319 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbdls\" (UniqueName: \"kubernetes.io/projected/0bdea72d-3afb-4099-8ccc-d7557aa5e795-kube-api-access-fbdls\") pod \"cilium-operator-f59cbd8c6-cv4z9\" (UID: 
\"0bdea72d-3afb-4099-8ccc-d7557aa5e795\") " pod="kube-system/cilium-operator-f59cbd8c6-cv4z9" Feb 9 07:54:09.020690 kubelet[2555]: I0209 07:54:09.020427 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hostproc\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.020690 kubelet[2555]: I0209 07:54:09.020634 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-etc-cni-netd\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021184 kubelet[2555]: I0209 07:54:09.020699 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c621dd3c-d455-4698-bd6d-8d355774d718-xtables-lock\") pod \"kube-proxy-htfb9\" (UID: \"c621dd3c-d455-4698-bd6d-8d355774d718\") " pod="kube-system/kube-proxy-htfb9" Feb 9 07:54:09.021184 kubelet[2555]: I0209 07:54:09.020810 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdea72d-3afb-4099-8ccc-d7557aa5e795-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-cv4z9\" (UID: \"0bdea72d-3afb-4099-8ccc-d7557aa5e795\") " pod="kube-system/cilium-operator-f59cbd8c6-cv4z9" Feb 9 07:54:09.021184 kubelet[2555]: I0209 07:54:09.020964 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-net\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " 
pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021184 kubelet[2555]: I0209 07:54:09.021088 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c621dd3c-d455-4698-bd6d-8d355774d718-kube-proxy\") pod \"kube-proxy-htfb9\" (UID: \"c621dd3c-d455-4698-bd6d-8d355774d718\") " pod="kube-system/kube-proxy-htfb9" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021241 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c621dd3c-d455-4698-bd6d-8d355774d718-lib-modules\") pod \"kube-proxy-htfb9\" (UID: \"c621dd3c-d455-4698-bd6d-8d355774d718\") " pod="kube-system/kube-proxy-htfb9" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021355 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cni-path\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021461 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-xtables-lock\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021601 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-clustermesh-secrets\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021704 2555 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-lib-modules\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.021942 kubelet[2555]: I0209 07:54:09.021762 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-kernel\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.022797 kubelet[2555]: I0209 07:54:09.021814 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hubble-tls\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.022797 kubelet[2555]: I0209 07:54:09.021868 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7tb\" (UniqueName: \"kubernetes.io/projected/c621dd3c-d455-4698-bd6d-8d355774d718-kube-api-access-9s7tb\") pod \"kube-proxy-htfb9\" (UID: \"c621dd3c-d455-4698-bd6d-8d355774d718\") " pod="kube-system/kube-proxy-htfb9" Feb 9 07:54:09.022797 kubelet[2555]: I0209 07:54:09.021958 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-bpf-maps\") pod \"cilium-8jh96\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " pod="kube-system/cilium-8jh96" Feb 9 07:54:09.548968 env[1474]: time="2024-02-09T07:54:09.548939034Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-htfb9,Uid:c621dd3c-d455-4698-bd6d-8d355774d718,Namespace:kube-system,Attempt:0,}" Feb 9 07:54:09.551337 env[1474]: time="2024-02-09T07:54:09.551321703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jh96,Uid:2c922fd4-2685-4c9c-b9ea-0a0c75a91457,Namespace:kube-system,Attempt:0,}" Feb 9 07:54:09.878730 env[1474]: time="2024-02-09T07:54:09.878591887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cv4z9,Uid:0bdea72d-3afb-4099-8ccc-d7557aa5e795,Namespace:kube-system,Attempt:0,}" Feb 9 07:54:09.940396 env[1474]: time="2024-02-09T07:54:09.940323962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:54:09.940396 env[1474]: time="2024-02-09T07:54:09.940360071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:54:09.940396 env[1474]: time="2024-02-09T07:54:09.940389176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:54:09.940515 env[1474]: time="2024-02-09T07:54:09.940454857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/918e6bab029d37030e6af9589f26e11f29f7bc5faa10f16f207d8b6502775c34 pid=2752 runtime=io.containerd.runc.v2 Feb 9 07:54:09.958601 systemd[1]: Started cri-containerd-918e6bab029d37030e6af9589f26e11f29f7bc5faa10f16f207d8b6502775c34.scope. 
Feb 9 07:54:09.991309 env[1474]: time="2024-02-09T07:54:09.991193374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-htfb9,Uid:c621dd3c-d455-4698-bd6d-8d355774d718,Namespace:kube-system,Attempt:0,} returns sandbox id \"918e6bab029d37030e6af9589f26e11f29f7bc5faa10f16f207d8b6502775c34\"" Feb 9 07:54:09.996508 env[1474]: time="2024-02-09T07:54:09.996395217Z" level=info msg="CreateContainer within sandbox \"918e6bab029d37030e6af9589f26e11f29f7bc5faa10f16f207d8b6502775c34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 07:54:10.000029 env[1474]: time="2024-02-09T07:54:09.999846091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:54:10.000029 env[1474]: time="2024-02-09T07:54:09.999966293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:54:10.000029 env[1474]: time="2024-02-09T07:54:10.000007297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:54:10.000631 env[1474]: time="2024-02-09T07:54:10.000506055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02 pid=2792 runtime=io.containerd.runc.v2 Feb 9 07:54:10.041035 systemd[1]: Started cri-containerd-11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02.scope. 
Feb 9 07:54:10.101294 env[1474]: time="2024-02-09T07:54:10.101202500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jh96,Uid:2c922fd4-2685-4c9c-b9ea-0a0c75a91457,Namespace:kube-system,Attempt:0,} returns sandbox id \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\"" Feb 9 07:54:10.104463 env[1474]: time="2024-02-09T07:54:10.104381041Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 07:54:10.352999 env[1474]: time="2024-02-09T07:54:10.352768801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 07:54:10.352999 env[1474]: time="2024-02-09T07:54:10.352864531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 07:54:10.352999 env[1474]: time="2024-02-09T07:54:10.352903369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 07:54:10.367304 env[1474]: time="2024-02-09T07:54:10.353304878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c pid=2833 runtime=io.containerd.runc.v2 Feb 9 07:54:10.379663 env[1474]: time="2024-02-09T07:54:10.379459007Z" level=info msg="CreateContainer within sandbox \"918e6bab029d37030e6af9589f26e11f29f7bc5faa10f16f207d8b6502775c34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dda1f24c5264f4a2e54d5c07dbe2fcdd0a14f8eacb55e6ce0aec58fe3e6b0d17\"" Feb 9 07:54:10.380850 env[1474]: time="2024-02-09T07:54:10.380714964Z" level=info msg="StartContainer for \"dda1f24c5264f4a2e54d5c07dbe2fcdd0a14f8eacb55e6ce0aec58fe3e6b0d17\"" Feb 9 07:54:10.408861 systemd[1]: Started cri-containerd-210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c.scope. Feb 9 07:54:10.416130 systemd[1]: Started cri-containerd-dda1f24c5264f4a2e54d5c07dbe2fcdd0a14f8eacb55e6ce0aec58fe3e6b0d17.scope. 
Feb 9 07:54:10.444002 env[1474]: time="2024-02-09T07:54:10.443938792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-cv4z9,Uid:0bdea72d-3afb-4099-8ccc-d7557aa5e795,Namespace:kube-system,Attempt:0,} returns sandbox id \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\"" Feb 9 07:54:10.447900 env[1474]: time="2024-02-09T07:54:10.447850298Z" level=info msg="StartContainer for \"dda1f24c5264f4a2e54d5c07dbe2fcdd0a14f8eacb55e6ce0aec58fe3e6b0d17\" returns successfully" Feb 9 07:54:11.015121 kubelet[2555]: I0209 07:54:11.015061 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-htfb9" podStartSLOduration=3.01497822 pod.CreationTimestamp="2024-02-09 07:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:11.014440986 +0000 UTC m=+15.185752884" watchObservedRunningTime="2024-02-09 07:54:11.01497822 +0000 UTC m=+15.186290086" Feb 9 07:54:14.156383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797673141.mount: Deactivated successfully. 
Feb 9 07:54:15.857317 env[1474]: time="2024-02-09T07:54:15.857250234Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:54:15.857840 env[1474]: time="2024-02-09T07:54:15.857789533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:54:15.858707 env[1474]: time="2024-02-09T07:54:15.858659109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 07:54:15.859532 env[1474]: time="2024-02-09T07:54:15.859488402Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 07:54:15.860020 env[1474]: time="2024-02-09T07:54:15.859987059Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 07:54:15.860939 env[1474]: time="2024-02-09T07:54:15.860876872Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 07:54:15.865953 env[1474]: time="2024-02-09T07:54:15.865906145Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\"" Feb 9 07:54:15.866322 
env[1474]: time="2024-02-09T07:54:15.866280999Z" level=info msg="StartContainer for \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\"" Feb 9 07:54:15.888511 systemd[1]: Started cri-containerd-0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a.scope. Feb 9 07:54:15.912985 env[1474]: time="2024-02-09T07:54:15.912956846Z" level=info msg="StartContainer for \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\" returns successfully" Feb 9 07:54:15.918128 systemd[1]: cri-containerd-0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a.scope: Deactivated successfully. Feb 9 07:54:16.869359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a-rootfs.mount: Deactivated successfully. Feb 9 07:54:16.999296 env[1474]: time="2024-02-09T07:54:16.999190531Z" level=info msg="shim disconnected" id=0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a Feb 9 07:54:16.999296 env[1474]: time="2024-02-09T07:54:16.999294969Z" level=warning msg="cleaning up after shim disconnected" id=0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a namespace=k8s.io Feb 9 07:54:17.000187 env[1474]: time="2024-02-09T07:54:16.999323143Z" level=info msg="cleaning up dead shim" Feb 9 07:54:17.026641 env[1474]: time="2024-02-09T07:54:17.026516228Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3066 runtime=io.containerd.runc.v2\n" Feb 9 07:54:17.493943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2972635736.mount: Deactivated successfully. 
Feb 9 07:54:18.012262 env[1474]: time="2024-02-09T07:54:18.012180521Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 07:54:18.017388 env[1474]: time="2024-02-09T07:54:18.017340817Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\"" Feb 9 07:54:18.017707 env[1474]: time="2024-02-09T07:54:18.017664000Z" level=info msg="StartContainer for \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\"" Feb 9 07:54:18.018148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676168987.mount: Deactivated successfully. Feb 9 07:54:18.038283 systemd[1]: Started cri-containerd-f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8.scope. Feb 9 07:54:18.063177 env[1474]: time="2024-02-09T07:54:18.063149969Z" level=info msg="StartContainer for \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\" returns successfully" Feb 9 07:54:18.068942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 07:54:18.069090 systemd[1]: Stopped systemd-sysctl.service. Feb 9 07:54:18.069220 systemd[1]: Stopping systemd-sysctl.service... Feb 9 07:54:18.070110 systemd[1]: Starting systemd-sysctl.service... Feb 9 07:54:18.071411 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 07:54:18.071820 systemd[1]: cri-containerd-f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8.scope: Deactivated successfully. Feb 9 07:54:18.074237 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 07:54:18.237371 env[1474]: time="2024-02-09T07:54:18.237270587Z" level=info msg="shim disconnected" id=f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8
Feb 9 07:54:18.237371 env[1474]: time="2024-02-09T07:54:18.237363499Z" level=warning msg="cleaning up after shim disconnected" id=f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8 namespace=k8s.io
Feb 9 07:54:18.237922 env[1474]: time="2024-02-09T07:54:18.237390110Z" level=info msg="cleaning up dead shim"
Feb 9 07:54:18.245866 env[1474]: time="2024-02-09T07:54:18.245815963Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:54:18.246393 env[1474]: time="2024-02-09T07:54:18.246351928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:54:18.247077 env[1474]: time="2024-02-09T07:54:18.247042572Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 07:54:18.248452 env[1474]: time="2024-02-09T07:54:18.248412094Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 07:54:18.249697 env[1474]: time="2024-02-09T07:54:18.249678746Z" level=info msg="CreateContainer within sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 07:54:18.254310 env[1474]: time="2024-02-09T07:54:18.254265171Z" level=info msg="CreateContainer within sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\""
Feb 9 07:54:18.254399 env[1474]: time="2024-02-09T07:54:18.254388010Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128 runtime=io.containerd.runc.v2\n"
Feb 9 07:54:18.254509 env[1474]: time="2024-02-09T07:54:18.254490909Z" level=info msg="StartContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\""
Feb 9 07:54:18.274325 systemd[1]: Started cri-containerd-bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346.scope.
Feb 9 07:54:18.293084 env[1474]: time="2024-02-09T07:54:18.293017920Z" level=info msg="StartContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" returns successfully"
Feb 9 07:54:19.024843 env[1474]: time="2024-02-09T07:54:19.024722721Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 07:54:19.025231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8-rootfs.mount: Deactivated successfully.
Feb 9 07:54:19.036764 kubelet[2555]: I0209 07:54:19.036673 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-cv4z9" podStartSLOduration=-9.22337202581825e+09 pod.CreationTimestamp="2024-02-09 07:54:08 +0000 UTC" firstStartedPulling="2024-02-09 07:54:10.444439975 +0000 UTC m=+14.615751793" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:19.035532659 +0000 UTC m=+23.206844570" watchObservedRunningTime="2024-02-09 07:54:19.036525524 +0000 UTC m=+23.207837394"
Feb 9 07:54:19.047190 env[1474]: time="2024-02-09T07:54:19.047076342Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\""
Feb 9 07:54:19.048183 env[1474]: time="2024-02-09T07:54:19.048088560Z" level=info msg="StartContainer for \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\""
Feb 9 07:54:19.090981 systemd[1]: Started cri-containerd-59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa.scope.
Feb 9 07:54:19.126211 env[1474]: time="2024-02-09T07:54:19.126136761Z" level=info msg="StartContainer for \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\" returns successfully"
Feb 9 07:54:19.126740 systemd[1]: cri-containerd-59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa.scope: Deactivated successfully.
Feb 9 07:54:19.161024 env[1474]: time="2024-02-09T07:54:19.160997988Z" level=info msg="shim disconnected" id=59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa
Feb 9 07:54:19.161024 env[1474]: time="2024-02-09T07:54:19.161024144Z" level=warning msg="cleaning up after shim disconnected" id=59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa namespace=k8s.io
Feb 9 07:54:19.161024 env[1474]: time="2024-02-09T07:54:19.161029807Z" level=info msg="cleaning up dead shim"
Feb 9 07:54:19.164895 env[1474]: time="2024-02-09T07:54:19.164850914Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3232 runtime=io.containerd.runc.v2\n"
Feb 9 07:54:20.020232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa-rootfs.mount: Deactivated successfully.
Feb 9 07:54:20.032289 env[1474]: time="2024-02-09T07:54:20.032208099Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 07:54:20.043124 env[1474]: time="2024-02-09T07:54:20.043075370Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\""
Feb 9 07:54:20.043453 env[1474]: time="2024-02-09T07:54:20.043404747Z" level=info msg="StartContainer for \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\""
Feb 9 07:54:20.052041 systemd[1]: Started cri-containerd-476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520.scope.
Feb 9 07:54:20.063032 systemd[1]: cri-containerd-476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520.scope: Deactivated successfully.
Feb 9 07:54:20.063344 env[1474]: time="2024-02-09T07:54:20.063323512Z" level=info msg="StartContainer for \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\" returns successfully"
Feb 9 07:54:20.130903 env[1474]: time="2024-02-09T07:54:20.130804443Z" level=info msg="shim disconnected" id=476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520
Feb 9 07:54:20.131332 env[1474]: time="2024-02-09T07:54:20.130904160Z" level=warning msg="cleaning up after shim disconnected" id=476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520 namespace=k8s.io
Feb 9 07:54:20.131332 env[1474]: time="2024-02-09T07:54:20.130945159Z" level=info msg="cleaning up dead shim"
Feb 9 07:54:20.146732 env[1474]: time="2024-02-09T07:54:20.146616244Z" level=warning msg="cleanup warnings time=\"2024-02-09T07:54:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3285 runtime=io.containerd.runc.v2\n"
Feb 9 07:54:21.019619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520-rootfs.mount: Deactivated successfully.
Feb 9 07:54:21.030399 env[1474]: time="2024-02-09T07:54:21.030381188Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 07:54:21.052406 env[1474]: time="2024-02-09T07:54:21.052350261Z" level=info msg="CreateContainer within sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\""
Feb 9 07:54:21.052667 env[1474]: time="2024-02-09T07:54:21.052627087Z" level=info msg="StartContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\""
Feb 9 07:54:21.060787 systemd[1]: Started cri-containerd-cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb.scope.
Feb 9 07:54:21.085624 env[1474]: time="2024-02-09T07:54:21.085526425Z" level=info msg="StartContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" returns successfully"
Feb 9 07:54:21.134468 kubelet[2555]: I0209 07:54:21.134448 2555 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 07:54:21.145561 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 07:54:21.147949 kubelet[2555]: I0209 07:54:21.147931 2555 topology_manager.go:210] "Topology Admit Handler"
Feb 9 07:54:21.148200 kubelet[2555]: I0209 07:54:21.148188 2555 topology_manager.go:210] "Topology Admit Handler"
Feb 9 07:54:21.151258 systemd[1]: Created slice kubepods-burstable-pod6d273904_8abd_4164_b526_4c45de3404c4.slice.
Feb 9 07:54:21.154050 systemd[1]: Created slice kubepods-burstable-pod9e3c4205_708f_493f_87fb_d6b95b233fba.slice.
Feb 9 07:54:21.286560 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 07:54:21.304875 kubelet[2555]: I0209 07:54:21.304861 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dl9\" (UniqueName: \"kubernetes.io/projected/6d273904-8abd-4164-b526-4c45de3404c4-kube-api-access-b6dl9\") pod \"coredns-787d4945fb-xjgks\" (UID: \"6d273904-8abd-4164-b526-4c45de3404c4\") " pod="kube-system/coredns-787d4945fb-xjgks"
Feb 9 07:54:21.304940 kubelet[2555]: I0209 07:54:21.304885 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e3c4205-708f-493f-87fb-d6b95b233fba-config-volume\") pod \"coredns-787d4945fb-hnzkr\" (UID: \"9e3c4205-708f-493f-87fb-d6b95b233fba\") " pod="kube-system/coredns-787d4945fb-hnzkr"
Feb 9 07:54:21.304940 kubelet[2555]: I0209 07:54:21.304900 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqzl\" (UniqueName: \"kubernetes.io/projected/9e3c4205-708f-493f-87fb-d6b95b233fba-kube-api-access-7xqzl\") pod \"coredns-787d4945fb-hnzkr\" (UID: \"9e3c4205-708f-493f-87fb-d6b95b233fba\") " pod="kube-system/coredns-787d4945fb-hnzkr"
Feb 9 07:54:21.304940 kubelet[2555]: I0209 07:54:21.304912 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d273904-8abd-4164-b526-4c45de3404c4-config-volume\") pod \"coredns-787d4945fb-xjgks\" (UID: \"6d273904-8abd-4164-b526-4c45de3404c4\") " pod="kube-system/coredns-787d4945fb-xjgks"
Feb 9 07:54:21.454520 env[1474]: time="2024-02-09T07:54:21.454385591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xjgks,Uid:6d273904-8abd-4164-b526-4c45de3404c4,Namespace:kube-system,Attempt:0,}"
Feb 9 07:54:21.456542 env[1474]: time="2024-02-09T07:54:21.456468657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hnzkr,Uid:9e3c4205-708f-493f-87fb-d6b95b233fba,Namespace:kube-system,Attempt:0,}"
Feb 9 07:54:22.041841 kubelet[2555]: I0209 07:54:22.041821 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8jh96" podStartSLOduration=-9.223372022812979e+09 pod.CreationTimestamp="2024-02-09 07:54:08 +0000 UTC" firstStartedPulling="2024-02-09 07:54:10.103379625 +0000 UTC m=+14.274691483" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:22.041514165 +0000 UTC m=+26.212825982" watchObservedRunningTime="2024-02-09 07:54:22.041796731 +0000 UTC m=+26.213108544"
Feb 9 07:54:22.899154 systemd-networkd[1315]: cilium_host: Link UP
Feb 9 07:54:22.899563 systemd-networkd[1315]: cilium_net: Link UP
Feb 9 07:54:22.899579 systemd-networkd[1315]: cilium_net: Gained carrier
Feb 9 07:54:22.900140 systemd-networkd[1315]: cilium_host: Gained carrier
Feb 9 07:54:22.908567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 07:54:22.908707 systemd-networkd[1315]: cilium_host: Gained IPv6LL
Feb 9 07:54:22.950283 systemd-networkd[1315]: cilium_vxlan: Link UP
Feb 9 07:54:22.950286 systemd-networkd[1315]: cilium_vxlan: Gained carrier
Feb 9 07:54:23.155572 kernel: NET: Registered PF_ALG protocol family
Feb 9 07:54:23.205759 systemd-networkd[1315]: cilium_net: Gained IPv6LL
Feb 9 07:54:23.676554 systemd-networkd[1315]: lxc_health: Link UP
Feb 9 07:54:23.702381 systemd-networkd[1315]: lxc_health: Gained carrier
Feb 9 07:54:23.702561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 07:54:24.001152 systemd-networkd[1315]: lxc9ce3bf81b804: Link UP
Feb 9 07:54:24.001250 systemd-networkd[1315]: lxc86d36cbf0fe9: Link UP
Feb 9 07:54:24.042559 kernel: eth0: renamed from tmpd8278
Feb 9 07:54:24.054637 kernel: eth0: renamed from tmpcbf82
Feb 9 07:54:24.082032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 07:54:24.082083 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc86d36cbf0fe9: link becomes ready
Feb 9 07:54:24.082098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 07:54:24.096256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9ce3bf81b804: link becomes ready
Feb 9 07:54:24.096566 systemd-networkd[1315]: lxc86d36cbf0fe9: Gained carrier
Feb 9 07:54:24.096686 systemd-networkd[1315]: lxc9ce3bf81b804: Gained carrier
Feb 9 07:54:24.765721 systemd-networkd[1315]: cilium_vxlan: Gained IPv6LL
Feb 9 07:54:25.278688 systemd-networkd[1315]: lxc_health: Gained IPv6LL
Feb 9 07:54:25.533740 systemd-networkd[1315]: lxc9ce3bf81b804: Gained IPv6LL
Feb 9 07:54:25.597747 systemd-networkd[1315]: lxc86d36cbf0fe9: Gained IPv6LL
Feb 9 07:54:26.394508 env[1474]: time="2024-02-09T07:54:26.394470650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 07:54:26.394508 env[1474]: time="2024-02-09T07:54:26.394491228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 07:54:26.394508 env[1474]: time="2024-02-09T07:54:26.394498459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 07:54:26.394803 env[1474]: time="2024-02-09T07:54:26.394584695Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cbf82debaa166360eb349452350293718aa5ddb8085b9cd156e67c124274b84d pid=3968 runtime=io.containerd.runc.v2
Feb 9 07:54:26.397668 env[1474]: time="2024-02-09T07:54:26.397631546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 07:54:26.397668 env[1474]: time="2024-02-09T07:54:26.397652077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 07:54:26.397668 env[1474]: time="2024-02-09T07:54:26.397660063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 07:54:26.397792 env[1474]: time="2024-02-09T07:54:26.397755038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8278d5c6e1914229b7d250ad0cac2a3bd3952ea8d22fa24dc094776c728a869 pid=3992 runtime=io.containerd.runc.v2
Feb 9 07:54:26.414832 systemd[1]: Started cri-containerd-cbf82debaa166360eb349452350293718aa5ddb8085b9cd156e67c124274b84d.scope.
Feb 9 07:54:26.417445 systemd[1]: Started cri-containerd-d8278d5c6e1914229b7d250ad0cac2a3bd3952ea8d22fa24dc094776c728a869.scope.
Feb 9 07:54:26.451444 env[1474]: time="2024-02-09T07:54:26.451416547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-xjgks,Uid:6d273904-8abd-4164-b526-4c45de3404c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf82debaa166360eb349452350293718aa5ddb8085b9cd156e67c124274b84d\""
Feb 9 07:54:26.452770 env[1474]: time="2024-02-09T07:54:26.452753358Z" level=info msg="CreateContainer within sandbox \"cbf82debaa166360eb349452350293718aa5ddb8085b9cd156e67c124274b84d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 07:54:26.453113 env[1474]: time="2024-02-09T07:54:26.453092349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hnzkr,Uid:9e3c4205-708f-493f-87fb-d6b95b233fba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8278d5c6e1914229b7d250ad0cac2a3bd3952ea8d22fa24dc094776c728a869\""
Feb 9 07:54:26.454298 env[1474]: time="2024-02-09T07:54:26.454282662Z" level=info msg="CreateContainer within sandbox \"d8278d5c6e1914229b7d250ad0cac2a3bd3952ea8d22fa24dc094776c728a869\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 07:54:26.461106 env[1474]: time="2024-02-09T07:54:26.461059017Z" level=info msg="CreateContainer within sandbox \"cbf82debaa166360eb349452350293718aa5ddb8085b9cd156e67c124274b84d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fb5ff05b00dc15f5e62833e04d979a5f78e66a24acd843ec389a1df78314a22\""
Feb 9 07:54:26.461306 env[1474]: time="2024-02-09T07:54:26.461263626Z" level=info msg="StartContainer for \"8fb5ff05b00dc15f5e62833e04d979a5f78e66a24acd843ec389a1df78314a22\""
Feb 9 07:54:26.462056 env[1474]: time="2024-02-09T07:54:26.462006263Z" level=info msg="CreateContainer within sandbox \"d8278d5c6e1914229b7d250ad0cac2a3bd3952ea8d22fa24dc094776c728a869\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0853e2b97be6b2a491ab24449ed1391566c12aa876cd40fd10f25d9a95eb1a05\""
Feb 9 07:54:26.462264 env[1474]: time="2024-02-09T07:54:26.462216593Z" level=info msg="StartContainer for \"0853e2b97be6b2a491ab24449ed1391566c12aa876cd40fd10f25d9a95eb1a05\""
Feb 9 07:54:26.483880 systemd[1]: Started cri-containerd-0853e2b97be6b2a491ab24449ed1391566c12aa876cd40fd10f25d9a95eb1a05.scope.
Feb 9 07:54:26.493922 systemd[1]: Started cri-containerd-8fb5ff05b00dc15f5e62833e04d979a5f78e66a24acd843ec389a1df78314a22.scope.
Feb 9 07:54:26.498678 env[1474]: time="2024-02-09T07:54:26.498647964Z" level=info msg="StartContainer for \"0853e2b97be6b2a491ab24449ed1391566c12aa876cd40fd10f25d9a95eb1a05\" returns successfully"
Feb 9 07:54:26.521944 env[1474]: time="2024-02-09T07:54:26.521913465Z" level=info msg="StartContainer for \"8fb5ff05b00dc15f5e62833e04d979a5f78e66a24acd843ec389a1df78314a22\" returns successfully"
Feb 9 07:54:27.061673 kubelet[2555]: I0209 07:54:27.061616 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-hnzkr" podStartSLOduration=19.061504584 pod.CreationTimestamp="2024-02-09 07:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:27.060138613 +0000 UTC m=+31.231450518" watchObservedRunningTime="2024-02-09 07:54:27.061504584 +0000 UTC m=+31.232816444"
Feb 9 07:54:27.073437 kubelet[2555]: I0209 07:54:27.073409 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-xjgks" podStartSLOduration=19.073371256 pod.CreationTimestamp="2024-02-09 07:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 07:54:27.073256071 +0000 UTC m=+31.244567887" watchObservedRunningTime="2024-02-09 07:54:27.073371256 +0000 UTC m=+31.244683070"
Feb 9 07:54:37.285353 kubelet[2555]: I0209 07:54:37.285233 2555 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 9 07:54:42.509766 systemd[1]: Started sshd@5-139.178.90.113:22-218.92.0.25:57549.service.
Feb 9 07:54:43.458448 sshd[4204]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:54:45.451811 sshd[4204]: Failed password for root from 218.92.0.25 port 57549 ssh2
Feb 9 07:54:47.652830 systemd[1]: Started sshd@6-139.178.90.113:22-218.92.0.28:5504.service.
Feb 9 07:54:47.817833 sshd[4208]: Unable to negotiate with 218.92.0.28 port 5504: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Feb 9 07:54:47.818574 systemd[1]: sshd@6-139.178.90.113:22-218.92.0.28:5504.service: Deactivated successfully.
Feb 9 07:54:48.013080 sshd[4204]: Failed password for root from 218.92.0.25 port 57549 ssh2
Feb 9 07:54:51.243068 sshd[4204]: Failed password for root from 218.92.0.25 port 57549 ssh2
Feb 9 07:54:51.910140 sshd[4204]: Received disconnect from 218.92.0.25 port 57549:11: [preauth]
Feb 9 07:54:51.910140 sshd[4204]: Disconnected from authenticating user root 218.92.0.25 port 57549 [preauth]
Feb 9 07:54:51.910684 sshd[4204]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:54:51.912692 systemd[1]: sshd@5-139.178.90.113:22-218.92.0.25:57549.service: Deactivated successfully.
Feb 9 07:54:52.043937 systemd[1]: Started sshd@7-139.178.90.113:22-218.92.0.25:55570.service.
Feb 9 07:54:52.940332 sshd[4214]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:54:54.502806 sshd[4214]: Failed password for root from 218.92.0.25 port 55570 ssh2
Feb 9 07:54:55.750361 sshd[4214]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:54:58.060111 sshd[4214]: Failed password for root from 218.92.0.25 port 55570 ssh2
Feb 9 07:54:59.946753 sshd[4214]: Failed password for root from 218.92.0.25 port 55570 ssh2
Feb 9 07:55:01.369127 sshd[4214]: Received disconnect from 218.92.0.25 port 55570:11: [preauth]
Feb 9 07:55:01.369127 sshd[4214]: Disconnected from authenticating user root 218.92.0.25 port 55570 [preauth]
Feb 9 07:55:01.369681 sshd[4214]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:55:01.371773 systemd[1]: sshd@7-139.178.90.113:22-218.92.0.25:55570.service: Deactivated successfully.
Feb 9 07:55:01.539724 systemd[1]: Started sshd@8-139.178.90.113:22-218.92.0.25:55895.service.
Feb 9 07:55:02.518730 sshd[4221]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:55:04.788567 sshd[4221]: Failed password for root from 218.92.0.25 port 55895 ssh2
Feb 9 07:55:07.355424 sshd[4221]: Failed password for root from 218.92.0.25 port 55895 ssh2
Feb 9 07:55:10.591793 sshd[4221]: Failed password for root from 218.92.0.25 port 55895 ssh2
Feb 9 07:55:10.988638 sshd[4221]: Received disconnect from 218.92.0.25 port 55895:11: [preauth]
Feb 9 07:55:10.988638 sshd[4221]: Disconnected from authenticating user root 218.92.0.25 port 55895 [preauth]
Feb 9 07:55:10.989085 sshd[4221]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.25 user=root
Feb 9 07:55:10.991168 systemd[1]: sshd@8-139.178.90.113:22-218.92.0.25:55895.service: Deactivated successfully.
Feb 9 07:59:34.676200 systemd[1]: Started sshd@9-139.178.90.113:22-218.92.0.76:26991.service.
Feb 9 07:59:35.655432 sshd[4263]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 07:59:37.066871 sshd[4263]: Failed password for root from 218.92.0.76 port 26991 ssh2
Feb 9 07:59:40.639195 sshd[4263]: Failed password for root from 218.92.0.76 port 26991 ssh2
Feb 9 07:59:43.207140 sshd[4263]: Failed password for root from 218.92.0.76 port 26991 ssh2
Feb 9 07:59:44.125277 sshd[4263]: Received disconnect from 218.92.0.76 port 26991:11: [preauth]
Feb 9 07:59:44.125277 sshd[4263]: Disconnected from authenticating user root 218.92.0.76 port 26991 [preauth]
Feb 9 07:59:44.125452 sshd[4263]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 07:59:44.126126 systemd[1]: sshd@9-139.178.90.113:22-218.92.0.76:26991.service: Deactivated successfully.
Feb 9 07:59:44.277429 systemd[1]: Started sshd@10-139.178.90.113:22-218.92.0.76:26036.service.
Feb 9 07:59:45.665041 sshd[4271]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 07:59:47.783800 sshd[4271]: Failed password for root from 218.92.0.76 port 26036 ssh2
Feb 9 07:59:48.486524 sshd[4271]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Feb 9 07:59:50.350144 sshd[4271]: Failed password for root from 218.92.0.76 port 26036 ssh2
Feb 9 07:59:53.250950 sshd[4271]: Failed password for root from 218.92.0.76 port 26036 ssh2
Feb 9 07:59:54.127705 sshd[4271]: Received disconnect from 218.92.0.76 port 26036:11: [preauth]
Feb 9 07:59:54.127705 sshd[4271]: Disconnected from authenticating user root 218.92.0.76 port 26036 [preauth]
Feb 9 07:59:54.127900 sshd[4271]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 07:59:54.128589 systemd[1]: sshd@10-139.178.90.113:22-218.92.0.76:26036.service: Deactivated successfully.
Feb 9 07:59:54.300597 systemd[1]: Started sshd@11-139.178.90.113:22-218.92.0.76:30313.service.
Feb 9 07:59:55.318379 sshd[4275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 07:59:57.478234 sshd[4275]: Failed password for root from 218.92.0.76 port 30313 ssh2
Feb 9 08:00:00.050456 sshd[4275]: Failed password for root from 218.92.0.76 port 30313 ssh2
Feb 9 08:00:03.155786 sshd[4275]: Failed password for root from 218.92.0.76 port 30313 ssh2
Feb 9 08:00:03.813159 sshd[4275]: Received disconnect from 218.92.0.76 port 30313:11: [preauth]
Feb 9 08:00:03.813159 sshd[4275]: Disconnected from authenticating user root 218.92.0.76 port 30313 [preauth]
Feb 9 08:00:03.813701 sshd[4275]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.76 user=root
Feb 9 08:00:03.815759 systemd[1]: sshd@11-139.178.90.113:22-218.92.0.76:30313.service: Deactivated successfully.
Feb 9 08:00:40.675840 systemd[1]: Started sshd@12-139.178.90.113:22-147.75.109.163:33482.service.
Feb 9 08:00:40.708402 sshd[4285]: Accepted publickey for core from 147.75.109.163 port 33482 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:00:40.709299 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:00:40.712371 systemd-logind[1462]: New session 8 of user core.
Feb 9 08:00:40.713000 systemd[1]: Started session-8.scope.
Feb 9 08:00:40.806984 sshd[4285]: pam_unix(sshd:session): session closed for user core
Feb 9 08:00:40.808307 systemd[1]: sshd@12-139.178.90.113:22-147.75.109.163:33482.service: Deactivated successfully.
Feb 9 08:00:40.808728 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 08:00:40.809147 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit.
Feb 9 08:00:40.809554 systemd-logind[1462]: Removed session 8.
Feb 9 08:00:45.816054 systemd[1]: Started sshd@13-139.178.90.113:22-147.75.109.163:33812.service.
Feb 9 08:00:45.848840 sshd[4319]: Accepted publickey for core from 147.75.109.163 port 33812 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:00:45.849735 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:00:45.852941 systemd-logind[1462]: New session 9 of user core.
Feb 9 08:00:45.853576 systemd[1]: Started session-9.scope.
Feb 9 08:00:45.942215 sshd[4319]: pam_unix(sshd:session): session closed for user core
Feb 9 08:00:45.943631 systemd[1]: sshd@13-139.178.90.113:22-147.75.109.163:33812.service: Deactivated successfully.
Feb 9 08:00:45.944051 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 08:00:45.944412 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit.
Feb 9 08:00:45.945074 systemd-logind[1462]: Removed session 9.
Feb 9 08:00:50.951761 systemd[1]: Started sshd@14-139.178.90.113:22-147.75.109.163:33822.service.
Feb 9 08:00:50.991698 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 33822 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:00:50.992438 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:00:50.994953 systemd-logind[1462]: New session 10 of user core.
Feb 9 08:00:50.995498 systemd[1]: Started session-10.scope.
Feb 9 08:00:51.079562 sshd[4345]: pam_unix(sshd:session): session closed for user core
Feb 9 08:00:51.081111 systemd[1]: sshd@14-139.178.90.113:22-147.75.109.163:33822.service: Deactivated successfully.
Feb 9 08:00:51.081630 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 08:00:51.082110 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
Feb 9 08:00:51.082703 systemd-logind[1462]: Removed session 10.
Feb 9 08:00:56.088907 systemd[1]: Started sshd@15-139.178.90.113:22-147.75.109.163:34820.service.
Feb 9 08:00:56.121499 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 34820 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:00:56.122473 sshd[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:00:56.125622 systemd-logind[1462]: New session 11 of user core.
Feb 9 08:00:56.126412 systemd[1]: Started session-11.scope.
Feb 9 08:00:56.254878 sshd[4373]: pam_unix(sshd:session): session closed for user core
Feb 9 08:00:56.256386 systemd[1]: sshd@15-139.178.90.113:22-147.75.109.163:34820.service: Deactivated successfully.
Feb 9 08:00:56.256831 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 08:00:56.257256 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
Feb 9 08:00:56.257845 systemd-logind[1462]: Removed session 11.
Feb 9 08:01:01.264890 systemd[1]: Started sshd@16-139.178.90.113:22-147.75.109.163:34824.service.
Feb 9 08:01:01.297574 sshd[4399]: Accepted publickey for core from 147.75.109.163 port 34824 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:01:01.298377 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:01:01.301229 systemd-logind[1462]: New session 12 of user core.
Feb 9 08:01:01.301945 systemd[1]: Started session-12.scope.
Feb 9 08:01:01.346889 update_engine[1464]: I0209 08:01:01.346768 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 9 08:01:01.346889 update_engine[1464]: I0209 08:01:01.346850 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 9 08:01:01.348661 update_engine[1464]: I0209 08:01:01.348577 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 9 08:01:01.349505 update_engine[1464]: I0209 08:01:01.349427 1464 omaha_request_params.cc:62] Current group set to lts
Feb 9 08:01:01.349806 update_engine[1464]: I0209 08:01:01.349737 1464 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 9 08:01:01.349806 update_engine[1464]: I0209 08:01:01.349757 1464 update_attempter.cc:643] Scheduling an action processor start.
Feb 9 08:01:01.349806 update_engine[1464]: I0209 08:01:01.349792 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 08:01:01.350239 update_engine[1464]: I0209 08:01:01.349877 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 08:01:01.350239 update_engine[1464]: I0209 08:01:01.350023 1464 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 08:01:01.350239 update_engine[1464]: I0209 08:01:01.350040 1464 omaha_request_action.cc:271] Request: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: Feb 9 08:01:01.350239 update_engine[1464]: I0209 08:01:01.350050 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:01:01.351429 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 08:01:01.353131 update_engine[1464]: I0209 08:01:01.353046 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:01:01.353336 update_engine[1464]: E0209 08:01:01.353303 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:01:01.353487 update_engine[1464]: I0209 08:01:01.353462 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 08:01:01.391589 sshd[4399]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:01.393321 systemd[1]: sshd@16-139.178.90.113:22-147.75.109.163:34824.service: Deactivated successfully. Feb 9 08:01:01.393661 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 08:01:01.394055 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. 
Feb 9 08:01:01.394612 systemd[1]: Started sshd@17-139.178.90.113:22-147.75.109.163:34832.service. Feb 9 08:01:01.395041 systemd-logind[1462]: Removed session 12. Feb 9 08:01:01.428392 sshd[4425]: Accepted publickey for core from 147.75.109.163 port 34832 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:01.431420 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:01.441069 systemd-logind[1462]: New session 13 of user core. Feb 9 08:01:01.443315 systemd[1]: Started session-13.scope. Feb 9 08:01:01.987002 sshd[4425]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:01.988918 systemd[1]: sshd@17-139.178.90.113:22-147.75.109.163:34832.service: Deactivated successfully. Feb 9 08:01:01.989296 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 08:01:01.989657 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Feb 9 08:01:01.990293 systemd[1]: Started sshd@18-139.178.90.113:22-147.75.109.163:34844.service. Feb 9 08:01:01.990843 systemd-logind[1462]: Removed session 13. Feb 9 08:01:02.023971 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 34844 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:02.024785 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:02.027376 systemd-logind[1462]: New session 14 of user core. Feb 9 08:01:02.027890 systemd[1]: Started session-14.scope. Feb 9 08:01:02.155944 sshd[4449]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:02.157779 systemd[1]: sshd@18-139.178.90.113:22-147.75.109.163:34844.service: Deactivated successfully. Feb 9 08:01:02.158308 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 08:01:02.158787 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Feb 9 08:01:02.159396 systemd-logind[1462]: Removed session 14. 
Feb 9 08:01:07.161975 systemd[1]: Started sshd@19-139.178.90.113:22-147.75.109.163:48300.service. Feb 9 08:01:07.197838 sshd[4475]: Accepted publickey for core from 147.75.109.163 port 48300 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:07.198732 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:07.201919 systemd-logind[1462]: New session 15 of user core. Feb 9 08:01:07.202544 systemd[1]: Started session-15.scope. Feb 9 08:01:07.289726 sshd[4475]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:07.291246 systemd[1]: sshd@19-139.178.90.113:22-147.75.109.163:48300.service: Deactivated successfully. Feb 9 08:01:07.291677 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 08:01:07.292125 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Feb 9 08:01:07.292646 systemd-logind[1462]: Removed session 15. Feb 9 08:01:11.351039 update_engine[1464]: I0209 08:01:11.350914 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:01:11.351936 update_engine[1464]: I0209 08:01:11.351401 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:01:11.351936 update_engine[1464]: E0209 08:01:11.351634 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:01:11.351936 update_engine[1464]: I0209 08:01:11.351800 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 08:01:12.300857 systemd[1]: Started sshd@20-139.178.90.113:22-147.75.109.163:48310.service. Feb 9 08:01:12.339942 sshd[4502]: Accepted publickey for core from 147.75.109.163 port 48310 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:12.340708 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:12.343254 systemd-logind[1462]: New session 16 of user core. Feb 9 08:01:12.343870 systemd[1]: Started session-16.scope. 
Feb 9 08:01:12.433813 sshd[4502]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:12.435295 systemd[1]: sshd@20-139.178.90.113:22-147.75.109.163:48310.service: Deactivated successfully. Feb 9 08:01:12.435751 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 08:01:12.436132 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Feb 9 08:01:12.436546 systemd-logind[1462]: Removed session 16. Feb 9 08:01:17.443590 systemd[1]: Started sshd@21-139.178.90.113:22-147.75.109.163:54792.service. Feb 9 08:01:17.476440 sshd[4528]: Accepted publickey for core from 147.75.109.163 port 54792 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:17.477360 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:17.480434 systemd-logind[1462]: New session 17 of user core. Feb 9 08:01:17.481178 systemd[1]: Started session-17.scope. Feb 9 08:01:17.570248 sshd[4528]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:17.571958 systemd[1]: sshd@21-139.178.90.113:22-147.75.109.163:54792.service: Deactivated successfully. Feb 9 08:01:17.572451 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 08:01:17.572924 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Feb 9 08:01:17.573431 systemd-logind[1462]: Removed session 17. Feb 9 08:01:19.766650 systemd[1]: Started sshd@22-139.178.90.113:22-170.64.196.239:51482.service. Feb 9 08:01:19.921547 sshd[4552]: kex_exchange_identification: Connection closed by remote host Feb 9 08:01:19.921547 sshd[4552]: Connection closed by 170.64.196.239 port 51482 Feb 9 08:01:19.923105 systemd[1]: sshd@22-139.178.90.113:22-170.64.196.239:51482.service: Deactivated successfully. 
Feb 9 08:01:21.349944 update_engine[1464]: I0209 08:01:21.349826 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:01:21.350774 update_engine[1464]: I0209 08:01:21.350301 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:01:21.350774 update_engine[1464]: E0209 08:01:21.350499 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:01:21.350774 update_engine[1464]: I0209 08:01:21.350692 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 08:01:22.579545 systemd[1]: Started sshd@23-139.178.90.113:22-147.75.109.163:54798.service. Feb 9 08:01:22.612710 sshd[4555]: Accepted publickey for core from 147.75.109.163 port 54798 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:22.613355 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:22.615909 systemd-logind[1462]: New session 18 of user core. Feb 9 08:01:22.616448 systemd[1]: Started session-18.scope. Feb 9 08:01:22.701738 sshd[4555]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:22.703263 systemd[1]: sshd@23-139.178.90.113:22-147.75.109.163:54798.service: Deactivated successfully. Feb 9 08:01:22.703738 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 08:01:22.704243 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Feb 9 08:01:22.704776 systemd-logind[1462]: Removed session 18. Feb 9 08:01:27.711505 systemd[1]: Started sshd@24-139.178.90.113:22-147.75.109.163:46354.service. Feb 9 08:01:27.744236 sshd[4580]: Accepted publickey for core from 147.75.109.163 port 46354 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:27.745158 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:27.748350 systemd-logind[1462]: New session 19 of user core. Feb 9 08:01:27.748982 systemd[1]: Started session-19.scope. 
Feb 9 08:01:27.830633 sshd[4580]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:27.832218 systemd[1]: sshd@24-139.178.90.113:22-147.75.109.163:46354.service: Deactivated successfully. Feb 9 08:01:27.832686 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 08:01:27.833151 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Feb 9 08:01:27.833716 systemd-logind[1462]: Removed session 19. Feb 9 08:01:31.350463 update_engine[1464]: I0209 08:01:31.350334 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:01:31.351388 update_engine[1464]: I0209 08:01:31.350840 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:01:31.351388 update_engine[1464]: E0209 08:01:31.351042 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:01:31.351388 update_engine[1464]: I0209 08:01:31.351203 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 08:01:31.351388 update_engine[1464]: I0209 08:01:31.351219 1464 omaha_request_action.cc:621] Omaha request response: Feb 9 08:01:31.351388 update_engine[1464]: E0209 08:01:31.351361 1464 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 9 08:01:31.351388 update_engine[1464]: I0209 08:01:31.351389 1464 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 08:01:31.351388 update_engine[1464]: I0209 08:01:31.351398 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351408 1464 update_attempter.cc:306] Processing Done. Feb 9 08:01:31.352287 update_engine[1464]: E0209 08:01:31.351434 1464 update_attempter.cc:619] Update failed. 
Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351444 1464 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351454 1464 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351462 1464 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351634 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351688 1464 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351697 1464 omaha_request_action.cc:271] Request: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.351708 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 08:01:31.352287 update_engine[1464]: I0209 08:01:31.352027 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 08:01:31.352287 update_engine[1464]: E0209 08:01:31.352188 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 08:01:31.353864 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 08:01:31.353864 locksmithd[1508]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352319 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 08:01:31.354503 
update_engine[1464]: I0209 08:01:31.352332 1464 omaha_request_action.cc:621] Omaha request response: Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352342 1464 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352350 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352358 1464 update_attempter.cc:306] Processing Done. Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352367 1464 update_attempter.cc:310] Error event sent. Feb 9 08:01:31.354503 update_engine[1464]: I0209 08:01:31.352387 1464 update_check_scheduler.cc:74] Next update check in 45m31s Feb 9 08:01:32.840437 systemd[1]: Started sshd@25-139.178.90.113:22-147.75.109.163:46370.service. Feb 9 08:01:32.873020 sshd[4605]: Accepted publickey for core from 147.75.109.163 port 46370 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:32.873933 sshd[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:32.877263 systemd-logind[1462]: New session 20 of user core. Feb 9 08:01:32.877889 systemd[1]: Started session-20.scope. Feb 9 08:01:32.967655 sshd[4605]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:32.969002 systemd[1]: sshd@25-139.178.90.113:22-147.75.109.163:46370.service: Deactivated successfully. Feb 9 08:01:32.969427 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 08:01:32.969761 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Feb 9 08:01:32.970147 systemd-logind[1462]: Removed session 20. Feb 9 08:01:35.989863 systemd[1]: Started sshd@26-139.178.90.113:22-141.98.11.11:47980.service. Feb 9 08:01:37.977119 systemd[1]: Started sshd@27-139.178.90.113:22-147.75.109.163:47520.service. 
Feb 9 08:01:38.009777 sshd[4632]: Accepted publickey for core from 147.75.109.163 port 47520 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:38.010684 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:38.013848 systemd-logind[1462]: New session 21 of user core. Feb 9 08:01:38.014568 systemd[1]: Started session-21.scope. Feb 9 08:01:38.102077 sshd[4632]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:38.103526 systemd[1]: sshd@27-139.178.90.113:22-147.75.109.163:47520.service: Deactivated successfully. Feb 9 08:01:38.103952 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 08:01:38.104326 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. Feb 9 08:01:38.104876 systemd-logind[1462]: Removed session 21. Feb 9 08:01:39.062169 sshd[4630]: Invalid user admin from 141.98.11.11 port 47980 Feb 9 08:01:39.275355 sshd[4630]: pam_faillock(sshd:auth): User unknown Feb 9 08:01:39.276383 sshd[4630]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:01:39.276472 sshd[4630]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=141.98.11.11 Feb 9 08:01:39.277407 sshd[4630]: pam_faillock(sshd:auth): User unknown Feb 9 08:01:41.381576 sshd[4630]: Failed password for invalid user admin from 141.98.11.11 port 47980 ssh2 Feb 9 08:01:43.104090 systemd[1]: Started sshd@28-139.178.90.113:22-147.75.109.163:47530.service. Feb 9 08:01:43.144177 sshd[4661]: Accepted publickey for core from 147.75.109.163 port 47530 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:43.144820 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:43.147182 systemd-logind[1462]: New session 22 of user core. Feb 9 08:01:43.147634 systemd[1]: Started session-22.scope. 
Feb 9 08:01:43.215193 sshd[4630]: Connection closed by invalid user admin 141.98.11.11 port 47980 [preauth] Feb 9 08:01:43.215886 systemd[1]: sshd@26-139.178.90.113:22-141.98.11.11:47980.service: Deactivated successfully. Feb 9 08:01:43.233678 sshd[4661]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:43.235256 systemd[1]: sshd@28-139.178.90.113:22-147.75.109.163:47530.service: Deactivated successfully. Feb 9 08:01:43.235675 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 08:01:43.236096 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. Feb 9 08:01:43.236518 systemd-logind[1462]: Removed session 22. Feb 9 08:01:48.242480 systemd[1]: Started sshd@29-139.178.90.113:22-147.75.109.163:52362.service. Feb 9 08:01:48.275058 sshd[4688]: Accepted publickey for core from 147.75.109.163 port 52362 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:48.275962 sshd[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:48.279106 systemd-logind[1462]: New session 23 of user core. Feb 9 08:01:48.279751 systemd[1]: Started session-23.scope. Feb 9 08:01:48.367567 sshd[4688]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:48.369054 systemd[1]: sshd@29-139.178.90.113:22-147.75.109.163:52362.service: Deactivated successfully. Feb 9 08:01:48.369488 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 08:01:48.369878 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. Feb 9 08:01:48.370281 systemd-logind[1462]: Removed session 23. Feb 9 08:01:53.376764 systemd[1]: Started sshd@30-139.178.90.113:22-147.75.109.163:52374.service. 
Feb 9 08:01:53.409362 sshd[4710]: Accepted publickey for core from 147.75.109.163 port 52374 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:53.410233 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:53.413190 systemd-logind[1462]: New session 24 of user core. Feb 9 08:01:53.413797 systemd[1]: Started session-24.scope. Feb 9 08:01:53.499740 sshd[4710]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:53.501354 systemd[1]: sshd@30-139.178.90.113:22-147.75.109.163:52374.service: Deactivated successfully. Feb 9 08:01:53.501832 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 08:01:53.502237 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. Feb 9 08:01:53.502837 systemd-logind[1462]: Removed session 24. Feb 9 08:01:58.508742 systemd[1]: Started sshd@31-139.178.90.113:22-147.75.109.163:53474.service. Feb 9 08:01:58.542090 sshd[4737]: Accepted publickey for core from 147.75.109.163 port 53474 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:01:58.543129 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:01:58.546494 systemd-logind[1462]: New session 25 of user core. Feb 9 08:01:58.547293 systemd[1]: Started session-25.scope. Feb 9 08:01:58.635542 sshd[4737]: pam_unix(sshd:session): session closed for user core Feb 9 08:01:58.637128 systemd[1]: sshd@31-139.178.90.113:22-147.75.109.163:53474.service: Deactivated successfully. Feb 9 08:01:58.637603 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 08:01:58.637992 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Feb 9 08:01:58.638401 systemd-logind[1462]: Removed session 25. Feb 9 08:02:03.644977 systemd[1]: Started sshd@32-139.178.90.113:22-147.75.109.163:53480.service. 
Feb 9 08:02:03.678305 sshd[4763]: Accepted publickey for core from 147.75.109.163 port 53480 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:03.679265 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:03.682371 systemd-logind[1462]: New session 26 of user core. Feb 9 08:02:03.683152 systemd[1]: Started session-26.scope. Feb 9 08:02:03.769717 sshd[4763]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:03.771228 systemd[1]: sshd@32-139.178.90.113:22-147.75.109.163:53480.service: Deactivated successfully. Feb 9 08:02:03.771674 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 08:02:03.772050 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. Feb 9 08:02:03.772484 systemd-logind[1462]: Removed session 26. Feb 9 08:02:08.778656 systemd[1]: Started sshd@33-139.178.90.113:22-147.75.109.163:33924.service. Feb 9 08:02:08.811645 sshd[4786]: Accepted publickey for core from 147.75.109.163 port 33924 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:08.812314 sshd[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:08.814742 systemd-logind[1462]: New session 27 of user core. Feb 9 08:02:08.815216 systemd[1]: Started session-27.scope. Feb 9 08:02:08.900189 sshd[4786]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:08.901710 systemd[1]: sshd@33-139.178.90.113:22-147.75.109.163:33924.service: Deactivated successfully. Feb 9 08:02:08.902186 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 08:02:08.902524 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. Feb 9 08:02:08.903198 systemd-logind[1462]: Removed session 27. Feb 9 08:02:13.909478 systemd[1]: Started sshd@34-139.178.90.113:22-147.75.109.163:33926.service. 
Feb 9 08:02:13.942446 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 33926 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:13.943328 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:13.946282 systemd-logind[1462]: New session 28 of user core. Feb 9 08:02:13.946992 systemd[1]: Started session-28.scope. Feb 9 08:02:14.045653 sshd[4813]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:14.051201 systemd[1]: sshd@34-139.178.90.113:22-147.75.109.163:33926.service: Deactivated successfully. Feb 9 08:02:14.053024 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 08:02:14.054626 systemd-logind[1462]: Session 28 logged out. Waiting for processes to exit. Feb 9 08:02:14.056742 systemd-logind[1462]: Removed session 28. Feb 9 08:02:19.054743 systemd[1]: Started sshd@35-139.178.90.113:22-147.75.109.163:60304.service. Feb 9 08:02:19.087406 sshd[4838]: Accepted publickey for core from 147.75.109.163 port 60304 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:19.088252 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:19.091252 systemd-logind[1462]: New session 29 of user core. Feb 9 08:02:19.091939 systemd[1]: Started session-29.scope. Feb 9 08:02:19.175488 sshd[4838]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:19.177003 systemd[1]: sshd@35-139.178.90.113:22-147.75.109.163:60304.service: Deactivated successfully. Feb 9 08:02:19.177411 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 08:02:19.177774 systemd-logind[1462]: Session 29 logged out. Waiting for processes to exit. Feb 9 08:02:19.178293 systemd-logind[1462]: Removed session 29. Feb 9 08:02:24.184540 systemd[1]: Started sshd@36-139.178.90.113:22-147.75.109.163:60320.service. 
Feb 9 08:02:24.217622 sshd[4863]: Accepted publickey for core from 147.75.109.163 port 60320 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:24.218532 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:24.221635 systemd-logind[1462]: New session 30 of user core. Feb 9 08:02:24.222356 systemd[1]: Started session-30.scope. Feb 9 08:02:24.316281 sshd[4863]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:24.321655 systemd[1]: sshd@36-139.178.90.113:22-147.75.109.163:60320.service: Deactivated successfully. Feb 9 08:02:24.323307 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 08:02:24.325065 systemd-logind[1462]: Session 30 logged out. Waiting for processes to exit. Feb 9 08:02:24.327176 systemd-logind[1462]: Removed session 30. Feb 9 08:02:29.324422 systemd[1]: Started sshd@37-139.178.90.113:22-147.75.109.163:50690.service. Feb 9 08:02:29.357219 sshd[4889]: Accepted publickey for core from 147.75.109.163 port 50690 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:29.358173 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:29.361371 systemd-logind[1462]: New session 31 of user core. Feb 9 08:02:29.362032 systemd[1]: Started session-31.scope. Feb 9 08:02:29.447546 sshd[4889]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:29.449144 systemd[1]: sshd@37-139.178.90.113:22-147.75.109.163:50690.service: Deactivated successfully. Feb 9 08:02:29.449573 systemd[1]: session-31.scope: Deactivated successfully. Feb 9 08:02:29.449972 systemd-logind[1462]: Session 31 logged out. Waiting for processes to exit. Feb 9 08:02:29.450458 systemd-logind[1462]: Removed session 31. Feb 9 08:02:34.458094 systemd[1]: Started sshd@38-139.178.90.113:22-147.75.109.163:57802.service. 
Feb 9 08:02:34.496263 sshd[4914]: Accepted publickey for core from 147.75.109.163 port 57802 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:34.497051 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:34.499993 systemd-logind[1462]: New session 32 of user core. Feb 9 08:02:34.500518 systemd[1]: Started session-32.scope. Feb 9 08:02:34.590007 sshd[4914]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:34.591517 systemd[1]: sshd@38-139.178.90.113:22-147.75.109.163:57802.service: Deactivated successfully. Feb 9 08:02:34.591946 systemd[1]: session-32.scope: Deactivated successfully. Feb 9 08:02:34.592323 systemd-logind[1462]: Session 32 logged out. Waiting for processes to exit. Feb 9 08:02:34.592957 systemd-logind[1462]: Removed session 32. Feb 9 08:02:39.599543 systemd[1]: Started sshd@39-139.178.90.113:22-147.75.109.163:57814.service. Feb 9 08:02:39.632512 sshd[4938]: Accepted publickey for core from 147.75.109.163 port 57814 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:39.633364 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:39.636249 systemd-logind[1462]: New session 33 of user core. Feb 9 08:02:39.636837 systemd[1]: Started session-33.scope. Feb 9 08:02:39.724785 sshd[4938]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:39.726488 systemd[1]: sshd@39-139.178.90.113:22-147.75.109.163:57814.service: Deactivated successfully. Feb 9 08:02:39.726991 systemd[1]: session-33.scope: Deactivated successfully. Feb 9 08:02:39.727408 systemd-logind[1462]: Session 33 logged out. Waiting for processes to exit. Feb 9 08:02:39.728074 systemd-logind[1462]: Removed session 33. Feb 9 08:02:44.734365 systemd[1]: Started sshd@40-139.178.90.113:22-147.75.109.163:38984.service. 
Feb 9 08:02:44.766955 sshd[4963]: Accepted publickey for core from 147.75.109.163 port 38984 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:44.767816 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:44.770380 systemd-logind[1462]: New session 34 of user core. Feb 9 08:02:44.771073 systemd[1]: Started session-34.scope. Feb 9 08:02:44.855302 sshd[4963]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:44.856722 systemd[1]: sshd@40-139.178.90.113:22-147.75.109.163:38984.service: Deactivated successfully. Feb 9 08:02:44.857139 systemd[1]: session-34.scope: Deactivated successfully. Feb 9 08:02:44.857460 systemd-logind[1462]: Session 34 logged out. Waiting for processes to exit. Feb 9 08:02:44.858038 systemd-logind[1462]: Removed session 34. Feb 9 08:02:49.864265 systemd[1]: Started sshd@41-139.178.90.113:22-147.75.109.163:38996.service. Feb 9 08:02:49.919963 sshd[4989]: Accepted publickey for core from 147.75.109.163 port 38996 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:49.923203 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:49.932884 systemd-logind[1462]: New session 35 of user core. Feb 9 08:02:49.935425 systemd[1]: Started session-35.scope. Feb 9 08:02:50.039735 sshd[4989]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:50.041125 systemd[1]: sshd@41-139.178.90.113:22-147.75.109.163:38996.service: Deactivated successfully. Feb 9 08:02:50.041569 systemd[1]: session-35.scope: Deactivated successfully. Feb 9 08:02:50.041982 systemd-logind[1462]: Session 35 logged out. Waiting for processes to exit. Feb 9 08:02:50.042449 systemd-logind[1462]: Removed session 35. Feb 9 08:02:55.049068 systemd[1]: Started sshd@42-139.178.90.113:22-147.75.109.163:48530.service. 
Feb 9 08:02:55.081887 sshd[5014]: Accepted publickey for core from 147.75.109.163 port 48530 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:02:55.082529 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:02:55.085116 systemd-logind[1462]: New session 36 of user core. Feb 9 08:02:55.085634 systemd[1]: Started session-36.scope. Feb 9 08:02:55.174892 sshd[5014]: pam_unix(sshd:session): session closed for user core Feb 9 08:02:55.176329 systemd[1]: sshd@42-139.178.90.113:22-147.75.109.163:48530.service: Deactivated successfully. Feb 9 08:02:55.176756 systemd[1]: session-36.scope: Deactivated successfully. Feb 9 08:02:55.177188 systemd-logind[1462]: Session 36 logged out. Waiting for processes to exit. Feb 9 08:02:55.177766 systemd-logind[1462]: Removed session 36. Feb 9 08:03:00.186025 systemd[1]: Started sshd@43-139.178.90.113:22-147.75.109.163:48534.service. Feb 9 08:03:00.222500 sshd[5043]: Accepted publickey for core from 147.75.109.163 port 48534 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:00.223245 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:00.226113 systemd-logind[1462]: New session 37 of user core. Feb 9 08:03:00.226633 systemd[1]: Started session-37.scope. Feb 9 08:03:00.313257 sshd[5043]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:00.314886 systemd[1]: sshd@43-139.178.90.113:22-147.75.109.163:48534.service: Deactivated successfully. Feb 9 08:03:00.315349 systemd[1]: session-37.scope: Deactivated successfully. Feb 9 08:03:00.315744 systemd-logind[1462]: Session 37 logged out. Waiting for processes to exit. Feb 9 08:03:00.316329 systemd-logind[1462]: Removed session 37. Feb 9 08:03:05.323153 systemd[1]: Started sshd@44-139.178.90.113:22-147.75.109.163:46020.service. 
Feb 9 08:03:05.356373 sshd[5070]: Accepted publickey for core from 147.75.109.163 port 46020 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:05.357221 sshd[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:05.359950 systemd-logind[1462]: New session 38 of user core. Feb 9 08:03:05.360489 systemd[1]: Started session-38.scope. Feb 9 08:03:05.445365 sshd[5070]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:05.446900 systemd[1]: sshd@44-139.178.90.113:22-147.75.109.163:46020.service: Deactivated successfully. Feb 9 08:03:05.447338 systemd[1]: session-38.scope: Deactivated successfully. Feb 9 08:03:05.447712 systemd-logind[1462]: Session 38 logged out. Waiting for processes to exit. Feb 9 08:03:05.448240 systemd-logind[1462]: Removed session 38. Feb 9 08:03:10.455137 systemd[1]: Started sshd@45-139.178.90.113:22-147.75.109.163:46028.service. Feb 9 08:03:10.488322 sshd[5095]: Accepted publickey for core from 147.75.109.163 port 46028 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:10.489251 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:10.492388 systemd-logind[1462]: New session 39 of user core. Feb 9 08:03:10.493022 systemd[1]: Started session-39.scope. Feb 9 08:03:10.582273 sshd[5095]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:10.583757 systemd[1]: sshd@45-139.178.90.113:22-147.75.109.163:46028.service: Deactivated successfully. Feb 9 08:03:10.584162 systemd[1]: session-39.scope: Deactivated successfully. Feb 9 08:03:10.584497 systemd-logind[1462]: Session 39 logged out. Waiting for processes to exit. Feb 9 08:03:10.585137 systemd-logind[1462]: Removed session 39. Feb 9 08:03:15.591399 systemd[1]: Started sshd@46-139.178.90.113:22-147.75.109.163:38874.service. 
Feb 9 08:03:15.624082 sshd[5122]: Accepted publickey for core from 147.75.109.163 port 38874 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:15.624923 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:15.627815 systemd-logind[1462]: New session 40 of user core. Feb 9 08:03:15.628417 systemd[1]: Started session-40.scope. Feb 9 08:03:15.715542 sshd[5122]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:15.717097 systemd[1]: sshd@46-139.178.90.113:22-147.75.109.163:38874.service: Deactivated successfully. Feb 9 08:03:15.717513 systemd[1]: session-40.scope: Deactivated successfully. Feb 9 08:03:15.717931 systemd-logind[1462]: Session 40 logged out. Waiting for processes to exit. Feb 9 08:03:15.718393 systemd-logind[1462]: Removed session 40. Feb 9 08:03:20.724993 systemd[1]: Started sshd@47-139.178.90.113:22-147.75.109.163:38876.service. Feb 9 08:03:20.757970 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 38876 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:20.758885 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:20.762277 systemd-logind[1462]: New session 41 of user core. Feb 9 08:03:20.762979 systemd[1]: Started session-41.scope. Feb 9 08:03:20.849931 sshd[5147]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:20.851391 systemd[1]: sshd@47-139.178.90.113:22-147.75.109.163:38876.service: Deactivated successfully. Feb 9 08:03:20.851849 systemd[1]: session-41.scope: Deactivated successfully. Feb 9 08:03:20.852298 systemd-logind[1462]: Session 41 logged out. Waiting for processes to exit. Feb 9 08:03:20.852925 systemd-logind[1462]: Removed session 41. Feb 9 08:03:25.859870 systemd[1]: Started sshd@48-139.178.90.113:22-147.75.109.163:54134.service. 
Feb 9 08:03:25.937297 sshd[5172]: Accepted publickey for core from 147.75.109.163 port 54134 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:25.939174 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:25.944719 systemd-logind[1462]: New session 42 of user core. Feb 9 08:03:25.946380 systemd[1]: Started session-42.scope. Feb 9 08:03:26.037769 sshd[5172]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:26.039292 systemd[1]: sshd@48-139.178.90.113:22-147.75.109.163:54134.service: Deactivated successfully. Feb 9 08:03:26.039762 systemd[1]: session-42.scope: Deactivated successfully. Feb 9 08:03:26.040230 systemd-logind[1462]: Session 42 logged out. Waiting for processes to exit. Feb 9 08:03:26.040784 systemd-logind[1462]: Removed session 42. Feb 9 08:03:31.048096 systemd[1]: Started sshd@49-139.178.90.113:22-147.75.109.163:54146.service. Feb 9 08:03:31.080544 sshd[5197]: Accepted publickey for core from 147.75.109.163 port 54146 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:31.081411 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:31.084406 systemd-logind[1462]: New session 43 of user core. Feb 9 08:03:31.085034 systemd[1]: Started session-43.scope. Feb 9 08:03:31.172999 sshd[5197]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:31.174395 systemd[1]: sshd@49-139.178.90.113:22-147.75.109.163:54146.service: Deactivated successfully. Feb 9 08:03:31.174839 systemd[1]: session-43.scope: Deactivated successfully. Feb 9 08:03:31.175269 systemd-logind[1462]: Session 43 logged out. Waiting for processes to exit. Feb 9 08:03:31.175917 systemd-logind[1462]: Removed session 43. Feb 9 08:03:36.184525 systemd[1]: Started sshd@50-139.178.90.113:22-147.75.109.163:39220.service. 
Feb 9 08:03:36.220455 sshd[5223]: Accepted publickey for core from 147.75.109.163 port 39220 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:36.221207 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:36.224005 systemd-logind[1462]: New session 44 of user core. Feb 9 08:03:36.224491 systemd[1]: Started session-44.scope. Feb 9 08:03:36.310536 sshd[5223]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:36.312039 systemd[1]: sshd@50-139.178.90.113:22-147.75.109.163:39220.service: Deactivated successfully. Feb 9 08:03:36.312475 systemd[1]: session-44.scope: Deactivated successfully. Feb 9 08:03:36.312892 systemd-logind[1462]: Session 44 logged out. Waiting for processes to exit. Feb 9 08:03:36.313483 systemd-logind[1462]: Removed session 44. Feb 9 08:03:41.320097 systemd[1]: Started sshd@51-139.178.90.113:22-147.75.109.163:39232.service. Feb 9 08:03:41.352988 sshd[5250]: Accepted publickey for core from 147.75.109.163 port 39232 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:41.353901 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:41.357097 systemd-logind[1462]: New session 45 of user core. Feb 9 08:03:41.357729 systemd[1]: Started session-45.scope. Feb 9 08:03:41.446520 sshd[5250]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:41.448057 systemd[1]: sshd@51-139.178.90.113:22-147.75.109.163:39232.service: Deactivated successfully. Feb 9 08:03:41.448471 systemd[1]: session-45.scope: Deactivated successfully. Feb 9 08:03:41.448886 systemd-logind[1462]: Session 45 logged out. Waiting for processes to exit. Feb 9 08:03:41.449419 systemd-logind[1462]: Removed session 45. Feb 9 08:03:46.455430 systemd[1]: Started sshd@52-139.178.90.113:22-147.75.109.163:43966.service. 
Feb 9 08:03:46.488336 sshd[5275]: Accepted publickey for core from 147.75.109.163 port 43966 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:46.489208 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:46.492288 systemd-logind[1462]: New session 46 of user core. Feb 9 08:03:46.492881 systemd[1]: Started session-46.scope. Feb 9 08:03:46.579193 sshd[5275]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:46.580725 systemd[1]: sshd@52-139.178.90.113:22-147.75.109.163:43966.service: Deactivated successfully. Feb 9 08:03:46.581173 systemd[1]: session-46.scope: Deactivated successfully. Feb 9 08:03:46.581512 systemd-logind[1462]: Session 46 logged out. Waiting for processes to exit. Feb 9 08:03:46.582197 systemd-logind[1462]: Removed session 46. Feb 9 08:03:51.589228 systemd[1]: Started sshd@53-139.178.90.113:22-147.75.109.163:43972.service. Feb 9 08:03:51.622120 sshd[5301]: Accepted publickey for core from 147.75.109.163 port 43972 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:51.623017 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:51.626348 systemd-logind[1462]: New session 47 of user core. Feb 9 08:03:51.627032 systemd[1]: Started session-47.scope. Feb 9 08:03:51.714405 sshd[5301]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:51.715910 systemd[1]: sshd@53-139.178.90.113:22-147.75.109.163:43972.service: Deactivated successfully. Feb 9 08:03:51.716320 systemd[1]: session-47.scope: Deactivated successfully. Feb 9 08:03:51.716749 systemd-logind[1462]: Session 47 logged out. Waiting for processes to exit. Feb 9 08:03:51.717219 systemd-logind[1462]: Removed session 47. Feb 9 08:03:56.725757 systemd[1]: Started sshd@54-139.178.90.113:22-147.75.109.163:52498.service. 
Feb 9 08:03:56.761493 sshd[5328]: Accepted publickey for core from 147.75.109.163 port 52498 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:03:56.762308 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:03:56.765121 systemd-logind[1462]: New session 48 of user core. Feb 9 08:03:56.765827 systemd[1]: Started session-48.scope. Feb 9 08:03:56.851684 sshd[5328]: pam_unix(sshd:session): session closed for user core Feb 9 08:03:56.853351 systemd[1]: sshd@54-139.178.90.113:22-147.75.109.163:52498.service: Deactivated successfully. Feb 9 08:03:56.853847 systemd[1]: session-48.scope: Deactivated successfully. Feb 9 08:03:56.854255 systemd-logind[1462]: Session 48 logged out. Waiting for processes to exit. Feb 9 08:03:56.854754 systemd-logind[1462]: Removed session 48. Feb 9 08:04:01.861263 systemd[1]: Started sshd@55-139.178.90.113:22-147.75.109.163:52504.service. Feb 9 08:04:01.893879 sshd[5353]: Accepted publickey for core from 147.75.109.163 port 52504 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:01.894729 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:01.897603 systemd-logind[1462]: New session 49 of user core. Feb 9 08:04:01.898242 systemd[1]: Started session-49.scope. Feb 9 08:04:01.991273 sshd[5353]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:01.994081 systemd[1]: sshd@55-139.178.90.113:22-147.75.109.163:52504.service: Deactivated successfully. Feb 9 08:04:01.994715 systemd[1]: session-49.scope: Deactivated successfully. Feb 9 08:04:01.995340 systemd-logind[1462]: Session 49 logged out. Waiting for processes to exit. Feb 9 08:04:01.996557 systemd[1]: Started sshd@56-139.178.90.113:22-147.75.109.163:52512.service. Feb 9 08:04:01.997495 systemd-logind[1462]: Removed session 49. 
Feb 9 08:04:02.057216 sshd[5379]: Accepted publickey for core from 147.75.109.163 port 52512 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:02.058856 sshd[5379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:02.064101 systemd-logind[1462]: New session 50 of user core. Feb 9 08:04:02.065261 systemd[1]: Started session-50.scope. Feb 9 08:04:03.010363 sshd[5379]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:03.017359 systemd[1]: sshd@56-139.178.90.113:22-147.75.109.163:52512.service: Deactivated successfully. Feb 9 08:04:03.019096 systemd[1]: session-50.scope: Deactivated successfully. Feb 9 08:04:03.020846 systemd-logind[1462]: Session 50 logged out. Waiting for processes to exit. Feb 9 08:04:03.023671 systemd[1]: Started sshd@57-139.178.90.113:22-147.75.109.163:52520.service. Feb 9 08:04:03.026486 systemd-logind[1462]: Removed session 50. Feb 9 08:04:03.060513 sshd[5401]: Accepted publickey for core from 147.75.109.163 port 52520 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:03.061273 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:03.064075 systemd-logind[1462]: New session 51 of user core. Feb 9 08:04:03.064601 systemd[1]: Started session-51.scope. Feb 9 08:04:04.017391 sshd[5401]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:04.022968 systemd[1]: sshd@57-139.178.90.113:22-147.75.109.163:52520.service: Deactivated successfully. Feb 9 08:04:04.024345 systemd[1]: session-51.scope: Deactivated successfully. Feb 9 08:04:04.025766 systemd-logind[1462]: Session 51 logged out. Waiting for processes to exit. Feb 9 08:04:04.028728 systemd[1]: Started sshd@58-139.178.90.113:22-147.75.109.163:52522.service. Feb 9 08:04:04.030722 systemd-logind[1462]: Removed session 51. 
Feb 9 08:04:04.079904 sshd[5445]: Accepted publickey for core from 147.75.109.163 port 52522 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:04.081232 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:04.085255 systemd-logind[1462]: New session 52 of user core. Feb 9 08:04:04.086130 systemd[1]: Started session-52.scope. Feb 9 08:04:04.343334 sshd[5445]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:04.346619 systemd[1]: sshd@58-139.178.90.113:22-147.75.109.163:52522.service: Deactivated successfully. Feb 9 08:04:04.347274 systemd[1]: session-52.scope: Deactivated successfully. Feb 9 08:04:04.348051 systemd-logind[1462]: Session 52 logged out. Waiting for processes to exit. Feb 9 08:04:04.349197 systemd[1]: Started sshd@59-139.178.90.113:22-147.75.109.163:52530.service. Feb 9 08:04:04.350178 systemd-logind[1462]: Removed session 52. Feb 9 08:04:04.406703 sshd[5503]: Accepted publickey for core from 147.75.109.163 port 52530 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:04.408075 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:04.412297 systemd-logind[1462]: New session 53 of user core. Feb 9 08:04:04.413275 systemd[1]: Started session-53.scope. Feb 9 08:04:04.541670 sshd[5503]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:04.543349 systemd[1]: sshd@59-139.178.90.113:22-147.75.109.163:52530.service: Deactivated successfully. Feb 9 08:04:04.543859 systemd[1]: session-53.scope: Deactivated successfully. Feb 9 08:04:04.544247 systemd-logind[1462]: Session 53 logged out. Waiting for processes to exit. Feb 9 08:04:04.544746 systemd-logind[1462]: Removed session 53. Feb 9 08:04:09.551454 systemd[1]: Started sshd@60-139.178.90.113:22-147.75.109.163:60974.service. 
Feb 9 08:04:09.584846 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 60974 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:09.585818 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:09.588835 systemd-logind[1462]: New session 54 of user core. Feb 9 08:04:09.589591 systemd[1]: Started session-54.scope. Feb 9 08:04:09.678023 sshd[5528]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:09.679475 systemd[1]: sshd@60-139.178.90.113:22-147.75.109.163:60974.service: Deactivated successfully. Feb 9 08:04:09.679907 systemd[1]: session-54.scope: Deactivated successfully. Feb 9 08:04:09.680273 systemd-logind[1462]: Session 54 logged out. Waiting for processes to exit. Feb 9 08:04:09.680774 systemd-logind[1462]: Removed session 54. Feb 9 08:04:14.687262 systemd[1]: Started sshd@61-139.178.90.113:22-147.75.109.163:46684.service. Feb 9 08:04:14.719863 sshd[5554]: Accepted publickey for core from 147.75.109.163 port 46684 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:14.720864 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:14.723861 systemd-logind[1462]: New session 55 of user core. Feb 9 08:04:14.724617 systemd[1]: Started session-55.scope. Feb 9 08:04:14.806084 sshd[5554]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:14.807728 systemd[1]: sshd@61-139.178.90.113:22-147.75.109.163:46684.service: Deactivated successfully. Feb 9 08:04:14.808197 systemd[1]: session-55.scope: Deactivated successfully. Feb 9 08:04:14.808512 systemd-logind[1462]: Session 55 logged out. Waiting for processes to exit. Feb 9 08:04:14.809188 systemd-logind[1462]: Removed session 55. Feb 9 08:04:18.981630 systemd[1]: Started sshd@62-139.178.90.113:22-170.64.196.239:48784.service. 
Feb 9 08:04:19.585808 sshd[5579]: Invalid user sFTPUser from 170.64.196.239 port 48784 Feb 9 08:04:19.738138 sshd[5579]: pam_faillock(sshd:auth): User unknown Feb 9 08:04:19.739317 sshd[5579]: pam_unix(sshd:auth): check pass; user unknown Feb 9 08:04:19.739407 sshd[5579]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=170.64.196.239 Feb 9 08:04:19.740398 sshd[5579]: pam_faillock(sshd:auth): User unknown Feb 9 08:04:19.816168 systemd[1]: Started sshd@63-139.178.90.113:22-147.75.109.163:46692.service. Feb 9 08:04:19.848985 sshd[5582]: Accepted publickey for core from 147.75.109.163 port 46692 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:19.849993 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:19.853227 systemd-logind[1462]: New session 56 of user core. Feb 9 08:04:19.853825 systemd[1]: Started session-56.scope. Feb 9 08:04:19.946781 sshd[5582]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:19.952350 systemd[1]: sshd@63-139.178.90.113:22-147.75.109.163:46692.service: Deactivated successfully. Feb 9 08:04:19.954290 systemd[1]: session-56.scope: Deactivated successfully. Feb 9 08:04:19.956077 systemd-logind[1462]: Session 56 logged out. Waiting for processes to exit. Feb 9 08:04:19.958392 systemd-logind[1462]: Removed session 56. Feb 9 08:04:21.142100 sshd[5579]: Failed password for invalid user sFTPUser from 170.64.196.239 port 48784 ssh2 Feb 9 08:04:21.963976 sshd[5579]: Connection closed by invalid user sFTPUser 170.64.196.239 port 48784 [preauth] Feb 9 08:04:21.966481 systemd[1]: sshd@62-139.178.90.113:22-170.64.196.239:48784.service: Deactivated successfully. Feb 9 08:04:24.957974 systemd[1]: Started sshd@64-139.178.90.113:22-147.75.109.163:55118.service. 
Feb 9 08:04:24.994260 sshd[5608]: Accepted publickey for core from 147.75.109.163 port 55118 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:24.995096 sshd[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:24.998224 systemd-logind[1462]: New session 57 of user core. Feb 9 08:04:24.998833 systemd[1]: Started session-57.scope. Feb 9 08:04:25.086695 sshd[5608]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:25.088382 systemd[1]: sshd@64-139.178.90.113:22-147.75.109.163:55118.service: Deactivated successfully. Feb 9 08:04:25.088895 systemd[1]: session-57.scope: Deactivated successfully. Feb 9 08:04:25.089375 systemd-logind[1462]: Session 57 logged out. Waiting for processes to exit. Feb 9 08:04:25.090116 systemd-logind[1462]: Removed session 57. Feb 9 08:04:30.096756 systemd[1]: Started sshd@65-139.178.90.113:22-147.75.109.163:55120.service. Feb 9 08:04:30.129954 sshd[5633]: Accepted publickey for core from 147.75.109.163 port 55120 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:30.130809 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:30.133813 systemd-logind[1462]: New session 58 of user core. Feb 9 08:04:30.134464 systemd[1]: Started session-58.scope. Feb 9 08:04:30.222495 sshd[5633]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:30.224099 systemd[1]: sshd@65-139.178.90.113:22-147.75.109.163:55120.service: Deactivated successfully. Feb 9 08:04:30.224539 systemd[1]: session-58.scope: Deactivated successfully. Feb 9 08:04:30.224990 systemd-logind[1462]: Session 58 logged out. Waiting for processes to exit. Feb 9 08:04:30.225556 systemd-logind[1462]: Removed session 58. Feb 9 08:04:35.231849 systemd[1]: Started sshd@66-139.178.90.113:22-147.75.109.163:47140.service. 
Feb 9 08:04:35.265419 sshd[5658]: Accepted publickey for core from 147.75.109.163 port 47140 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:35.268707 sshd[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:35.279468 systemd-logind[1462]: New session 59 of user core. Feb 9 08:04:35.282394 systemd[1]: Started session-59.scope. Feb 9 08:04:35.385322 sshd[5658]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:35.386894 systemd[1]: sshd@66-139.178.90.113:22-147.75.109.163:47140.service: Deactivated successfully. Feb 9 08:04:35.387362 systemd[1]: session-59.scope: Deactivated successfully. Feb 9 08:04:35.387824 systemd-logind[1462]: Session 59 logged out. Waiting for processes to exit. Feb 9 08:04:35.388351 systemd-logind[1462]: Removed session 59. Feb 9 08:04:40.394992 systemd[1]: Started sshd@67-139.178.90.113:22-147.75.109.163:47152.service. Feb 9 08:04:40.427916 sshd[5683]: Accepted publickey for core from 147.75.109.163 port 47152 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:40.428704 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:40.431790 systemd-logind[1462]: New session 60 of user core. Feb 9 08:04:40.432491 systemd[1]: Started session-60.scope. Feb 9 08:04:40.521705 sshd[5683]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:40.523278 systemd[1]: sshd@67-139.178.90.113:22-147.75.109.163:47152.service: Deactivated successfully. Feb 9 08:04:40.523730 systemd[1]: session-60.scope: Deactivated successfully. Feb 9 08:04:40.524199 systemd-logind[1462]: Session 60 logged out. Waiting for processes to exit. Feb 9 08:04:40.524772 systemd-logind[1462]: Removed session 60. Feb 9 08:04:45.531291 systemd[1]: Started sshd@68-139.178.90.113:22-147.75.109.163:60794.service. 
Feb 9 08:04:45.563914 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 60794 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:45.564781 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:45.567810 systemd-logind[1462]: New session 61 of user core. Feb 9 08:04:45.568478 systemd[1]: Started session-61.scope. Feb 9 08:04:45.652581 sshd[5710]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:45.654140 systemd[1]: sshd@68-139.178.90.113:22-147.75.109.163:60794.service: Deactivated successfully. Feb 9 08:04:45.654557 systemd[1]: session-61.scope: Deactivated successfully. Feb 9 08:04:45.655035 systemd-logind[1462]: Session 61 logged out. Waiting for processes to exit. Feb 9 08:04:45.655515 systemd-logind[1462]: Removed session 61. Feb 9 08:04:50.662246 systemd[1]: Started sshd@69-139.178.90.113:22-147.75.109.163:60810.service. Feb 9 08:04:50.695339 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 60810 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:50.696300 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:50.699469 systemd-logind[1462]: New session 62 of user core. Feb 9 08:04:50.700158 systemd[1]: Started session-62.scope. Feb 9 08:04:50.785908 sshd[5733]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:50.787437 systemd[1]: sshd@69-139.178.90.113:22-147.75.109.163:60810.service: Deactivated successfully. Feb 9 08:04:50.787909 systemd[1]: session-62.scope: Deactivated successfully. Feb 9 08:04:50.788315 systemd-logind[1462]: Session 62 logged out. Waiting for processes to exit. Feb 9 08:04:50.788858 systemd-logind[1462]: Removed session 62. Feb 9 08:04:55.795528 systemd[1]: Started sshd@70-139.178.90.113:22-147.75.109.163:55292.service. 
Feb 9 08:04:55.828813 sshd[5757]: Accepted publickey for core from 147.75.109.163 port 55292 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:04:55.829700 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:04:55.832802 systemd-logind[1462]: New session 63 of user core. Feb 9 08:04:55.833513 systemd[1]: Started session-63.scope. Feb 9 08:04:55.918247 sshd[5757]: pam_unix(sshd:session): session closed for user core Feb 9 08:04:55.919688 systemd[1]: sshd@70-139.178.90.113:22-147.75.109.163:55292.service: Deactivated successfully. Feb 9 08:04:55.920109 systemd[1]: session-63.scope: Deactivated successfully. Feb 9 08:04:55.920447 systemd-logind[1462]: Session 63 logged out. Waiting for processes to exit. Feb 9 08:04:55.921116 systemd-logind[1462]: Removed session 63. Feb 9 08:05:00.928425 systemd[1]: Started sshd@71-139.178.90.113:22-147.75.109.163:55306.service. Feb 9 08:05:00.961269 sshd[5784]: Accepted publickey for core from 147.75.109.163 port 55306 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:00.962114 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:00.965243 systemd-logind[1462]: New session 64 of user core. Feb 9 08:05:00.965849 systemd[1]: Started session-64.scope. Feb 9 08:05:01.051356 sshd[5784]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:01.053007 systemd[1]: sshd@71-139.178.90.113:22-147.75.109.163:55306.service: Deactivated successfully. Feb 9 08:05:01.053522 systemd[1]: session-64.scope: Deactivated successfully. Feb 9 08:05:01.054043 systemd-logind[1462]: Session 64 logged out. Waiting for processes to exit. Feb 9 08:05:01.054632 systemd-logind[1462]: Removed session 64. Feb 9 08:05:06.061054 systemd[1]: Started sshd@72-139.178.90.113:22-147.75.109.163:46132.service. 
Feb 9 08:05:06.094220 sshd[5808]: Accepted publickey for core from 147.75.109.163 port 46132 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:06.094894 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:06.097139 systemd-logind[1462]: New session 65 of user core. Feb 9 08:05:06.097593 systemd[1]: Started session-65.scope. Feb 9 08:05:06.180290 sshd[5808]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:06.181880 systemd[1]: sshd@72-139.178.90.113:22-147.75.109.163:46132.service: Deactivated successfully. Feb 9 08:05:06.182376 systemd[1]: session-65.scope: Deactivated successfully. Feb 9 08:05:06.182818 systemd-logind[1462]: Session 65 logged out. Waiting for processes to exit. Feb 9 08:05:06.183276 systemd-logind[1462]: Removed session 65. Feb 9 08:05:11.189453 systemd[1]: Started sshd@73-139.178.90.113:22-147.75.109.163:46142.service. Feb 9 08:05:11.222502 sshd[5835]: Accepted publickey for core from 147.75.109.163 port 46142 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:11.223365 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:11.226392 systemd-logind[1462]: New session 66 of user core. Feb 9 08:05:11.227047 systemd[1]: Started session-66.scope. Feb 9 08:05:11.314593 sshd[5835]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:11.316091 systemd[1]: sshd@73-139.178.90.113:22-147.75.109.163:46142.service: Deactivated successfully. Feb 9 08:05:11.316521 systemd[1]: session-66.scope: Deactivated successfully. Feb 9 08:05:11.316888 systemd-logind[1462]: Session 66 logged out. Waiting for processes to exit. Feb 9 08:05:11.317352 systemd-logind[1462]: Removed session 66. Feb 9 08:05:16.324402 systemd[1]: Started sshd@74-139.178.90.113:22-147.75.109.163:60126.service. 
Feb 9 08:05:16.357219 sshd[5860]: Accepted publickey for core from 147.75.109.163 port 60126 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:16.357857 sshd[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:16.359935 systemd-logind[1462]: New session 67 of user core. Feb 9 08:05:16.360438 systemd[1]: Started session-67.scope. Feb 9 08:05:16.467799 sshd[5860]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:16.469205 systemd[1]: sshd@74-139.178.90.113:22-147.75.109.163:60126.service: Deactivated successfully. Feb 9 08:05:16.469658 systemd[1]: session-67.scope: Deactivated successfully. Feb 9 08:05:16.470023 systemd-logind[1462]: Session 67 logged out. Waiting for processes to exit. Feb 9 08:05:16.470432 systemd-logind[1462]: Removed session 67. Feb 9 08:05:21.477541 systemd[1]: Started sshd@75-139.178.90.113:22-147.75.109.163:60132.service. Feb 9 08:05:21.510336 sshd[5885]: Accepted publickey for core from 147.75.109.163 port 60132 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:21.511222 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:21.514279 systemd-logind[1462]: New session 68 of user core. Feb 9 08:05:21.515062 systemd[1]: Started session-68.scope. Feb 9 08:05:21.602542 sshd[5885]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:21.604079 systemd[1]: sshd@75-139.178.90.113:22-147.75.109.163:60132.service: Deactivated successfully. Feb 9 08:05:21.604524 systemd[1]: session-68.scope: Deactivated successfully. Feb 9 08:05:21.604896 systemd-logind[1462]: Session 68 logged out. Waiting for processes to exit. Feb 9 08:05:21.605337 systemd-logind[1462]: Removed session 68. Feb 9 08:05:26.612708 systemd[1]: Started sshd@76-139.178.90.113:22-147.75.109.163:40594.service. 
Feb 9 08:05:26.645289 sshd[5912]: Accepted publickey for core from 147.75.109.163 port 40594 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:26.646164 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:26.649362 systemd-logind[1462]: New session 69 of user core. Feb 9 08:05:26.650086 systemd[1]: Started session-69.scope. Feb 9 08:05:26.742596 sshd[5912]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:26.744505 systemd[1]: sshd@76-139.178.90.113:22-147.75.109.163:40594.service: Deactivated successfully. Feb 9 08:05:26.745081 systemd[1]: session-69.scope: Deactivated successfully. Feb 9 08:05:26.745516 systemd-logind[1462]: Session 69 logged out. Waiting for processes to exit. Feb 9 08:05:26.746302 systemd-logind[1462]: Removed session 69. Feb 9 08:05:31.751726 systemd[1]: Started sshd@77-139.178.90.113:22-147.75.109.163:40602.service. Feb 9 08:05:31.784846 sshd[5939]: Accepted publickey for core from 147.75.109.163 port 40602 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:31.785508 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:31.788013 systemd-logind[1462]: New session 70 of user core. Feb 9 08:05:31.788452 systemd[1]: Started session-70.scope. Feb 9 08:05:31.873851 sshd[5939]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:31.875328 systemd[1]: sshd@77-139.178.90.113:22-147.75.109.163:40602.service: Deactivated successfully. Feb 9 08:05:31.875788 systemd[1]: session-70.scope: Deactivated successfully. Feb 9 08:05:31.876225 systemd-logind[1462]: Session 70 logged out. Waiting for processes to exit. Feb 9 08:05:31.876780 systemd-logind[1462]: Removed session 70. Feb 9 08:05:36.885461 systemd[1]: Started sshd@78-139.178.90.113:22-147.75.109.163:43158.service. 
Feb 9 08:05:36.922792 sshd[5963]: Accepted publickey for core from 147.75.109.163 port 43158 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:36.923537 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:36.926177 systemd-logind[1462]: New session 71 of user core. Feb 9 08:05:36.926688 systemd[1]: Started session-71.scope. Feb 9 08:05:37.010756 sshd[5963]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:37.012271 systemd[1]: sshd@78-139.178.90.113:22-147.75.109.163:43158.service: Deactivated successfully. Feb 9 08:05:37.012748 systemd[1]: session-71.scope: Deactivated successfully. Feb 9 08:05:37.013215 systemd-logind[1462]: Session 71 logged out. Waiting for processes to exit. Feb 9 08:05:37.013771 systemd-logind[1462]: Removed session 71. Feb 9 08:05:42.014639 systemd[1]: Started sshd@79-139.178.90.113:22-147.75.109.163:43172.service. Feb 9 08:05:42.047495 sshd[5990]: Accepted publickey for core from 147.75.109.163 port 43172 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:42.048444 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:42.051286 systemd-logind[1462]: New session 72 of user core. Feb 9 08:05:42.051889 systemd[1]: Started session-72.scope. Feb 9 08:05:42.134768 sshd[5990]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:42.136298 systemd[1]: sshd@79-139.178.90.113:22-147.75.109.163:43172.service: Deactivated successfully. Feb 9 08:05:42.136785 systemd[1]: session-72.scope: Deactivated successfully. Feb 9 08:05:42.137186 systemd-logind[1462]: Session 72 logged out. Waiting for processes to exit. Feb 9 08:05:42.137791 systemd-logind[1462]: Removed session 72. Feb 9 08:05:47.137788 systemd[1]: Started sshd@80-139.178.90.113:22-147.75.109.163:59232.service. 
Feb 9 08:05:47.170555 sshd[6014]: Accepted publickey for core from 147.75.109.163 port 59232 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:47.171570 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:47.174586 systemd-logind[1462]: New session 73 of user core. Feb 9 08:05:47.175288 systemd[1]: Started session-73.scope. Feb 9 08:05:47.300728 sshd[6014]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:47.302293 systemd[1]: sshd@80-139.178.90.113:22-147.75.109.163:59232.service: Deactivated successfully. Feb 9 08:05:47.302755 systemd[1]: session-73.scope: Deactivated successfully. Feb 9 08:05:47.303207 systemd-logind[1462]: Session 73 logged out. Waiting for processes to exit. Feb 9 08:05:47.303745 systemd-logind[1462]: Removed session 73. Feb 9 08:05:52.310618 systemd[1]: Started sshd@81-139.178.90.113:22-147.75.109.163:59242.service. Feb 9 08:05:52.343318 sshd[6041]: Accepted publickey for core from 147.75.109.163 port 59242 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:52.344238 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:52.347349 systemd-logind[1462]: New session 74 of user core. Feb 9 08:05:52.348031 systemd[1]: Started session-74.scope. Feb 9 08:05:52.435608 sshd[6041]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:52.437237 systemd[1]: sshd@81-139.178.90.113:22-147.75.109.163:59242.service: Deactivated successfully. Feb 9 08:05:52.437713 systemd[1]: session-74.scope: Deactivated successfully. Feb 9 08:05:52.438166 systemd-logind[1462]: Session 74 logged out. Waiting for processes to exit. Feb 9 08:05:52.438777 systemd-logind[1462]: Removed session 74. Feb 9 08:05:57.446518 systemd[1]: Started sshd@82-139.178.90.113:22-147.75.109.163:51280.service. 
Feb 9 08:05:57.505385 sshd[6069]: Accepted publickey for core from 147.75.109.163 port 51280 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:05:57.506080 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:05:57.508644 systemd-logind[1462]: New session 75 of user core. Feb 9 08:05:57.509292 systemd[1]: Started session-75.scope. Feb 9 08:05:57.619523 sshd[6069]: pam_unix(sshd:session): session closed for user core Feb 9 08:05:57.621892 systemd[1]: sshd@82-139.178.90.113:22-147.75.109.163:51280.service: Deactivated successfully. Feb 9 08:05:57.622616 systemd[1]: session-75.scope: Deactivated successfully. Feb 9 08:05:57.623274 systemd-logind[1462]: Session 75 logged out. Waiting for processes to exit. Feb 9 08:05:57.624179 systemd-logind[1462]: Removed session 75. Feb 9 08:06:02.628873 systemd[1]: Started sshd@83-139.178.90.113:22-147.75.109.163:51290.service. Feb 9 08:06:02.661835 sshd[6094]: Accepted publickey for core from 147.75.109.163 port 51290 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:02.662824 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:02.666083 systemd-logind[1462]: New session 76 of user core. Feb 9 08:06:02.666853 systemd[1]: Started session-76.scope. Feb 9 08:06:02.752741 sshd[6094]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:02.754254 systemd[1]: sshd@83-139.178.90.113:22-147.75.109.163:51290.service: Deactivated successfully. Feb 9 08:06:02.754703 systemd[1]: session-76.scope: Deactivated successfully. Feb 9 08:06:02.755144 systemd-logind[1462]: Session 76 logged out. Waiting for processes to exit. Feb 9 08:06:02.755641 systemd-logind[1462]: Removed session 76. Feb 9 08:06:07.762143 systemd[1]: Started sshd@84-139.178.90.113:22-147.75.109.163:39890.service. 
Feb 9 08:06:07.794977 sshd[6119]: Accepted publickey for core from 147.75.109.163 port 39890 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:07.795878 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:07.798963 systemd-logind[1462]: New session 77 of user core. Feb 9 08:06:07.799747 systemd[1]: Started session-77.scope. Feb 9 08:06:07.885966 sshd[6119]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:07.887561 systemd[1]: sshd@84-139.178.90.113:22-147.75.109.163:39890.service: Deactivated successfully. Feb 9 08:06:07.888004 systemd[1]: session-77.scope: Deactivated successfully. Feb 9 08:06:07.888421 systemd-logind[1462]: Session 77 logged out. Waiting for processes to exit. Feb 9 08:06:07.889139 systemd-logind[1462]: Removed session 77. Feb 9 08:06:12.895694 systemd[1]: Started sshd@85-139.178.90.113:22-147.75.109.163:39904.service. Feb 9 08:06:12.928433 sshd[6146]: Accepted publickey for core from 147.75.109.163 port 39904 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:12.929295 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:12.932191 systemd-logind[1462]: New session 78 of user core. Feb 9 08:06:12.932801 systemd[1]: Started session-78.scope. Feb 9 08:06:13.020867 sshd[6146]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:13.022332 systemd[1]: sshd@85-139.178.90.113:22-147.75.109.163:39904.service: Deactivated successfully. Feb 9 08:06:13.022751 systemd[1]: session-78.scope: Deactivated successfully. Feb 9 08:06:13.023146 systemd-logind[1462]: Session 78 logged out. Waiting for processes to exit. Feb 9 08:06:13.023559 systemd-logind[1462]: Removed session 78. Feb 9 08:06:18.030380 systemd[1]: Started sshd@86-139.178.90.113:22-147.75.109.163:50840.service. 
Feb 9 08:06:18.062782 sshd[6168]: Accepted publickey for core from 147.75.109.163 port 50840 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:18.063624 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:18.066258 systemd-logind[1462]: New session 79 of user core. Feb 9 08:06:18.066887 systemd[1]: Started session-79.scope. Feb 9 08:06:18.151696 sshd[6168]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:18.153229 systemd[1]: sshd@86-139.178.90.113:22-147.75.109.163:50840.service: Deactivated successfully. Feb 9 08:06:18.153698 systemd[1]: session-79.scope: Deactivated successfully. Feb 9 08:06:18.154117 systemd-logind[1462]: Session 79 logged out. Waiting for processes to exit. Feb 9 08:06:18.154484 systemd-logind[1462]: Removed session 79. Feb 9 08:06:23.158043 systemd[1]: Started sshd@87-139.178.90.113:22-147.75.109.163:50848.service. Feb 9 08:06:23.194609 sshd[6190]: Accepted publickey for core from 147.75.109.163 port 50848 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:23.195543 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:23.198266 systemd-logind[1462]: New session 80 of user core. Feb 9 08:06:23.198833 systemd[1]: Started session-80.scope. Feb 9 08:06:23.287507 sshd[6190]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:23.289086 systemd[1]: sshd@87-139.178.90.113:22-147.75.109.163:50848.service: Deactivated successfully. Feb 9 08:06:23.289530 systemd[1]: session-80.scope: Deactivated successfully. Feb 9 08:06:23.289948 systemd-logind[1462]: Session 80 logged out. Waiting for processes to exit. Feb 9 08:06:23.290419 systemd-logind[1462]: Removed session 80. Feb 9 08:06:28.298185 systemd[1]: Started sshd@88-139.178.90.113:22-147.75.109.163:38086.service. 
Feb 9 08:06:28.334439 sshd[6215]: Accepted publickey for core from 147.75.109.163 port 38086 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:28.335187 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:28.338002 systemd-logind[1462]: New session 81 of user core. Feb 9 08:06:28.338470 systemd[1]: Started session-81.scope. Feb 9 08:06:28.426490 sshd[6215]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:28.428072 systemd[1]: sshd@88-139.178.90.113:22-147.75.109.163:38086.service: Deactivated successfully. Feb 9 08:06:28.428502 systemd[1]: session-81.scope: Deactivated successfully. Feb 9 08:06:28.428933 systemd-logind[1462]: Session 81 logged out. Waiting for processes to exit. Feb 9 08:06:28.429427 systemd-logind[1462]: Removed session 81. Feb 9 08:06:33.436865 systemd[1]: Started sshd@89-139.178.90.113:22-147.75.109.163:38092.service. Feb 9 08:06:33.469954 sshd[6240]: Accepted publickey for core from 147.75.109.163 port 38092 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:33.470878 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:33.474024 systemd-logind[1462]: New session 82 of user core. Feb 9 08:06:33.474756 systemd[1]: Started session-82.scope. Feb 9 08:06:33.563513 sshd[6240]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:33.565092 systemd[1]: sshd@89-139.178.90.113:22-147.75.109.163:38092.service: Deactivated successfully. Feb 9 08:06:33.565538 systemd[1]: session-82.scope: Deactivated successfully. Feb 9 08:06:33.566000 systemd-logind[1462]: Session 82 logged out. Waiting for processes to exit. Feb 9 08:06:33.566527 systemd-logind[1462]: Removed session 82. Feb 9 08:06:38.571511 systemd[1]: Started sshd@90-139.178.90.113:22-147.75.109.163:43490.service. 
Feb 9 08:06:38.604810 sshd[6265]: Accepted publickey for core from 147.75.109.163 port 43490 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:38.605658 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:38.608478 systemd-logind[1462]: New session 83 of user core. Feb 9 08:06:38.609124 systemd[1]: Started session-83.scope. Feb 9 08:06:38.698515 sshd[6265]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:38.700097 systemd[1]: sshd@90-139.178.90.113:22-147.75.109.163:43490.service: Deactivated successfully. Feb 9 08:06:38.700584 systemd[1]: session-83.scope: Deactivated successfully. Feb 9 08:06:38.701051 systemd-logind[1462]: Session 83 logged out. Waiting for processes to exit. Feb 9 08:06:38.701559 systemd-logind[1462]: Removed session 83. Feb 9 08:06:43.707779 systemd[1]: Started sshd@91-139.178.90.113:22-147.75.109.163:43498.service. Feb 9 08:06:43.740862 sshd[6291]: Accepted publickey for core from 147.75.109.163 port 43498 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:43.741522 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:43.743856 systemd-logind[1462]: New session 84 of user core. Feb 9 08:06:43.744329 systemd[1]: Started session-84.scope. Feb 9 08:06:43.829097 sshd[6291]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:43.830575 systemd[1]: sshd@91-139.178.90.113:22-147.75.109.163:43498.service: Deactivated successfully. Feb 9 08:06:43.831048 systemd[1]: session-84.scope: Deactivated successfully. Feb 9 08:06:43.831459 systemd-logind[1462]: Session 84 logged out. Waiting for processes to exit. Feb 9 08:06:43.832057 systemd-logind[1462]: Removed session 84. Feb 9 08:06:48.838425 systemd[1]: Started sshd@92-139.178.90.113:22-147.75.109.163:59862.service. 
Feb 9 08:06:48.871324 sshd[6314]: Accepted publickey for core from 147.75.109.163 port 59862 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:48.872359 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:48.875436 systemd-logind[1462]: New session 85 of user core. Feb 9 08:06:48.876103 systemd[1]: Started session-85.scope. Feb 9 08:06:48.961348 sshd[6314]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:48.962948 systemd[1]: sshd@92-139.178.90.113:22-147.75.109.163:59862.service: Deactivated successfully. Feb 9 08:06:48.963448 systemd[1]: session-85.scope: Deactivated successfully. Feb 9 08:06:48.963965 systemd-logind[1462]: Session 85 logged out. Waiting for processes to exit. Feb 9 08:06:48.964537 systemd-logind[1462]: Removed session 85. Feb 9 08:06:53.972501 systemd[1]: Started sshd@93-139.178.90.113:22-147.75.109.163:59872.service. Feb 9 08:06:54.010106 sshd[6339]: Accepted publickey for core from 147.75.109.163 port 59872 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:54.010863 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:54.013780 systemd-logind[1462]: New session 86 of user core. Feb 9 08:06:54.014332 systemd[1]: Started session-86.scope. Feb 9 08:06:54.103263 sshd[6339]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:54.104956 systemd[1]: sshd@93-139.178.90.113:22-147.75.109.163:59872.service: Deactivated successfully. Feb 9 08:06:54.105430 systemd[1]: session-86.scope: Deactivated successfully. Feb 9 08:06:54.105889 systemd-logind[1462]: Session 86 logged out. Waiting for processes to exit. Feb 9 08:06:54.106442 systemd-logind[1462]: Removed session 86. Feb 9 08:06:59.105674 systemd[1]: Started sshd@94-139.178.90.113:22-147.75.109.163:48008.service. 
Feb 9 08:06:59.138818 sshd[6366]: Accepted publickey for core from 147.75.109.163 port 48008 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:06:59.139814 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:06:59.143124 systemd-logind[1462]: New session 87 of user core. Feb 9 08:06:59.143875 systemd[1]: Started session-87.scope. Feb 9 08:06:59.232981 sshd[6366]: pam_unix(sshd:session): session closed for user core Feb 9 08:06:59.234342 systemd[1]: sshd@94-139.178.90.113:22-147.75.109.163:48008.service: Deactivated successfully. Feb 9 08:06:59.234761 systemd[1]: session-87.scope: Deactivated successfully. Feb 9 08:06:59.235083 systemd-logind[1462]: Session 87 logged out. Waiting for processes to exit. Feb 9 08:06:59.235455 systemd-logind[1462]: Removed session 87. Feb 9 08:07:04.242409 systemd[1]: Started sshd@95-139.178.90.113:22-147.75.109.163:48022.service. Feb 9 08:07:04.275554 sshd[6390]: Accepted publickey for core from 147.75.109.163 port 48022 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:07:04.276431 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:07:04.279323 systemd-logind[1462]: New session 88 of user core. Feb 9 08:07:04.279913 systemd[1]: Started session-88.scope. Feb 9 08:07:04.370019 sshd[6390]: pam_unix(sshd:session): session closed for user core Feb 9 08:07:04.371908 systemd[1]: sshd@95-139.178.90.113:22-147.75.109.163:48022.service: Deactivated successfully. Feb 9 08:07:04.372264 systemd[1]: session-88.scope: Deactivated successfully. Feb 9 08:07:04.372608 systemd-logind[1462]: Session 88 logged out. Waiting for processes to exit. Feb 9 08:07:04.373201 systemd[1]: Started sshd@96-139.178.90.113:22-147.75.109.163:48036.service. Feb 9 08:07:04.373664 systemd-logind[1462]: Removed session 88. 
Feb 9 08:07:04.406528 sshd[6413]: Accepted publickey for core from 147.75.109.163 port 48036 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0 Feb 9 08:07:04.407256 sshd[6413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 08:07:04.409789 systemd-logind[1462]: New session 89 of user core. Feb 9 08:07:04.410270 systemd[1]: Started session-89.scope. Feb 9 08:07:05.764208 env[1474]: time="2024-02-09T08:07:05.764151856Z" level=info msg="StopContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" with timeout 30 (s)" Feb 9 08:07:05.764427 env[1474]: time="2024-02-09T08:07:05.764346187Z" level=info msg="Stop container \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" with signal terminated" Feb 9 08:07:05.769351 systemd[1]: cri-containerd-bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346.scope: Deactivated successfully. Feb 9 08:07:05.769514 systemd[1]: cri-containerd-bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346.scope: Consumed 1.739s CPU time. 
Feb 9 08:07:05.782693 env[1474]: time="2024-02-09T08:07:05.782657406Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 08:07:05.785660 env[1474]: time="2024-02-09T08:07:05.785643396Z" level=info msg="StopContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" with timeout 1 (s)" Feb 9 08:07:05.785775 env[1474]: time="2024-02-09T08:07:05.785762371Z" level=info msg="Stop container \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" with signal terminated" Feb 9 08:07:05.788892 systemd-networkd[1315]: lxc_health: Link DOWN Feb 9 08:07:05.788894 systemd-networkd[1315]: lxc_health: Lost carrier Feb 9 08:07:05.802382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346-rootfs.mount: Deactivated successfully. 
Feb 9 08:07:05.804792 env[1474]: time="2024-02-09T08:07:05.804764351Z" level=info msg="shim disconnected" id=bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346 Feb 9 08:07:05.804871 env[1474]: time="2024-02-09T08:07:05.804795198Z" level=warning msg="cleaning up after shim disconnected" id=bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346 namespace=k8s.io Feb 9 08:07:05.804871 env[1474]: time="2024-02-09T08:07:05.804802339Z" level=info msg="cleaning up dead shim" Feb 9 08:07:05.821788 env[1474]: time="2024-02-09T08:07:05.821740846Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6475 runtime=io.containerd.runc.v2\n" Feb 9 08:07:05.822692 env[1474]: time="2024-02-09T08:07:05.822645971Z" level=info msg="StopContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" returns successfully" Feb 9 08:07:05.823166 env[1474]: time="2024-02-09T08:07:05.823116079Z" level=info msg="StopPodSandbox for \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\"" Feb 9 08:07:05.823230 env[1474]: time="2024-02-09T08:07:05.823180645Z" level=info msg="Container to stop \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.824857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c-shm.mount: Deactivated successfully. Feb 9 08:07:05.828756 systemd[1]: cri-containerd-210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c.scope: Deactivated successfully. 
Feb 9 08:07:05.873843 env[1474]: time="2024-02-09T08:07:05.873691453Z" level=info msg="shim disconnected" id=210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c Feb 9 08:07:05.874225 env[1474]: time="2024-02-09T08:07:05.873844162Z" level=warning msg="cleaning up after shim disconnected" id=210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c namespace=k8s.io Feb 9 08:07:05.874225 env[1474]: time="2024-02-09T08:07:05.873888231Z" level=info msg="cleaning up dead shim" Feb 9 08:07:05.874309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c-rootfs.mount: Deactivated successfully. Feb 9 08:07:05.876029 systemd[1]: cri-containerd-cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb.scope: Deactivated successfully. Feb 9 08:07:05.876536 systemd[1]: cri-containerd-cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb.scope: Consumed 8.622s CPU time. Feb 9 08:07:05.890367 env[1474]: time="2024-02-09T08:07:05.890243575Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6511 runtime=io.containerd.runc.v2\n" Feb 9 08:07:05.891067 env[1474]: time="2024-02-09T08:07:05.890956480Z" level=info msg="TearDown network for sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" successfully" Feb 9 08:07:05.891067 env[1474]: time="2024-02-09T08:07:05.891012283Z" level=info msg="StopPodSandbox for \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" returns successfully" Feb 9 08:07:05.921746 env[1474]: time="2024-02-09T08:07:05.921622321Z" level=info msg="shim disconnected" id=cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb Feb 9 08:07:05.921746 env[1474]: time="2024-02-09T08:07:05.921714239Z" level=warning msg="cleaning up after shim disconnected" id=cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb namespace=k8s.io 
Feb 9 08:07:05.922207 env[1474]: time="2024-02-09T08:07:05.921756552Z" level=info msg="cleaning up dead shim" Feb 9 08:07:05.921773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb-rootfs.mount: Deactivated successfully. Feb 9 08:07:05.949847 env[1474]: time="2024-02-09T08:07:05.949766682Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6535 runtime=io.containerd.runc.v2\n" Feb 9 08:07:05.951832 env[1474]: time="2024-02-09T08:07:05.951755264Z" level=info msg="StopContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" returns successfully" Feb 9 08:07:05.952698 env[1474]: time="2024-02-09T08:07:05.952580425Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\"" Feb 9 08:07:05.952897 env[1474]: time="2024-02-09T08:07:05.952734486Z" level=info msg="Container to stop \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.952897 env[1474]: time="2024-02-09T08:07:05.952780522Z" level=info msg="Container to stop \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.952897 env[1474]: time="2024-02-09T08:07:05.952811752Z" level=info msg="Container to stop \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.952897 env[1474]: time="2024-02-09T08:07:05.952840455Z" level=info msg="Container to stop \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.952897 env[1474]: time="2024-02-09T08:07:05.952868461Z" level=info msg="Container to 
stop \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:05.966887 systemd[1]: cri-containerd-11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02.scope: Deactivated successfully. Feb 9 08:07:05.982630 kubelet[2555]: I0209 08:07:05.982567 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbdls\" (UniqueName: \"kubernetes.io/projected/0bdea72d-3afb-4099-8ccc-d7557aa5e795-kube-api-access-fbdls\") pod \"0bdea72d-3afb-4099-8ccc-d7557aa5e795\" (UID: \"0bdea72d-3afb-4099-8ccc-d7557aa5e795\") " Feb 9 08:07:05.983667 kubelet[2555]: I0209 08:07:05.982725 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdea72d-3afb-4099-8ccc-d7557aa5e795-cilium-config-path\") pod \"0bdea72d-3afb-4099-8ccc-d7557aa5e795\" (UID: \"0bdea72d-3afb-4099-8ccc-d7557aa5e795\") " Feb 9 08:07:05.983667 kubelet[2555]: W0209 08:07:05.983240 2555 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0bdea72d-3afb-4099-8ccc-d7557aa5e795/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 08:07:05.987988 kubelet[2555]: I0209 08:07:05.987919 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdea72d-3afb-4099-8ccc-d7557aa5e795-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bdea72d-3afb-4099-8ccc-d7557aa5e795" (UID: "0bdea72d-3afb-4099-8ccc-d7557aa5e795"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 08:07:05.998875 env[1474]: time="2024-02-09T08:07:05.998815057Z" level=info msg="shim disconnected" id=11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02 Feb 9 08:07:05.999062 env[1474]: time="2024-02-09T08:07:05.998883242Z" level=warning msg="cleaning up after shim disconnected" id=11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02 namespace=k8s.io Feb 9 08:07:05.999062 env[1474]: time="2024-02-09T08:07:05.998907469Z" level=info msg="cleaning up dead shim" Feb 9 08:07:06.002106 kubelet[2555]: I0209 08:07:06.002071 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdea72d-3afb-4099-8ccc-d7557aa5e795-kube-api-access-fbdls" (OuterVolumeSpecName: "kube-api-access-fbdls") pod "0bdea72d-3afb-4099-8ccc-d7557aa5e795" (UID: "0bdea72d-3afb-4099-8ccc-d7557aa5e795"). InnerVolumeSpecName "kube-api-access-fbdls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 08:07:06.007797 env[1474]: time="2024-02-09T08:07:06.007728785Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6567 runtime=io.containerd.runc.v2\n" Feb 9 08:07:06.008119 env[1474]: time="2024-02-09T08:07:06.008057692Z" level=info msg="TearDown network for sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" successfully" Feb 9 08:07:06.008119 env[1474]: time="2024-02-09T08:07:06.008087429Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" returns successfully" Feb 9 08:07:06.083676 kubelet[2555]: I0209 08:07:06.083586 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-cgroup\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " Feb 9 
08:07:06.084022 kubelet[2555]: I0209 08:07:06.083756 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:06.084022 kubelet[2555]: I0209 08:07:06.083762 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:06.084022 kubelet[2555]: I0209 08:07:06.083774 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-run\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " Feb 9 08:07:06.084022 kubelet[2555]: I0209 08:07:06.083923 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hubble-tls\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " Feb 9 08:07:06.084022 kubelet[2555]: I0209 08:07:06.083993 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-clustermesh-secrets\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") " Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084055 2555 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-net\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084109 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-xtables-lock\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084162 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cni-path\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084233 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggp9f\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-kube-api-access-ggp9f\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084224 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.084828 kubelet[2555]: I0209 08:07:06.084289 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-etc-cni-netd\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084307 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084351 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-config-path\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084403 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hostproc\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084457 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-kernel\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084379 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cni-path" (OuterVolumeSpecName: "cni-path") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.085477 kubelet[2555]: I0209 08:07:06.084519 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-lib-modules\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084420 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084604 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-bpf-maps\") pod \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\" (UID: \"2c922fd4-2685-4c9c-b9ea-0a0c75a91457\") "
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084628 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084578 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hostproc" (OuterVolumeSpecName: "hostproc") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084702 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bdea72d-3afb-4099-8ccc-d7557aa5e795-cilium-config-path\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.086137 kubelet[2555]: I0209 08:07:06.084741 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-cgroup\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084646 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084776 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-run\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084725 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 08:07:06.086805 kubelet[2555]: W0209 08:07:06.084774 2555 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2c922fd4-2685-4c9c-b9ea-0a0c75a91457/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084810 2555 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-xtables-lock\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084861 2555 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-net\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.086805 kubelet[2555]: I0209 08:07:06.084891 2555 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cni-path\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.087531 kubelet[2555]: I0209 08:07:06.084923 2555 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fbdls\" (UniqueName: \"kubernetes.io/projected/0bdea72d-3afb-4099-8ccc-d7557aa5e795-kube-api-access-fbdls\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.087531 kubelet[2555]: I0209 08:07:06.084954 2555 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-etc-cni-netd\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.089787 kubelet[2555]: I0209 08:07:06.089697 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 08:07:06.090921 kubelet[2555]: I0209 08:07:06.090827 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 08:07:06.091146 kubelet[2555]: I0209 08:07:06.090975 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 08:07:06.091281 kubelet[2555]: I0209 08:07:06.091243 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-kube-api-access-ggp9f" (OuterVolumeSpecName: "kube-api-access-ggp9f") pod "2c922fd4-2685-4c9c-b9ea-0a0c75a91457" (UID: "2c922fd4-2685-4c9c-b9ea-0a0c75a91457"). InnerVolumeSpecName "kube-api-access-ggp9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 08:07:06.170438 kubelet[2555]: I0209 08:07:06.170338 2555 scope.go:115] "RemoveContainer" containerID="bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346"
Feb 9 08:07:06.173213 env[1474]: time="2024-02-09T08:07:06.173108242Z" level=info msg="RemoveContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\""
Feb 9 08:07:06.177474 env[1474]: time="2024-02-09T08:07:06.177392258Z" level=info msg="RemoveContainer for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" returns successfully"
Feb 9 08:07:06.177930 kubelet[2555]: I0209 08:07:06.177875 2555 scope.go:115] "RemoveContainer" containerID="bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346"
Feb 9 08:07:06.178529 env[1474]: time="2024-02-09T08:07:06.178334123Z" level=error msg="ContainerStatus for \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\": not found"
Feb 9 08:07:06.178902 kubelet[2555]: E0209 08:07:06.178856 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\": not found" containerID="bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346"
Feb 9 08:07:06.179106 kubelet[2555]: I0209 08:07:06.178955 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346} err="failed to get container status \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbfbdb61b0b3726aab03157a28bfaca7532a46fcd9d25f8d915924fb19291346\": not found"
Feb 9 08:07:06.179106 kubelet[2555]: I0209 08:07:06.178988 2555 scope.go:115] "RemoveContainer" containerID="cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb"
Feb 9 08:07:06.179469 kubelet[2555]: E0209 08:07:06.179341 2555 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 08:07:06.181043 systemd[1]: Removed slice kubepods-besteffort-pod0bdea72d_3afb_4099_8ccc_d7557aa5e795.slice.
Feb 9 08:07:06.181355 systemd[1]: kubepods-besteffort-pod0bdea72d_3afb_4099_8ccc_d7557aa5e795.slice: Consumed 1.769s CPU time.
Feb 9 08:07:06.182378 env[1474]: time="2024-02-09T08:07:06.181505988Z" level=info msg="RemoveContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\""
Feb 9 08:07:06.185545 env[1474]: time="2024-02-09T08:07:06.185467939Z" level=info msg="RemoveContainer for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" returns successfully"
Feb 9 08:07:06.186009 kubelet[2555]: I0209 08:07:06.185950 2555 scope.go:115] "RemoveContainer" containerID="476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520"
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186024 2555 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ggp9f\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-kube-api-access-ggp9f\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186076 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-cilium-config-path\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186111 2555 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hostproc\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186145 2555 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186177 2555 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-lib-modules\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186207 2555 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-bpf-maps\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186238 kubelet[2555]: I0209 08:07:06.186237 2555 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-hubble-tls\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186148 systemd[1]: Removed slice kubepods-burstable-pod2c922fd4_2685_4c9c_b9ea_0a0c75a91457.slice.
Feb 9 08:07:06.187803 kubelet[2555]: I0209 08:07:06.186268 2555 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c922fd4-2685-4c9c-b9ea-0a0c75a91457-clustermesh-secrets\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\""
Feb 9 08:07:06.186409 systemd[1]: kubepods-burstable-pod2c922fd4_2685_4c9c_b9ea_0a0c75a91457.slice: Consumed 8.702s CPU time.
Feb 9 08:07:06.188625 env[1474]: time="2024-02-09T08:07:06.188490664Z" level=info msg="RemoveContainer for \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\""
Feb 9 08:07:06.192978 env[1474]: time="2024-02-09T08:07:06.192865225Z" level=info msg="RemoveContainer for \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\" returns successfully"
Feb 9 08:07:06.193327 kubelet[2555]: I0209 08:07:06.193256 2555 scope.go:115] "RemoveContainer" containerID="59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa"
Feb 9 08:07:06.195656 env[1474]: time="2024-02-09T08:07:06.195547577Z" level=info msg="RemoveContainer for \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\""
Feb 9 08:07:06.199536 env[1474]: time="2024-02-09T08:07:06.199448888Z" level=info msg="RemoveContainer for \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\" returns successfully"
Feb 9 08:07:06.199924 kubelet[2555]: I0209 08:07:06.199876 2555 scope.go:115] "RemoveContainer" containerID="f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8"
Feb 9 08:07:06.202603 env[1474]: time="2024-02-09T08:07:06.202460494Z" level=info msg="RemoveContainer for \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\""
Feb 9 08:07:06.206846 env[1474]: time="2024-02-09T08:07:06.206777512Z" level=info msg="RemoveContainer for \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\" returns successfully"
Feb 9 08:07:06.207205 kubelet[2555]: I0209 08:07:06.207159 2555 scope.go:115] "RemoveContainer" containerID="0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a"
Feb 9 08:07:06.209714 env[1474]: time="2024-02-09T08:07:06.209614229Z" level=info msg="RemoveContainer for \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\""
Feb 9 08:07:06.213578 env[1474]: time="2024-02-09T08:07:06.213442605Z" level=info msg="RemoveContainer for \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\" returns successfully"
Feb 9 08:07:06.213921 kubelet[2555]: I0209 08:07:06.213831 2555 scope.go:115] "RemoveContainer" containerID="cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb"
Feb 9 08:07:06.214470 env[1474]: time="2024-02-09T08:07:06.214313686Z" level=error msg="ContainerStatus for \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\": not found"
Feb 9 08:07:06.214782 kubelet[2555]: E0209 08:07:06.214695 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\": not found" containerID="cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb"
Feb 9 08:07:06.214782 kubelet[2555]: I0209 08:07:06.214781 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb} err="failed to get container status \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc959cd866ff1bdba2106802a6402e15e33821ab89e369c6eba883450bcbc7cb\": not found"
Feb 9 08:07:06.215106 kubelet[2555]: I0209 08:07:06.214814 2555 scope.go:115] "RemoveContainer" containerID="476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520"
Feb 9 08:07:06.215435 env[1474]: time="2024-02-09T08:07:06.215272679Z" level=error msg="ContainerStatus for \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\": not found"
Feb 9 08:07:06.215751 kubelet[2555]: E0209 08:07:06.215698 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\": not found" containerID="476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520"
Feb 9 08:07:06.215990 kubelet[2555]: I0209 08:07:06.215829 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520} err="failed to get container status \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\": rpc error: code = NotFound desc = an error occurred when try to find container \"476f78d6d02977780683b30b34a7d478df911077d1803e651d78b69668dc3520\": not found"
Feb 9 08:07:06.215990 kubelet[2555]: I0209 08:07:06.215879 2555 scope.go:115] "RemoveContainer" containerID="59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa"
Feb 9 08:07:06.216584 env[1474]: time="2024-02-09T08:07:06.216387229Z" level=error msg="ContainerStatus for \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\": not found"
Feb 9 08:07:06.216945 kubelet[2555]: E0209 08:07:06.216865 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\": not found" containerID="59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa"
Feb 9 08:07:06.216945 kubelet[2555]: I0209 08:07:06.216946 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa} err="failed to get container status \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"59e6c632550e98aa09fdb64460bf9df02d06e89c9f00f72464b6c159d53030aa\": not found"
Feb 9 08:07:06.217279 kubelet[2555]: I0209 08:07:06.216978 2555 scope.go:115] "RemoveContainer" containerID="f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8"
Feb 9 08:07:06.217667 env[1474]: time="2024-02-09T08:07:06.217471237Z" level=error msg="ContainerStatus for \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\": not found"
Feb 9 08:07:06.217966 kubelet[2555]: E0209 08:07:06.217892 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\": not found" containerID="f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8"
Feb 9 08:07:06.218188 kubelet[2555]: I0209 08:07:06.217990 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8} err="failed to get container status \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7840b845ef27e1f270408068256489af3261dca6b118752efde8b8ff7b7dcc8\": not found"
Feb 9 08:07:06.218188 kubelet[2555]: I0209 08:07:06.218035 2555 scope.go:115] "RemoveContainer" containerID="0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a"
Feb 9 08:07:06.218749 env[1474]: time="2024-02-09T08:07:06.218543603Z" level=error msg="ContainerStatus for \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\": not found"
Feb 9 08:07:06.219057 kubelet[2555]: E0209 08:07:06.218977 2555 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\": not found" containerID="0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a"
Feb 9 08:07:06.219057 kubelet[2555]: I0209 08:07:06.219052 2555 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a} err="failed to get container status \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0340a0c7d6ecb56daf488ab04609bdb1b6ef5839cb41ea29da87e420843c051a\": not found"
Feb 9 08:07:06.778389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02-rootfs.mount: Deactivated successfully.
Feb 9 08:07:06.778439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02-shm.mount: Deactivated successfully.
Feb 9 08:07:06.778473 systemd[1]: var-lib-kubelet-pods-0bdea72d\x2d3afb\x2d4099\x2d8ccc\x2dd7557aa5e795-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfbdls.mount: Deactivated successfully.
Feb 9 08:07:06.778509 systemd[1]: var-lib-kubelet-pods-2c922fd4\x2d2685\x2d4c9c\x2db9ea\x2d0a0c75a91457-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggp9f.mount: Deactivated successfully.
Feb 9 08:07:06.778540 systemd[1]: var-lib-kubelet-pods-2c922fd4\x2d2685\x2d4c9c\x2db9ea\x2d0a0c75a91457-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 08:07:06.778611 systemd[1]: var-lib-kubelet-pods-2c922fd4\x2d2685\x2d4c9c\x2db9ea\x2d0a0c75a91457-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 08:07:07.711797 sshd[6413]: pam_unix(sshd:session): session closed for user core
Feb 9 08:07:07.713718 systemd[1]: sshd@96-139.178.90.113:22-147.75.109.163:48036.service: Deactivated successfully.
Feb 9 08:07:07.714031 systemd[1]: session-89.scope: Deactivated successfully.
Feb 9 08:07:07.714356 systemd-logind[1462]: Session 89 logged out. Waiting for processes to exit.
Feb 9 08:07:07.714926 systemd[1]: Started sshd@97-139.178.90.113:22-147.75.109.163:59712.service.
Feb 9 08:07:07.715365 systemd-logind[1462]: Removed session 89.
Feb 9 08:07:07.748186 sshd[6584]: Accepted publickey for core from 147.75.109.163 port 59712 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:07:07.749211 sshd[6584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:07:07.752467 systemd-logind[1462]: New session 90 of user core.
Feb 9 08:07:07.753373 systemd[1]: Started session-90.scope.
Feb 9 08:07:07.959502 env[1474]: time="2024-02-09T08:07:07.959465448Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\""
Feb 9 08:07:07.959844 env[1474]: time="2024-02-09T08:07:07.959558187Z" level=info msg="TearDown network for sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" successfully"
Feb 9 08:07:07.959844 env[1474]: time="2024-02-09T08:07:07.959582416Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" returns successfully"
Feb 9 08:07:07.960326 kubelet[2555]: I0209 08:07:07.960270 2555 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0bdea72d-3afb-4099-8ccc-d7557aa5e795 path="/var/lib/kubelet/pods/0bdea72d-3afb-4099-8ccc-d7557aa5e795/volumes"
Feb 9 08:07:07.960565 kubelet[2555]: I0209 08:07:07.960559 2555 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2c922fd4-2685-4c9c-b9ea-0a0c75a91457 path="/var/lib/kubelet/pods/2c922fd4-2685-4c9c-b9ea-0a0c75a91457/volumes"
Feb 9 08:07:08.142444 sshd[6584]: pam_unix(sshd:session): session closed for user core
Feb 9 08:07:08.144388 systemd[1]: sshd@97-139.178.90.113:22-147.75.109.163:59712.service: Deactivated successfully.
Feb 9 08:07:08.144753 systemd[1]: session-90.scope: Deactivated successfully.
Feb 9 08:07:08.145128 systemd-logind[1462]: Session 90 logged out. Waiting for processes to exit.
Feb 9 08:07:08.145741 systemd[1]: Started sshd@98-139.178.90.113:22-147.75.109.163:59726.service.
Feb 9 08:07:08.146168 systemd-logind[1462]: Removed session 90.
Feb 9 08:07:08.151252 kubelet[2555]: I0209 08:07:08.151232 2555 topology_manager.go:210] "Topology Admit Handler"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151275 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="mount-bpf-fs"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151286 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="clean-cilium-state"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151293 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="cilium-agent"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151299 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="apply-sysctl-overwrites"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151305 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0bdea72d-3afb-4099-8ccc-d7557aa5e795" containerName="cilium-operator"
Feb 9 08:07:08.151353 kubelet[2555]: E0209 08:07:08.151312 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="mount-cgroup"
Feb 9 08:07:08.151353 kubelet[2555]: I0209 08:07:08.151331 2555 memory_manager.go:346] "RemoveStaleState removing state" podUID="0bdea72d-3afb-4099-8ccc-d7557aa5e795" containerName="cilium-operator"
Feb 9 08:07:08.151353 kubelet[2555]: I0209 08:07:08.151337 2555 memory_manager.go:346] "RemoveStaleState removing state" podUID="2c922fd4-2685-4c9c-b9ea-0a0c75a91457" containerName="cilium-agent"
Feb 9 08:07:08.155012 systemd[1]: Created slice kubepods-burstable-podaac8c542_1e32_4c11_8ad1_68ec3d3d59b1.slice.
Feb 9 08:07:08.179909 sshd[6608]: Accepted publickey for core from 147.75.109.163 port 59726 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:07:08.180798 sshd[6608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:07:08.183183 systemd-logind[1462]: New session 91 of user core.
Feb 9 08:07:08.183658 systemd[1]: Started session-91.scope.
Feb 9 08:07:08.301224 kubelet[2555]: I0209 08:07:08.301156 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-ipsec-secrets\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301402 kubelet[2555]: I0209 08:07:08.301276 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hubble-tls\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301402 kubelet[2555]: I0209 08:07:08.301347 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-bpf-maps\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301532 kubelet[2555]: I0209 08:07:08.301425 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-config-path\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301532 kubelet[2555]: I0209 08:07:08.301458 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-cgroup\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301532 kubelet[2555]: I0209 08:07:08.301489 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cni-path\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301768 kubelet[2555]: I0209 08:07:08.301536 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-lib-modules\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301768 kubelet[2555]: I0209 08:07:08.301678 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-run\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301768 kubelet[2555]: I0209 08:07:08.301763 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-xtables-lock\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301969 kubelet[2555]: I0209 08:07:08.301801 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-clustermesh-secrets\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301969 kubelet[2555]: I0209 08:07:08.301832 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-net\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.301969 kubelet[2555]: I0209 08:07:08.301912 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-kernel\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.302154 kubelet[2555]: I0209 08:07:08.302019 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-etc-cni-netd\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.302154 kubelet[2555]: I0209 08:07:08.302065 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsbq\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-kube-api-access-jbsbq\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.302282 kubelet[2555]: I0209 08:07:08.302180 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hostproc\") pod \"cilium-nc4kv\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " pod="kube-system/cilium-nc4kv"
Feb 9 08:07:08.319406 sshd[6608]: pam_unix(sshd:session): session closed for user core
Feb 9 08:07:08.324056 systemd[1]: sshd@98-139.178.90.113:22-147.75.109.163:59726.service: Deactivated successfully.
Feb 9 08:07:08.325119 systemd[1]: session-91.scope: Deactivated successfully.
Feb 9 08:07:08.326201 systemd-logind[1462]: Session 91 logged out. Waiting for processes to exit.
Feb 9 08:07:08.327987 systemd[1]: Started sshd@99-139.178.90.113:22-147.75.109.163:59730.service.
Feb 9 08:07:08.329759 systemd-logind[1462]: Removed session 91.
Feb 9 08:07:08.391506 sshd[6633]: Accepted publickey for core from 147.75.109.163 port 59730 ssh2: RSA SHA256:iyCj5yVZK3Ynnwi357zQkTbtqc3nOk8lkuinqpwqTo0
Feb 9 08:07:08.393103 sshd[6633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 08:07:08.397900 systemd-logind[1462]: New session 92 of user core.
Feb 9 08:07:08.398946 systemd[1]: Started session-92.scope.
Feb 9 08:07:08.457952 env[1474]: time="2024-02-09T08:07:08.457863799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc4kv,Uid:aac8c542-1e32-4c11-8ad1-68ec3d3d59b1,Namespace:kube-system,Attempt:0,}"
Feb 9 08:07:08.477081 env[1474]: time="2024-02-09T08:07:08.476912214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 08:07:08.477081 env[1474]: time="2024-02-09T08:07:08.476997593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 08:07:08.477081 env[1474]: time="2024-02-09T08:07:08.477031595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 08:07:08.477590 env[1474]: time="2024-02-09T08:07:08.477388338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a pid=6652 runtime=io.containerd.runc.v2
Feb 9 08:07:08.509322 systemd[1]: Started cri-containerd-2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a.scope.
Feb 9 08:07:08.538317 env[1474]: time="2024-02-09T08:07:08.538283037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nc4kv,Uid:aac8c542-1e32-4c11-8ad1-68ec3d3d59b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\""
Feb 9 08:07:08.540020 env[1474]: time="2024-02-09T08:07:08.539992417Z" level=info msg="CreateContainer within sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 08:07:08.547064 env[1474]: time="2024-02-09T08:07:08.547003310Z" level=info msg="CreateContainer within sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\""
Feb 9 08:07:08.547387 env[1474]: time="2024-02-09T08:07:08.547353575Z" level=info msg="StartContainer for \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\""
Feb 9 08:07:08.572393 systemd[1]: Started cri-containerd-1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0.scope.
Feb 9 08:07:08.581987 systemd[1]: cri-containerd-1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0.scope: Deactivated successfully.
Feb 9 08:07:08.582264 systemd[1]: Stopped cri-containerd-1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0.scope.
Feb 9 08:07:08.591761 env[1474]: time="2024-02-09T08:07:08.591702462Z" level=info msg="shim disconnected" id=1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0 Feb 9 08:07:08.591908 env[1474]: time="2024-02-09T08:07:08.591759337Z" level=warning msg="cleaning up after shim disconnected" id=1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0 namespace=k8s.io Feb 9 08:07:08.591908 env[1474]: time="2024-02-09T08:07:08.591773119Z" level=info msg="cleaning up dead shim" Feb 9 08:07:08.600996 env[1474]: time="2024-02-09T08:07:08.600920722Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6726 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T08:07:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 08:07:08.601347 env[1474]: time="2024-02-09T08:07:08.601198877Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" Feb 9 08:07:08.601524 env[1474]: time="2024-02-09T08:07:08.601480809Z" level=error msg="Failed to pipe stdout of container \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\"" error="reading from a closed fifo" Feb 9 08:07:08.601600 env[1474]: time="2024-02-09T08:07:08.601524442Z" level=error msg="Failed to pipe stderr of container \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\"" error="reading from a closed fifo" Feb 9 08:07:08.602675 env[1474]: time="2024-02-09T08:07:08.602574278Z" level=error msg="StartContainer for \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 08:07:08.602885 kubelet[2555]: E0209 08:07:08.602829 2555 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0" Feb 9 08:07:08.603018 kubelet[2555]: E0209 08:07:08.602974 2555 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 08:07:08.603018 kubelet[2555]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 08:07:08.603018 kubelet[2555]: rm /hostbin/cilium-mount Feb 9 08:07:08.603018 kubelet[2555]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jbsbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-nc4kv_kube-system(aac8c542-1e32-4c11-8ad1-68ec3d3d59b1): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 08:07:08.603336 kubelet[2555]: E0209 08:07:08.603045 2555 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nc4kv" podUID=aac8c542-1e32-4c11-8ad1-68ec3d3d59b1 Feb 9 08:07:09.190574 env[1474]: time="2024-02-09T08:07:09.190470010Z" level=info msg="StopPodSandbox for \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\"" Feb 9 08:07:09.191137 env[1474]: time="2024-02-09T08:07:09.190599185Z" level=info msg="Container to stop \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 08:07:09.210734 systemd[1]: cri-containerd-2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a.scope: Deactivated successfully. 
Feb 9 08:07:09.241715 env[1474]: time="2024-02-09T08:07:09.241606128Z" level=info msg="shim disconnected" id=2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a Feb 9 08:07:09.242227 env[1474]: time="2024-02-09T08:07:09.241713985Z" level=warning msg="cleaning up after shim disconnected" id=2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a namespace=k8s.io Feb 9 08:07:09.242227 env[1474]: time="2024-02-09T08:07:09.241744417Z" level=info msg="cleaning up dead shim" Feb 9 08:07:09.259344 env[1474]: time="2024-02-09T08:07:09.259263363Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6756 runtime=io.containerd.runc.v2\n" Feb 9 08:07:09.260035 env[1474]: time="2024-02-09T08:07:09.259931008Z" level=info msg="TearDown network for sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" successfully" Feb 9 08:07:09.260035 env[1474]: time="2024-02-09T08:07:09.259987676Z" level=info msg="StopPodSandbox for \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" returns successfully" Feb 9 08:07:09.410657 kubelet[2555]: I0209 08:07:09.410634 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-run\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a-rootfs.mount: Deactivated successfully. Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410670 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410668 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-config-path\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410702 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cni-path\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410714 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-lib-modules\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410728 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbsbq\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-kube-api-access-jbsbq\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410972 kubelet[2555]: I0209 08:07:09.410738 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-bpf-maps\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.410715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a-shm.mount: 
Deactivated successfully. Feb 9 08:07:09.411153 kubelet[2555]: I0209 08:07:09.410750 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-xtables-lock\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411153 kubelet[2555]: I0209 08:07:09.410763 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-etc-cni-netd\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411153 kubelet[2555]: I0209 08:07:09.410768 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411153 kubelet[2555]: I0209 08:07:09.410799 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-kernel\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411153 kubelet[2555]: W0209 08:07:09.410799 2555 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 08:07:09.411153 kubelet[2555]: I0209 08:07:09.410798 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411335 kubelet[2555]: I0209 08:07:09.410821 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411335 kubelet[2555]: I0209 08:07:09.410827 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cni-path" (OuterVolumeSpecName: "cni-path") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411335 kubelet[2555]: I0209 08:07:09.410836 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-clustermesh-secrets\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411335 kubelet[2555]: I0209 08:07:09.410839 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411335 kubelet[2555]: I0209 08:07:09.410854 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-net\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411493 kubelet[2555]: I0209 08:07:09.410836 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411493 kubelet[2555]: I0209 08:07:09.410892 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-ipsec-secrets\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411493 kubelet[2555]: I0209 08:07:09.410912 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hubble-tls\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411493 kubelet[2555]: I0209 08:07:09.410905 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411493 kubelet[2555]: I0209 08:07:09.410931 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-cgroup\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.410950 2555 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hostproc\") pod \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\" (UID: \"aac8c542-1e32-4c11-8ad1-68ec3d3d59b1\") " Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.410961 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.410973 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hostproc" (OuterVolumeSpecName: "hostproc") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.410983 2555 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-bpf-maps\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.410994 2555 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-xtables-lock\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411616 kubelet[2555]: I0209 08:07:09.411004 2555 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-etc-cni-netd\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411729 kubelet[2555]: I0209 08:07:09.411012 2555 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411729 kubelet[2555]: I0209 08:07:09.411018 2555 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-host-proc-sys-net\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411729 kubelet[2555]: I0209 08:07:09.411023 2555 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cni-path\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411729 kubelet[2555]: I0209 08:07:09.411030 2555 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-lib-modules\") on node 
\"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411729 kubelet[2555]: I0209 08:07:09.411036 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-run\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.411974 kubelet[2555]: I0209 08:07:09.411965 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 08:07:09.412478 kubelet[2555]: I0209 08:07:09.412465 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-kube-api-access-jbsbq" (OuterVolumeSpecName: "kube-api-access-jbsbq") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "kube-api-access-jbsbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 08:07:09.412543 kubelet[2555]: I0209 08:07:09.412523 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 08:07:09.412543 kubelet[2555]: I0209 08:07:09.412530 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 08:07:09.412631 kubelet[2555]: I0209 08:07:09.412547 2555 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" (UID: "aac8c542-1e32-4c11-8ad1-68ec3d3d59b1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 08:07:09.413174 systemd[1]: var-lib-kubelet-pods-aac8c542\x2d1e32\x2d4c11\x2d8ad1\x2d68ec3d3d59b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbsbq.mount: Deactivated successfully. Feb 9 08:07:09.413225 systemd[1]: var-lib-kubelet-pods-aac8c542\x2d1e32\x2d4c11\x2d8ad1\x2d68ec3d3d59b1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 08:07:09.413264 systemd[1]: var-lib-kubelet-pods-aac8c542\x2d1e32\x2d4c11\x2d8ad1\x2d68ec3d3d59b1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 08:07:09.413300 systemd[1]: var-lib-kubelet-pods-aac8c542\x2d1e32\x2d4c11\x2d8ad1\x2d68ec3d3d59b1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511677 2555 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-clustermesh-secrets\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511763 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511802 2555 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hubble-tls\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511837 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-cgroup\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511869 2555 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-hostproc\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.511900 kubelet[2555]: I0209 08:07:09.511901 2555 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-cilium-config-path\") on node \"ci-3510.3.2-a-d9875e643b\" DevicePath \"\"" Feb 9 08:07:09.512926 kubelet[2555]: I0209 08:07:09.511934 2555 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jbsbq\" (UniqueName: \"kubernetes.io/projected/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1-kube-api-access-jbsbq\") on node \"ci-3510.3.2-a-d9875e643b\" 
DevicePath \"\"" Feb 9 08:07:09.974286 systemd[1]: Removed slice kubepods-burstable-podaac8c542_1e32_4c11_8ad1_68ec3d3d59b1.slice. Feb 9 08:07:10.196077 kubelet[2555]: I0209 08:07:10.195973 2555 scope.go:115] "RemoveContainer" containerID="1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0" Feb 9 08:07:10.198544 env[1474]: time="2024-02-09T08:07:10.198464361Z" level=info msg="RemoveContainer for \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\"" Feb 9 08:07:10.202903 env[1474]: time="2024-02-09T08:07:10.202828627Z" level=info msg="RemoveContainer for \"1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0\" returns successfully" Feb 9 08:07:10.231206 kubelet[2555]: I0209 08:07:10.231100 2555 topology_manager.go:210] "Topology Admit Handler" Feb 9 08:07:10.231206 kubelet[2555]: E0209 08:07:10.231147 2555 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" containerName="mount-cgroup" Feb 9 08:07:10.231206 kubelet[2555]: I0209 08:07:10.231174 2555 memory_manager.go:346] "RemoveStaleState removing state" podUID="aac8c542-1e32-4c11-8ad1-68ec3d3d59b1" containerName="mount-cgroup" Feb 9 08:07:10.234500 systemd[1]: Created slice kubepods-burstable-pod0db488f9_5d09_4595_863a_b0d9c61481bf.slice. 
Feb 9 08:07:10.317647 kubelet[2555]: I0209 08:07:10.317545 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-host-proc-sys-net\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.317983 kubelet[2555]: I0209 08:07:10.317753 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-host-proc-sys-kernel\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.317983 kubelet[2555]: I0209 08:07:10.317865 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-cilium-cgroup\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318314 kubelet[2555]: I0209 08:07:10.317983 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-lib-modules\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318314 kubelet[2555]: I0209 08:07:10.318096 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0db488f9-5d09-4595-863a-b0d9c61481bf-cilium-config-path\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318314 kubelet[2555]: I0209 08:07:10.318228 2555 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0db488f9-5d09-4595-863a-b0d9c61481bf-cilium-ipsec-secrets\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318793 kubelet[2555]: I0209 08:07:10.318407 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0db488f9-5d09-4595-863a-b0d9c61481bf-hubble-tls\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318793 kubelet[2555]: I0209 08:07:10.318566 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-etc-cni-netd\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.318793 kubelet[2555]: I0209 08:07:10.318729 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0db488f9-5d09-4595-863a-b0d9c61481bf-clustermesh-secrets\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.318840 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-bpf-maps\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.318917 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vq76\" (UniqueName: 
\"kubernetes.io/projected/0db488f9-5d09-4595-863a-b0d9c61481bf-kube-api-access-8vq76\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.318977 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-cni-path\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.319041 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-cilium-run\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.319115 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-hostproc\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.319204 kubelet[2555]: I0209 08:07:10.319177 2555 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0db488f9-5d09-4595-863a-b0d9c61481bf-xtables-lock\") pod \"cilium-srdmc\" (UID: \"0db488f9-5d09-4595-863a-b0d9c61481bf\") " pod="kube-system/cilium-srdmc" Feb 9 08:07:10.536952 env[1474]: time="2024-02-09T08:07:10.536714797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srdmc,Uid:0db488f9-5d09-4595-863a-b0d9c61481bf,Namespace:kube-system,Attempt:0,}" Feb 9 08:07:10.552220 env[1474]: time="2024-02-09T08:07:10.552148468Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 08:07:10.552220 env[1474]: time="2024-02-09T08:07:10.552169312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 08:07:10.552346 env[1474]: time="2024-02-09T08:07:10.552202779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 08:07:10.552346 env[1474]: time="2024-02-09T08:07:10.552320180Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1 pid=6785 runtime=io.containerd.runc.v2 Feb 9 08:07:10.572139 systemd[1]: Started cri-containerd-13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1.scope. Feb 9 08:07:10.584824 env[1474]: time="2024-02-09T08:07:10.584767121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srdmc,Uid:0db488f9-5d09-4595-863a-b0d9c61481bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\"" Feb 9 08:07:10.586162 env[1474]: time="2024-02-09T08:07:10.586115529Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 08:07:10.591037 env[1474]: time="2024-02-09T08:07:10.591012276Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683\"" Feb 9 08:07:10.591279 env[1474]: time="2024-02-09T08:07:10.591221448Z" level=info msg="StartContainer for \"1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683\"" Feb 9 
08:07:10.612969 systemd[1]: Started cri-containerd-1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683.scope. Feb 9 08:07:10.637248 env[1474]: time="2024-02-09T08:07:10.637198090Z" level=info msg="StartContainer for \"1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683\" returns successfully" Feb 9 08:07:10.650345 systemd[1]: cri-containerd-1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683.scope: Deactivated successfully. Feb 9 08:07:10.683490 env[1474]: time="2024-02-09T08:07:10.683393126Z" level=info msg="shim disconnected" id=1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683 Feb 9 08:07:10.683490 env[1474]: time="2024-02-09T08:07:10.683466021Z" level=warning msg="cleaning up after shim disconnected" id=1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683 namespace=k8s.io Feb 9 08:07:10.683857 env[1474]: time="2024-02-09T08:07:10.683496225Z" level=info msg="cleaning up dead shim" Feb 9 08:07:10.708382 env[1474]: time="2024-02-09T08:07:10.708266978Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6867 runtime=io.containerd.runc.v2\n" Feb 9 08:07:11.181410 kubelet[2555]: E0209 08:07:11.181320 2555 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 08:07:11.208945 env[1474]: time="2024-02-09T08:07:11.208853311Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 08:07:11.222402 env[1474]: time="2024-02-09T08:07:11.222279040Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea\"" Feb 9 08:07:11.223272 env[1474]: time="2024-02-09T08:07:11.223198526Z" level=info msg="StartContainer for \"73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea\"" Feb 9 08:07:11.271107 systemd[1]: Started cri-containerd-73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea.scope. Feb 9 08:07:11.337645 env[1474]: time="2024-02-09T08:07:11.337532622Z" level=info msg="StartContainer for \"73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea\" returns successfully" Feb 9 08:07:11.352523 systemd[1]: cri-containerd-73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea.scope: Deactivated successfully. Feb 9 08:07:11.405203 env[1474]: time="2024-02-09T08:07:11.405064557Z" level=info msg="shim disconnected" id=73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea Feb 9 08:07:11.405203 env[1474]: time="2024-02-09T08:07:11.405155516Z" level=warning msg="cleaning up after shim disconnected" id=73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea namespace=k8s.io Feb 9 08:07:11.405203 env[1474]: time="2024-02-09T08:07:11.405183799Z" level=info msg="cleaning up dead shim" Feb 9 08:07:11.421284 env[1474]: time="2024-02-09T08:07:11.421206585Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6928 runtime=io.containerd.runc.v2\n" Feb 9 08:07:11.499125 systemd[1]: Started sshd@100-139.178.90.113:22-61.177.172.140:34816.service. 
Feb 9 08:07:11.695265 kubelet[2555]: I0209 08:07:11.695217 2555 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-d9875e643b" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 08:07:11.695114416 +0000 UTC m=+795.866426281 LastTransitionTime:2024-02-09 08:07:11.695114416 +0000 UTC m=+795.866426281 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 08:07:11.699458 kubelet[2555]: W0209 08:07:11.699386 2555 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaac8c542_1e32_4c11_8ad1_68ec3d3d59b1.slice/cri-containerd-1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0.scope WatchSource:0}: container "1b599fa13c83a882b41349b186205fee8d3cdf3f3aca730469bd8b6bc2e237a0" in namespace "k8s.io": not found Feb 9 08:07:11.964332 kubelet[2555]: I0209 08:07:11.964275 2555 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aac8c542-1e32-4c11-8ad1-68ec3d3d59b1 path="/var/lib/kubelet/pods/aac8c542-1e32-4c11-8ad1-68ec3d3d59b1/volumes" Feb 9 08:07:12.218880 env[1474]: time="2024-02-09T08:07:12.218662155Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 08:07:12.229904 env[1474]: time="2024-02-09T08:07:12.229880017Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e\"" Feb 9 08:07:12.230299 env[1474]: time="2024-02-09T08:07:12.230254417Z" level=info msg="StartContainer for \"a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e\"" Feb 9 08:07:12.230740 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3518976365.mount: Deactivated successfully. Feb 9 08:07:12.265932 systemd[1]: Started cri-containerd-a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e.scope. Feb 9 08:07:12.338096 env[1474]: time="2024-02-09T08:07:12.337973122Z" level=info msg="StartContainer for \"a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e\" returns successfully" Feb 9 08:07:12.343448 systemd[1]: cri-containerd-a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e.scope: Deactivated successfully. Feb 9 08:07:12.410536 env[1474]: time="2024-02-09T08:07:12.410440312Z" level=info msg="shim disconnected" id=a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e Feb 9 08:07:12.410908 env[1474]: time="2024-02-09T08:07:12.410538091Z" level=warning msg="cleaning up after shim disconnected" id=a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e namespace=k8s.io Feb 9 08:07:12.410908 env[1474]: time="2024-02-09T08:07:12.410586907Z" level=info msg="cleaning up dead shim" Feb 9 08:07:12.430865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e-rootfs.mount: Deactivated successfully. 
Feb 9 08:07:12.438847 env[1474]: time="2024-02-09T08:07:12.438719116Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6988 runtime=io.containerd.runc.v2\n" Feb 9 08:07:12.544340 sshd[6941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:13.226958 env[1474]: time="2024-02-09T08:07:13.226860148Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 08:07:13.237695 env[1474]: time="2024-02-09T08:07:13.237645471Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8\"" Feb 9 08:07:13.237923 env[1474]: time="2024-02-09T08:07:13.237910127Z" level=info msg="StartContainer for \"bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8\"" Feb 9 08:07:13.246318 systemd[1]: Started cri-containerd-bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8.scope. Feb 9 08:07:13.270083 systemd[1]: cri-containerd-bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8.scope: Deactivated successfully. 
Feb 9 08:07:13.270516 env[1474]: time="2024-02-09T08:07:13.270495339Z" level=info msg="StartContainer for \"bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8\" returns successfully" Feb 9 08:07:13.296180 env[1474]: time="2024-02-09T08:07:13.296111282Z" level=info msg="shim disconnected" id=bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8 Feb 9 08:07:13.296180 env[1474]: time="2024-02-09T08:07:13.296157026Z" level=warning msg="cleaning up after shim disconnected" id=bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8 namespace=k8s.io Feb 9 08:07:13.296180 env[1474]: time="2024-02-09T08:07:13.296169222Z" level=info msg="cleaning up dead shim" Feb 9 08:07:13.317168 env[1474]: time="2024-02-09T08:07:13.317098357Z" level=warning msg="cleanup warnings time=\"2024-02-09T08:07:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7040 runtime=io.containerd.runc.v2\n" Feb 9 08:07:13.431270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8-rootfs.mount: Deactivated successfully. 
Feb 9 08:07:14.236884 env[1474]: time="2024-02-09T08:07:14.236747866Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 08:07:14.249215 env[1474]: time="2024-02-09T08:07:14.249190843Z" level=info msg="CreateContainer within sandbox \"13d65393c8db712c2ae7a94a121757373aacfc7ae8f3012c2ece183f32f34bb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eac2e0380baa665ce17ed4912720d6542d751a19a6cffc6a6a71dc933be62527\"" Feb 9 08:07:14.249502 env[1474]: time="2024-02-09T08:07:14.249488794Z" level=info msg="StartContainer for \"eac2e0380baa665ce17ed4912720d6542d751a19a6cffc6a6a71dc933be62527\"" Feb 9 08:07:14.257937 systemd[1]: Started cri-containerd-eac2e0380baa665ce17ed4912720d6542d751a19a6cffc6a6a71dc933be62527.scope. Feb 9 08:07:14.271973 env[1474]: time="2024-02-09T08:07:14.271919898Z" level=info msg="StartContainer for \"eac2e0380baa665ce17ed4912720d6542d751a19a6cffc6a6a71dc933be62527\" returns successfully" Feb 9 08:07:14.361595 sshd[6941]: Failed password for root from 61.177.172.140 port 34816 ssh2 Feb 9 08:07:14.412560 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 08:07:14.811831 kubelet[2555]: W0209 08:07:14.811711 2555 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db488f9_5d09_4595_863a_b0d9c61481bf.slice/cri-containerd-1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683.scope WatchSource:0}: task 1a36ce89d279f639de0d36de69517058dab9fc6071f083967e437f789cc9d683 not found: not found Feb 9 08:07:15.258719 kubelet[2555]: I0209 08:07:15.258672 2555 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-srdmc" podStartSLOduration=5.258635041 pod.CreationTimestamp="2024-02-09 08:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 08:07:15.258252576 +0000 UTC m=+799.429564389" watchObservedRunningTime="2024-02-09 08:07:15.258635041 +0000 UTC m=+799.429946854" Feb 9 08:07:17.272188 systemd-networkd[1315]: lxc_health: Link UP Feb 9 08:07:17.302301 systemd-networkd[1315]: lxc_health: Gained carrier Feb 9 08:07:17.302590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 08:07:17.604663 sshd[6941]: Failed password for root from 61.177.172.140 port 34816 ssh2 Feb 9 08:07:17.920087 kubelet[2555]: W0209 08:07:17.919964 2555 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db488f9_5d09_4595_863a_b0d9c61481bf.slice/cri-containerd-73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea.scope WatchSource:0}: task 73c1e4173a88c26111d984544e0f2e484d78976a903acfc52fcfcef26ab61bea not found: not found Feb 9 08:07:18.717707 systemd-networkd[1315]: lxc_health: Gained IPv6LL Feb 9 08:07:20.177955 sshd[6941]: Failed password for root from 61.177.172.140 port 34816 ssh2 Feb 9 08:07:21.025589 kubelet[2555]: W0209 08:07:21.025445 2555 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db488f9_5d09_4595_863a_b0d9c61481bf.slice/cri-containerd-a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e.scope WatchSource:0}: task a4b4ec73bbad97077f029f6aca5869849d3a878f02cffd015379cdef4d4eea7e not found: not found Feb 9 08:07:21.035308 sshd[6941]: Received disconnect from 61.177.172.140 port 34816:11: [preauth] Feb 9 08:07:21.035308 sshd[6941]: Disconnected from authenticating user root 61.177.172.140 port 34816 [preauth] Feb 9 08:07:21.035869 sshd[6941]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:21.038002 systemd[1]: 
sshd@100-139.178.90.113:22-61.177.172.140:34816.service: Deactivated successfully. Feb 9 08:07:21.176495 systemd[1]: Started sshd@101-139.178.90.113:22-61.177.172.140:38575.service. Feb 9 08:07:22.148544 sshd[7795]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:24.134572 kubelet[2555]: W0209 08:07:24.134461 2555 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db488f9_5d09_4595_863a_b0d9c61481bf.slice/cri-containerd-bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8.scope WatchSource:0}: task bfd9c93bd67cc3bacd118d3ce4da485b21e70828b7d31ac2baa96bb4804fb4a8 not found: not found Feb 9 08:07:24.673193 sshd[7795]: Failed password for root from 61.177.172.140 port 38575 ssh2 Feb 9 08:07:24.967823 sshd[7795]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Feb 9 08:07:26.766043 sshd[7795]: Failed password for root from 61.177.172.140 port 38575 ssh2 Feb 9 08:07:29.995875 sshd[7795]: Failed password for root from 61.177.172.140 port 38575 ssh2 Feb 9 08:07:30.605182 sshd[7795]: Received disconnect from 61.177.172.140 port 38575:11: [preauth] Feb 9 08:07:30.605182 sshd[7795]: Disconnected from authenticating user root 61.177.172.140 port 38575 [preauth] Feb 9 08:07:30.605757 sshd[7795]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:30.607855 systemd[1]: sshd@101-139.178.90.113:22-61.177.172.140:38575.service: Deactivated successfully. Feb 9 08:07:30.754226 systemd[1]: Started sshd@102-139.178.90.113:22-61.177.172.140:43542.service. 
Feb 9 08:07:31.699016 sshd[7799]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:33.457317 sshd[7799]: Failed password for root from 61.177.172.140 port 43542 ssh2 Feb 9 08:07:34.073650 systemd[1]: Started sshd@103-139.178.90.113:22-218.92.0.33:1654.service. Feb 9 08:07:34.223012 sshd[7802]: Unable to negotiate with 218.92.0.33 port 1654: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 9 08:07:34.224814 systemd[1]: sshd@103-139.178.90.113:22-218.92.0.33:1654.service: Deactivated successfully. Feb 9 08:07:37.023194 sshd[7799]: Failed password for root from 61.177.172.140 port 43542 ssh2 Feb 9 08:07:39.585388 sshd[7799]: Failed password for root from 61.177.172.140 port 43542 ssh2 Feb 9 08:07:40.152861 sshd[7799]: Received disconnect from 61.177.172.140 port 43542:11: [preauth] Feb 9 08:07:40.152861 sshd[7799]: Disconnected from authenticating user root 61.177.172.140 port 43542 [preauth] Feb 9 08:07:40.153407 sshd[7799]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.172.140 user=root Feb 9 08:07:40.155482 systemd[1]: sshd@102-139.178.90.113:22-61.177.172.140:43542.service: Deactivated successfully. Feb 9 08:07:53.578534 systemd[1]: Starting systemd-tmpfiles-clean.service... Feb 9 08:07:53.584612 systemd-tmpfiles[7835]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 08:07:53.584840 systemd-tmpfiles[7835]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 08:07:53.585509 systemd-tmpfiles[7835]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 08:07:53.603475 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Feb 9 08:07:53.603960 systemd[1]: Finished systemd-tmpfiles-clean.service. 
Feb 9 08:07:53.609816 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Feb 9 08:07:55.959433 env[1474]: time="2024-02-09T08:07:55.959407138Z" level=info msg="StopPodSandbox for \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\"" Feb 9 08:07:55.959694 env[1474]: time="2024-02-09T08:07:55.959478324Z" level=info msg="TearDown network for sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" successfully" Feb 9 08:07:55.959694 env[1474]: time="2024-02-09T08:07:55.959502882Z" level=info msg="StopPodSandbox for \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" returns successfully" Feb 9 08:07:55.959771 env[1474]: time="2024-02-09T08:07:55.959688747Z" level=info msg="RemovePodSandbox for \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\"" Feb 9 08:07:55.959771 env[1474]: time="2024-02-09T08:07:55.959708338Z" level=info msg="Forcibly stopping sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\"" Feb 9 08:07:55.959771 env[1474]: time="2024-02-09T08:07:55.959754992Z" level=info msg="TearDown network for sandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" successfully" Feb 9 08:07:55.961719 env[1474]: time="2024-02-09T08:07:55.961702761Z" level=info msg="RemovePodSandbox \"210624dd0c82ea6d1f9b46dc78c2a8faab7b6568457fb484f48752fddf10062c\" returns successfully" Feb 9 08:07:55.961939 env[1474]: time="2024-02-09T08:07:55.961925030Z" level=info msg="StopPodSandbox for \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\"" Feb 9 08:07:55.961991 env[1474]: time="2024-02-09T08:07:55.961970429Z" level=info msg="TearDown network for sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" successfully" Feb 9 08:07:55.962018 env[1474]: time="2024-02-09T08:07:55.961991147Z" level=info msg="StopPodSandbox for \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" 
returns successfully" Feb 9 08:07:55.962159 env[1474]: time="2024-02-09T08:07:55.962145804Z" level=info msg="RemovePodSandbox for \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\"" Feb 9 08:07:55.962187 env[1474]: time="2024-02-09T08:07:55.962163619Z" level=info msg="Forcibly stopping sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\"" Feb 9 08:07:55.962220 env[1474]: time="2024-02-09T08:07:55.962210462Z" level=info msg="TearDown network for sandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" successfully" Feb 9 08:07:55.963479 env[1474]: time="2024-02-09T08:07:55.963461801Z" level=info msg="RemovePodSandbox \"2d661f958d332c25427496f0ddcae507a02fa22b7423065aecf8edbb8c785c2a\" returns successfully" Feb 9 08:07:55.963620 env[1474]: time="2024-02-09T08:07:55.963604613Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\"" Feb 9 08:07:55.963672 env[1474]: time="2024-02-09T08:07:55.963650857Z" level=info msg="TearDown network for sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" successfully" Feb 9 08:07:55.963705 env[1474]: time="2024-02-09T08:07:55.963671943Z" level=info msg="StopPodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" returns successfully" Feb 9 08:07:55.963850 env[1474]: time="2024-02-09T08:07:55.963835684Z" level=info msg="RemovePodSandbox for \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\"" Feb 9 08:07:55.963879 env[1474]: time="2024-02-09T08:07:55.963854185Z" level=info msg="Forcibly stopping sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\"" Feb 9 08:07:55.963906 env[1474]: time="2024-02-09T08:07:55.963897923Z" level=info msg="TearDown network for sandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" successfully" Feb 9 08:07:55.965115 env[1474]: time="2024-02-09T08:07:55.965099909Z" level=info 
msg="RemovePodSandbox \"11fdf0b384a7eee6a1a22caecec0eba8de8e8fe26423732cbf5acda9a3112d02\" returns successfully" Feb 9 08:08:20.733429 sshd[6633]: pam_unix(sshd:session): session closed for user core Feb 9 08:08:20.739122 systemd[1]: sshd@99-139.178.90.113:22-147.75.109.163:59730.service: Deactivated successfully. Feb 9 08:08:20.741031 systemd[1]: session-92.scope: Deactivated successfully. Feb 9 08:08:20.742728 systemd-logind[1462]: Session 92 logged out. Waiting for processes to exit. Feb 9 08:08:20.745124 systemd-logind[1462]: Removed session 92.