Sep 13 02:31:03.553283 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Sep 13 02:31:03.553297 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 02:31:03.553303 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 02:31:03.553308 kernel: BIOS-provided physical RAM map:
Sep 13 02:31:03.553311 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 13 02:31:03.553315 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 13 02:31:03.553320 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 13 02:31:03.553324 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 13 02:31:03.553328 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 13 02:31:03.553332 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbdfff] usable
Sep 13 02:31:03.553336 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x000000006dfbefff] ACPI NVS
Sep 13 02:31:03.553340 kernel: BIOS-e820: [mem 0x000000006dfbf000-0x000000006dfbffff] reserved
Sep 13 02:31:03.553343 kernel: BIOS-e820: [mem 0x000000006dfc0000-0x0000000077fc6fff] usable
Sep 13 02:31:03.553347 kernel: BIOS-e820: [mem 0x0000000077fc7000-0x00000000790a9fff] reserved
Sep 13 02:31:03.553353 kernel: BIOS-e820: [mem 0x00000000790aa000-0x0000000079232fff] usable
Sep 13 02:31:03.553360 kernel: BIOS-e820: [mem 0x0000000079233000-0x0000000079664fff] ACPI NVS
Sep 13 02:31:03.553365 kernel: BIOS-e820: [mem 0x0000000079665000-0x000000007befefff] reserved
Sep 13 02:31:03.553369 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Sep 13 02:31:03.553386 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Sep 13 02:31:03.553390 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 02:31:03.553395 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 13 02:31:03.553399 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 13 02:31:03.553403 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 02:31:03.553408 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 13 02:31:03.553412 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Sep 13 02:31:03.553416 kernel: NX (Execute Disable) protection: active
Sep 13 02:31:03.553420 kernel: SMBIOS 3.2.1 present.
Sep 13 02:31:03.553424 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Sep 13 02:31:03.553428 kernel: tsc: Detected 3400.000 MHz processor
Sep 13 02:31:03.553432 kernel: tsc: Detected 3399.906 MHz TSC
Sep 13 02:31:03.553436 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 02:31:03.553441 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 02:31:03.553445 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Sep 13 02:31:03.553450 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 02:31:03.553455 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Sep 13 02:31:03.553459 kernel: Using GB pages for direct mapping
Sep 13 02:31:03.553463 kernel: ACPI: Early table checksum verification disabled
Sep 13 02:31:03.553467 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 13 02:31:03.553471 kernel: ACPI: XSDT 0x00000000795460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 13 02:31:03.553476 kernel: ACPI: FACP 0x0000000079582620 000114 (v06 01072009 AMI 00010013)
Sep 13 02:31:03.553482 kernel: ACPI: DSDT 0x0000000079546268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 13 02:31:03.553487 kernel: ACPI: FACS 0x0000000079664F80 000040
Sep 13 02:31:03.553492 kernel: ACPI: APIC 0x0000000079582738 00012C (v04 01072009 AMI 00010013)
Sep 13 02:31:03.553496 kernel: ACPI: FPDT 0x0000000079582868 000044 (v01 01072009 AMI 00010013)
Sep 13 02:31:03.553501 kernel: ACPI: FIDT 0x00000000795828B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 13 02:31:03.553506 kernel: ACPI: MCFG 0x0000000079582950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 13 02:31:03.553510 kernel: ACPI: SPMI 0x0000000079582990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 13 02:31:03.553516 kernel: ACPI: SSDT 0x00000000795829D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 13 02:31:03.553520 kernel: ACPI: SSDT 0x00000000795844F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 13 02:31:03.553525 kernel: ACPI: SSDT 0x00000000795876C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 13 02:31:03.553529 kernel: ACPI: HPET 0x00000000795899F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:31:03.553534 kernel: ACPI: SSDT 0x0000000079589A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 13 02:31:03.553539 kernel: ACPI: SSDT 0x000000007958A9D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 13 02:31:03.553543 kernel: ACPI: UEFI 0x000000007958B2D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:31:03.553548 kernel: ACPI: LPIT 0x000000007958B318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:31:03.553552 kernel: ACPI: SSDT 0x000000007958B3B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 13 02:31:03.553558 kernel: ACPI: SSDT 0x000000007958DB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 13 02:31:03.553562 kernel: ACPI: DBGP 0x000000007958F078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:31:03.553567 kernel: ACPI: DBG2 0x000000007958F0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 13 02:31:03.553571 kernel: ACPI: SSDT 0x000000007958F108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 13 02:31:03.553576 kernel: ACPI: DMAR 0x0000000079590C70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Sep 13 02:31:03.553580 kernel: ACPI: SSDT 0x0000000079590D18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 13 02:31:03.553585 kernel: ACPI: TPM2 0x0000000079590E60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 13 02:31:03.553590 kernel: ACPI: SSDT 0x0000000079590E98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 13 02:31:03.553595 kernel: ACPI: WSMT 0x0000000079591C28 000028 (v01 \xf5m 01072009 AMI 00010013)
Sep 13 02:31:03.553600 kernel: ACPI: EINJ 0x0000000079591C50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 13 02:31:03.553605 kernel: ACPI: ERST 0x0000000079591D80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 13 02:31:03.553609 kernel: ACPI: BERT 0x0000000079591FB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 13 02:31:03.553614 kernel: ACPI: HEST 0x0000000079591FE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 13 02:31:03.553618 kernel: ACPI: SSDT 0x0000000079592260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 13 02:31:03.553623 kernel: ACPI: Reserving FACP table memory at [mem 0x79582620-0x79582733]
Sep 13 02:31:03.553627 kernel: ACPI: Reserving DSDT table memory at [mem 0x79546268-0x7958261e]
Sep 13 02:31:03.553632 kernel: ACPI: Reserving FACS table memory at [mem 0x79664f80-0x79664fbf]
Sep 13 02:31:03.553637 kernel: ACPI: Reserving APIC table memory at [mem 0x79582738-0x79582863]
Sep 13 02:31:03.553642 kernel: ACPI: Reserving FPDT table memory at [mem 0x79582868-0x795828ab]
Sep 13 02:31:03.553646 kernel: ACPI: Reserving FIDT table memory at [mem 0x795828b0-0x7958294b]
Sep 13 02:31:03.553651 kernel: ACPI: Reserving MCFG table memory at [mem 0x79582950-0x7958298b]
Sep 13 02:31:03.553655 kernel: ACPI: Reserving SPMI table memory at [mem 0x79582990-0x795829d0]
Sep 13 02:31:03.553660 kernel: ACPI: Reserving SSDT table memory at [mem 0x795829d8-0x795844f3]
Sep 13 02:31:03.553664 kernel: ACPI: Reserving SSDT table memory at [mem 0x795844f8-0x795876bd]
Sep 13 02:31:03.553669 kernel: ACPI: Reserving SSDT table memory at [mem 0x795876c0-0x795899ea]
Sep 13 02:31:03.553673 kernel: ACPI: Reserving HPET table memory at [mem 0x795899f0-0x79589a27]
Sep 13 02:31:03.553679 kernel: ACPI: Reserving SSDT table memory at [mem 0x79589a28-0x7958a9d5]
Sep 13 02:31:03.553683 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958a9d8-0x7958b2ce]
Sep 13 02:31:03.553688 kernel: ACPI: Reserving UEFI table memory at [mem 0x7958b2d0-0x7958b311]
Sep 13 02:31:03.553692 kernel: ACPI: Reserving LPIT table memory at [mem 0x7958b318-0x7958b3ab]
Sep 13 02:31:03.553697 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958b3b0-0x7958db8d]
Sep 13 02:31:03.553701 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958db90-0x7958f071]
Sep 13 02:31:03.553706 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958f078-0x7958f0ab]
Sep 13 02:31:03.553710 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958f0b0-0x7958f103]
Sep 13 02:31:03.553715 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958f108-0x79590c6e]
Sep 13 02:31:03.553720 kernel: ACPI: Reserving DMAR table memory at [mem 0x79590c70-0x79590d17]
Sep 13 02:31:03.553725 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590d18-0x79590e5b]
Sep 13 02:31:03.553729 kernel: ACPI: Reserving TPM2 table memory at [mem 0x79590e60-0x79590e93]
Sep 13 02:31:03.553734 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590e98-0x79591c26]
Sep 13 02:31:03.553738 kernel: ACPI: Reserving WSMT table memory at [mem 0x79591c28-0x79591c4f]
Sep 13 02:31:03.553743 kernel: ACPI: Reserving EINJ table memory at [mem 0x79591c50-0x79591d7f]
Sep 13 02:31:03.553747 kernel: ACPI: Reserving ERST table memory at [mem 0x79591d80-0x79591faf]
Sep 13 02:31:03.553752 kernel: ACPI: Reserving BERT table memory at [mem 0x79591fb0-0x79591fdf]
Sep 13 02:31:03.553756 kernel: ACPI: Reserving HEST table memory at [mem 0x79591fe0-0x7959225b]
Sep 13 02:31:03.553761 kernel: ACPI: Reserving SSDT table memory at [mem 0x79592260-0x795923c1]
Sep 13 02:31:03.553766 kernel: No NUMA configuration found
Sep 13 02:31:03.553771 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Sep 13 02:31:03.553775 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Sep 13 02:31:03.553780 kernel: Zone ranges:
Sep 13 02:31:03.553784 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 02:31:03.553789 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 02:31:03.553793 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 02:31:03.553798 kernel: Movable zone start for each node
Sep 13 02:31:03.553804 kernel: Early memory node ranges
Sep 13 02:31:03.553808 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 13 02:31:03.553813 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 13 02:31:03.553818 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbdfff]
Sep 13 02:31:03.553822 kernel: node 0: [mem 0x000000006dfc0000-0x0000000077fc6fff]
Sep 13 02:31:03.553827 kernel: node 0: [mem 0x00000000790aa000-0x0000000079232fff]
Sep 13 02:31:03.553831 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Sep 13 02:31:03.553836 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Sep 13 02:31:03.553840 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Sep 13 02:31:03.553849 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 02:31:03.553854 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 13 02:31:03.553858 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 13 02:31:03.553865 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 13 02:31:03.553870 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Sep 13 02:31:03.553874 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges
Sep 13 02:31:03.553879 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Sep 13 02:31:03.553884 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Sep 13 02:31:03.553890 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 13 02:31:03.553895 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 02:31:03.553900 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 02:31:03.553905 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 02:31:03.553910 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 02:31:03.553915 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 02:31:03.553919 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 02:31:03.553924 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 02:31:03.553929 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 02:31:03.553935 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 02:31:03.553940 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 02:31:03.553945 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 02:31:03.553949 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 02:31:03.553954 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 02:31:03.553959 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 02:31:03.553964 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 02:31:03.553969 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 02:31:03.553973 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 13 02:31:03.553979 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 02:31:03.553984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 02:31:03.553989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 02:31:03.553994 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 02:31:03.553999 kernel: TSC deadline timer available
Sep 13 02:31:03.554003 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 13 02:31:03.554008 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Sep 13 02:31:03.554013 kernel: Booting paravirtualized kernel on bare hardware
Sep 13 02:31:03.554018 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 02:31:03.554024 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Sep 13 02:31:03.554029 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Sep 13 02:31:03.554034 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Sep 13 02:31:03.554038 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 02:31:03.554043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222329
Sep 13 02:31:03.554048 kernel: Policy zone: Normal
Sep 13 02:31:03.554054 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 02:31:03.554059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 02:31:03.554064 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 13 02:31:03.554069 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 13 02:31:03.554074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 02:31:03.554079 kernel: Memory: 32681620K/33411996K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 730116K reserved, 0K cma-reserved)
Sep 13 02:31:03.554084 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 02:31:03.554089 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 02:31:03.554094 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 02:31:03.554099 kernel: rcu: Hierarchical RCU implementation.
Sep 13 02:31:03.554104 kernel: rcu: RCU event tracing is enabled.
Sep 13 02:31:03.554110 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 02:31:03.554115 kernel: Rude variant of Tasks RCU enabled.
Sep 13 02:31:03.554120 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 02:31:03.554125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 02:31:03.554129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 02:31:03.554134 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 13 02:31:03.554139 kernel: random: crng init done
Sep 13 02:31:03.554144 kernel: Console: colour dummy device 80x25
Sep 13 02:31:03.554149 kernel: printk: console [tty0] enabled
Sep 13 02:31:03.554154 kernel: printk: console [ttyS1] enabled
Sep 13 02:31:03.554159 kernel: ACPI: Core revision 20210730
Sep 13 02:31:03.554164 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Sep 13 02:31:03.554169 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 02:31:03.554174 kernel: DMAR: Host address width 39
Sep 13 02:31:03.554179 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Sep 13 02:31:03.554184 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Sep 13 02:31:03.554189 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 13 02:31:03.554194 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 13 02:31:03.554199 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Sep 13 02:31:03.554204 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Sep 13 02:31:03.554209 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Sep 13 02:31:03.554214 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 13 02:31:03.554219 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 13 02:31:03.554224 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 13 02:31:03.554229 kernel: x2apic enabled
Sep 13 02:31:03.554233 kernel: Switched APIC routing to cluster x2apic.
Sep 13 02:31:03.554238 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 02:31:03.554244 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 13 02:31:03.554249 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 13 02:31:03.554254 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 13 02:31:03.554259 kernel: process: using mwait in idle threads
Sep 13 02:31:03.554264 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 02:31:03.554269 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 02:31:03.554274 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 02:31:03.554279 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 02:31:03.554284 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 02:31:03.554289 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 02:31:03.554294 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 02:31:03.554299 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 02:31:03.554304 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 02:31:03.554309 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 02:31:03.554314 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 02:31:03.554319 kernel: TAA: Mitigation: TSX disabled
Sep 13 02:31:03.554324 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 13 02:31:03.554328 kernel: SRBDS: Mitigation: Microcode
Sep 13 02:31:03.554334 kernel: GDS: Vulnerable: No microcode
Sep 13 02:31:03.554339 kernel: active return thunk: its_return_thunk
Sep 13 02:31:03.554344 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 02:31:03.554349 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 02:31:03.554354 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 02:31:03.554360 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 02:31:03.554365 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 02:31:03.554370 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 02:31:03.554393 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 02:31:03.554399 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 02:31:03.554418 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 02:31:03.554423 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 13 02:31:03.554428 kernel: Freeing SMP alternatives memory: 32K
Sep 13 02:31:03.554433 kernel: pid_max: default: 32768 minimum: 301
Sep 13 02:31:03.554437 kernel: LSM: Security Framework initializing
Sep 13 02:31:03.554442 kernel: SELinux: Initializing.
Sep 13 02:31:03.554447 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 02:31:03.554452 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 02:31:03.554458 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 13 02:31:03.554463 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 02:31:03.554468 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 13 02:31:03.554473 kernel: ... version: 4
Sep 13 02:31:03.554478 kernel: ... bit width: 48
Sep 13 02:31:03.554483 kernel: ... generic registers: 4
Sep 13 02:31:03.554487 kernel: ... value mask: 0000ffffffffffff
Sep 13 02:31:03.554492 kernel: ... max period: 00007fffffffffff
Sep 13 02:31:03.554497 kernel: ... fixed-purpose events: 3
Sep 13 02:31:03.554503 kernel: ... event mask: 000000070000000f
Sep 13 02:31:03.554508 kernel: signal: max sigframe size: 2032
Sep 13 02:31:03.554512 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 02:31:03.554517 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 13 02:31:03.554522 kernel: smp: Bringing up secondary CPUs ...
Sep 13 02:31:03.554527 kernel: x86: Booting SMP configuration:
Sep 13 02:31:03.554532 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Sep 13 02:31:03.554537 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 02:31:03.554543 kernel: #9 #10 #11 #12 #13 #14 #15
Sep 13 02:31:03.554548 kernel: smp: Brought up 1 node, 16 CPUs
Sep 13 02:31:03.554553 kernel: smpboot: Max logical packages: 1
Sep 13 02:31:03.554558 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 13 02:31:03.554563 kernel: devtmpfs: initialized
Sep 13 02:31:03.554567 kernel: x86/mm: Memory block size: 128MB
Sep 13 02:31:03.554572 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbe000-0x6dfbefff] (4096 bytes)
Sep 13 02:31:03.554577 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79233000-0x79664fff] (4399104 bytes)
Sep 13 02:31:03.554582 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 02:31:03.554588 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 02:31:03.554593 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 02:31:03.554598 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 02:31:03.554602 kernel: audit: initializing netlink subsys (disabled)
Sep 13 02:31:03.554607 kernel: audit: type=2000 audit(1757730657.132:1): state=initialized audit_enabled=0 res=1
Sep 13 02:31:03.554612 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 02:31:03.554617 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 02:31:03.554622 kernel: cpuidle: using governor menu
Sep 13 02:31:03.554627 kernel: ACPI: bus type PCI registered
Sep 13 02:31:03.554633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 02:31:03.554637 kernel: dca service started, version 1.12.1
Sep 13 02:31:03.554643 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 13 02:31:03.554647 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Sep 13 02:31:03.554652 kernel: PCI: Using configuration type 1 for base access
Sep 13 02:31:03.554657 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 13 02:31:03.554662 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 02:31:03.554667 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 02:31:03.554672 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 02:31:03.554677 kernel: ACPI: Added _OSI(Module Device)
Sep 13 02:31:03.554682 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 02:31:03.554687 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 02:31:03.554692 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 02:31:03.554697 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 02:31:03.554701 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 02:31:03.554706 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 13 02:31:03.554711 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554716 kernel: ACPI: SSDT 0xFFFF93264021B400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 13 02:31:03.554722 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Sep 13 02:31:03.554727 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554731 kernel: ACPI: SSDT 0xFFFF932641CE8800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 13 02:31:03.554736 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554741 kernel: ACPI: SSDT 0xFFFF932641C5F000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 13 02:31:03.554746 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554751 kernel: ACPI: SSDT 0xFFFF932641D4E800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 13 02:31:03.554755 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554760 kernel: ACPI: SSDT 0xFFFF93264014D000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 13 02:31:03.554765 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 02:31:03.554771 kernel: ACPI: SSDT 0xFFFF932641CED800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 13 02:31:03.554776 kernel: ACPI: Interpreter enabled
Sep 13 02:31:03.554781 kernel: ACPI: PM: (supports S0 S5)
Sep 13 02:31:03.554785 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 02:31:03.554790 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 13 02:31:03.554795 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 13 02:31:03.554800 kernel: HEST: Table parsing has been initialized.
Sep 13 02:31:03.554805 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 13 02:31:03.554810 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 02:31:03.554815 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 13 02:31:03.554820 kernel: ACPI: PM: Power Resource [USBC]
Sep 13 02:31:03.554825 kernel: ACPI: PM: Power Resource [V0PR]
Sep 13 02:31:03.554830 kernel: ACPI: PM: Power Resource [V1PR]
Sep 13 02:31:03.554835 kernel: ACPI: PM: Power Resource [V2PR]
Sep 13 02:31:03.554840 kernel: ACPI: PM: Power Resource [WRST]
Sep 13 02:31:03.554844 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 13 02:31:03.554849 kernel: ACPI: PM: Power Resource [FN00]
Sep 13 02:31:03.554854 kernel: ACPI: PM: Power Resource [FN01]
Sep 13 02:31:03.554860 kernel: ACPI: PM: Power Resource [FN02]
Sep 13 02:31:03.554865 kernel: ACPI: PM: Power Resource [FN03]
Sep 13 02:31:03.554869 kernel: ACPI: PM: Power Resource [FN04]
Sep 13 02:31:03.554874 kernel: ACPI: PM: Power Resource [PIN]
Sep 13 02:31:03.554879 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 13 02:31:03.554946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 02:31:03.554993 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 13 02:31:03.555035 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 13 02:31:03.555043 kernel: PCI host bridge to bus 0000:00
Sep 13 02:31:03.555091 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 02:31:03.555130 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 02:31:03.555168 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 02:31:03.555206 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Sep 13 02:31:03.555243 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 13 02:31:03.555281 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 13 02:31:03.555334 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 13 02:31:03.555405 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 13 02:31:03.555464 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.555512 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Sep 13 02:31:03.555556 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.555603 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Sep 13 02:31:03.555649 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Sep 13 02:31:03.555691 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Sep 13 02:31:03.555735 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Sep 13 02:31:03.555783 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 13 02:31:03.555827 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Sep 13 02:31:03.555876 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 13 02:31:03.555921 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Sep 13 02:31:03.555968 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 13 02:31:03.556012 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Sep 13 02:31:03.556055 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 13 02:31:03.556100 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 13 02:31:03.556144 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Sep 13 02:31:03.556188 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Sep 13 02:31:03.556234 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 13 02:31:03.556276 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:31:03.556324 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 13 02:31:03.556387 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:31:03.556456 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 13 02:31:03.556500 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Sep 13 02:31:03.556545 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 13 02:31:03.556593 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 13 02:31:03.556636 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Sep 13 02:31:03.556679 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 13 02:31:03.556725 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 13 02:31:03.556769 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Sep 13 02:31:03.556813 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 13 02:31:03.556860 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 13 02:31:03.556903 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Sep 13 02:31:03.556945 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Sep 13 02:31:03.556988 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Sep 13 02:31:03.557030 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Sep 13 02:31:03.557072 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Sep 13 02:31:03.557117 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Sep 13 02:31:03.557158 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 13 02:31:03.557207 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 13 02:31:03.557251 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.557301 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 13 02:31:03.557345 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.557433 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 13 02:31:03.557477 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.557523 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 13 02:31:03.557567 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.557618 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Sep 13 02:31:03.557664 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.557710 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 13 02:31:03.557755 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 02:31:03.557802 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 13 02:31:03.557850 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 13 02:31:03.557894 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Sep 13 02:31:03.557937 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 13 02:31:03.558040 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 13 02:31:03.558084 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 13 02:31:03.558130 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 02:31:03.558178 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Sep 13 02:31:03.558224 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 13 02:31:03.558271 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Sep 13 02:31:03.558316 kernel: pci 0000:02:00.0: PME# supported from D3cold
Sep 13 02:31:03.558380 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 02:31:03.558444 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 02:31:03.558496 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Sep 13 02:31:03.558541 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 13 02:31:03.558586 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Sep 13 02:31:03.558632 kernel: pci 0000:02:00.1: PME# supported from D3cold
Sep 13 02:31:03.558677 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 02:31:03.558721 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 02:31:03.558765 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 13 02:31:03.558808 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Sep 13 02:31:03.558851 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 02:31:03.558894 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 13 02:31:03.558944 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 13 02:31:03.558991 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 13 02:31:03.559037 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Sep 13 02:31:03.559081 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 13 02:31:03.559124 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Sep 13 02:31:03.559169 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.559212 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 13 02:31:03.559256 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 13 02:31:03.559299 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Sep 13 02:31:03.559348 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Sep 13 02:31:03.559432 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Sep 13 02:31:03.559477 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Sep 13 02:31:03.559521 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Sep 13 02:31:03.559567 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Sep 13 02:31:03.559610 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Sep 13 02:31:03.559656 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 13 02:31:03.559699 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 13 02:31:03.559741 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Sep 13 02:31:03.559785 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 13 02:31:03.559833 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Sep 13 02:31:03.559880 kernel: pci 0000:07:00.0: enabling Extended Tags
Sep 13 02:31:03.559924 kernel: pci 0000:07:00.0: supports D1 D2
Sep 13 02:31:03.559968 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 02:31:03.560013 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 13 02:31:03.560057 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 13 02:31:03.560099 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.560148 kernel: pci_bus 0000:08: extended config space not accessible
Sep 13 02:31:03.560201 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Sep 13 02:31:03.560249 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff]
Sep 13 02:31:03.560297 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff]
Sep 13 02:31:03.560345 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Sep 13 02:31:03.560428 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 02:31:03.560474 kernel: pci 0000:08:00.0: supports D1 D2
Sep 13 02:31:03.560522 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 02:31:03.560567 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 13 02:31:03.560612 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 13 02:31:03.560657 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.560666 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Sep 13 02:31:03.560672 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Sep 13 02:31:03.560677 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Sep 13 02:31:03.560682 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Sep 13 02:31:03.560688 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Sep 13 02:31:03.560693 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Sep 13 02:31:03.560698 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Sep 13 02:31:03.560703 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Sep 13 02:31:03.560709 kernel: iommu: Default domain type: Translated
Sep 13 02:31:03.560715 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 02:31:03.560761 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Sep 13 02:31:03.560808 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 02:31:03.560855 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Sep 13 02:31:03.560862 kernel: vgaarb: loaded
Sep 13 02:31:03.560868 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 02:31:03.560873 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 02:31:03.560879 kernel: PTP clock support registered
Sep 13 02:31:03.560884 kernel: PCI: Using ACPI for IRQ routing
Sep 13 02:31:03.560890 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 02:31:03.560895 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Sep 13 02:31:03.560901 kernel: e820: reserve RAM buffer [mem 0x6dfbe000-0x6fffffff]
Sep 13 02:31:03.560906 kernel: e820: reserve RAM buffer [mem 0x77fc7000-0x77ffffff]
Sep 13 02:31:03.560911 kernel: e820: reserve RAM buffer [mem 0x79233000-0x7bffffff]
Sep 13 02:31:03.560916 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff]
Sep 13 02:31:03.560921 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff]
Sep 13 02:31:03.560926 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Sep 13 02:31:03.560931 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Sep 13 02:31:03.560937 kernel: clocksource: Switched to clocksource tsc-early
Sep 13 02:31:03.560943 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 02:31:03.560948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 02:31:03.560953 kernel: pnp: PnP ACPI init
Sep 13 02:31:03.561000 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Sep 13 02:31:03.561045 kernel: pnp 00:02: [dma 0 disabled]
Sep 13 02:31:03.561089 kernel: pnp 00:03: [dma 0 disabled]
Sep 13 02:31:03.561134 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Sep 13 02:31:03.561174 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Sep 13 02:31:03.561217 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Sep 13 02:31:03.561262 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Sep 13 02:31:03.561302 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Sep 13 02:31:03.561341 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Sep 13 02:31:03.561382 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Sep 13 02:31:03.561424 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Sep 13 02:31:03.561463 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Sep 13 02:31:03.561503 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Sep 13 02:31:03.561541 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Sep 13 02:31:03.561582 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Sep 13 02:31:03.561623 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Sep 13 02:31:03.561664 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Sep 13 02:31:03.561703 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Sep 13 02:31:03.561741 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Sep 13 02:31:03.561780 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Sep 13 02:31:03.561820 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Sep 13 02:31:03.561863 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Sep 13 02:31:03.561871 kernel: pnp: PnP ACPI: found 10 devices
Sep 13 02:31:03.561877 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 02:31:03.561883 kernel: NET: Registered PF_INET protocol family
Sep 13 02:31:03.561889 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 02:31:03.561894 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 02:31:03.561899 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 02:31:03.561904 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 02:31:03.561910 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Sep 13 02:31:03.561915 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Sep 13 02:31:03.561920 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 02:31:03.561926 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 02:31:03.561931 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 02:31:03.561937 kernel: NET: Registered PF_XDP protocol family
Sep 13 02:31:03.561980 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit]
Sep 13 02:31:03.562024 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit]
Sep 13 02:31:03.562068 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit]
Sep 13 02:31:03.562111 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 02:31:03.562159 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 13 02:31:03.562204 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 13 02:31:03.562249 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Sep 13 02:31:03.562294 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Sep 13 02:31:03.562337 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Sep 13 02:31:03.562383 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Sep 13 02:31:03.562428 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 02:31:03.562472 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Sep 13 02:31:03.562515 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Sep 13 02:31:03.562560 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 13 02:31:03.562603 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Sep 13 02:31:03.562647 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Sep 13 02:31:03.562692 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 13 02:31:03.562736 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Sep 13 02:31:03.562779 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Sep 13 02:31:03.562824 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Sep 13 02:31:03.562871 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Sep 13 02:31:03.562916 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.562960 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Sep 13 02:31:03.563002 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Sep 13 02:31:03.563047 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.563085 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Sep 13 02:31:03.563124 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 02:31:03.563164 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 02:31:03.563202 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 02:31:03.563240 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window]
Sep 13 02:31:03.563278 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Sep 13 02:31:03.563322 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff]
Sep 13 02:31:03.563366 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 02:31:03.563411 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Sep 13 02:31:03.563454 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff]
Sep 13 02:31:03.563497 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 13 02:31:03.563539 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff]
Sep 13 02:31:03.563584 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Sep 13 02:31:03.563625 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.563667 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Sep 13 02:31:03.563710 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff]
Sep 13 02:31:03.563718 kernel: PCI: CLS 64 bytes, default 64
Sep 13 02:31:03.563724 kernel: DMAR: No ATSR found
Sep 13 02:31:03.563729 kernel: DMAR: No SATC found
Sep 13 02:31:03.563734 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Sep 13 02:31:03.563740 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Sep 13 02:31:03.563745 kernel: DMAR: IOMMU feature nwfs inconsistent
Sep 13 02:31:03.563750 kernel: DMAR: IOMMU feature pasid inconsistent
Sep 13 02:31:03.563755 kernel: DMAR: IOMMU feature eafs inconsistent
Sep 13 02:31:03.563761 kernel: DMAR: IOMMU feature prs inconsistent
Sep 13 02:31:03.563767 kernel: DMAR: IOMMU feature nest inconsistent
Sep 13 02:31:03.563772 kernel: DMAR: IOMMU feature mts inconsistent
Sep 13 02:31:03.563777 kernel: DMAR: IOMMU feature sc_support inconsistent
Sep 13 02:31:03.563783 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Sep 13 02:31:03.563788 kernel: DMAR: dmar0: Using Queued invalidation
Sep 13 02:31:03.563793 kernel: DMAR: dmar1: Using Queued invalidation
Sep 13 02:31:03.563837 kernel: pci 0000:00:00.0: Adding to iommu group 0
Sep 13 02:31:03.563881 kernel: pci 0000:00:01.0: Adding to iommu group 1
Sep 13 02:31:03.563925 kernel: pci 0000:00:01.1: Adding to iommu group 1
Sep 13 02:31:03.563970 kernel: pci 0000:00:02.0: Adding to iommu group 2
Sep 13 02:31:03.564014 kernel: pci 0000:00:08.0: Adding to iommu group 3
Sep 13 02:31:03.564057 kernel: pci 0000:00:12.0: Adding to iommu group 4
Sep 13 02:31:03.564101 kernel: pci 0000:00:14.0: Adding to iommu group 5
Sep 13 02:31:03.564143 kernel: pci 0000:00:14.2: Adding to iommu group 5
Sep 13 02:31:03.564185 kernel: pci 0000:00:15.0: Adding to iommu group 6
Sep 13 02:31:03.564228 kernel: pci 0000:00:15.1: Adding to iommu group 6
Sep 13 02:31:03.564270 kernel: pci 0000:00:16.0: Adding to iommu group 7
Sep 13 02:31:03.564315 kernel: pci 0000:00:16.1: Adding to iommu group 7
Sep 13 02:31:03.564379 kernel: pci 0000:00:16.4: Adding to iommu group 7
Sep 13 02:31:03.564440 kernel: pci 0000:00:17.0: Adding to iommu group 8
Sep 13 02:31:03.564484 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Sep 13 02:31:03.564527 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Sep 13 02:31:03.564570 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Sep 13 02:31:03.564613 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Sep 13 02:31:03.564657 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Sep 13 02:31:03.564700 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Sep 13 02:31:03.564745 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Sep 13 02:31:03.564788 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Sep 13 02:31:03.564832 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Sep 13 02:31:03.564876 kernel: pci 0000:02:00.0: Adding to iommu group 1
Sep 13 02:31:03.564922 kernel: pci 0000:02:00.1: Adding to iommu group 1
Sep 13 02:31:03.564966 kernel: pci 0000:04:00.0: Adding to iommu group 16
Sep 13 02:31:03.565010 kernel: pci 0000:05:00.0: Adding to iommu group 17
Sep 13 02:31:03.565057 kernel: pci 0000:07:00.0: Adding to iommu group 18
Sep 13 02:31:03.565104 kernel: pci 0000:08:00.0: Adding to iommu group 18
Sep 13 02:31:03.565112 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Sep 13 02:31:03.565118 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 02:31:03.565123 kernel: software IO TLB: mapped [mem 0x0000000073fc7000-0x0000000077fc7000] (64MB)
Sep 13 02:31:03.565128 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Sep 13 02:31:03.565134 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Sep 13 02:31:03.565139 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Sep 13 02:31:03.565144 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Sep 13 02:31:03.565151 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Sep 13 02:31:03.565197 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Sep 13 02:31:03.565205 kernel: Initialise system trusted keyrings
Sep 13 02:31:03.565210 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Sep 13 02:31:03.565215 kernel: Key type asymmetric registered
Sep 13 02:31:03.565220 kernel: Asymmetric key parser 'x509' registered
Sep 13 02:31:03.565226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 02:31:03.565231 kernel: io scheduler mq-deadline registered
Sep 13 02:31:03.565237 kernel: io scheduler kyber registered
Sep 13 02:31:03.565243 kernel: io scheduler bfq registered
Sep 13 02:31:03.565286 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Sep 13 02:31:03.565330 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Sep 13 02:31:03.565394 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Sep 13 02:31:03.565458 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Sep 13 02:31:03.565502 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Sep 13 02:31:03.565545 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Sep 13 02:31:03.565591 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Sep 13 02:31:03.565640 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Sep 13 02:31:03.565649 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Sep 13 02:31:03.565654 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Sep 13 02:31:03.565659 kernel: pstore: Registered erst as persistent store backend
Sep 13 02:31:03.565665 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 02:31:03.565670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 02:31:03.565675 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 02:31:03.565682 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 13 02:31:03.565726 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Sep 13 02:31:03.565734 kernel: i8042: PNP: No PS/2 controller found.
Sep 13 02:31:03.565773 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Sep 13 02:31:03.565814 kernel: rtc_cmos rtc_cmos: registered as rtc0
Sep 13 02:31:03.565853 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-13T02:31:02 UTC (1757730662)
Sep 13 02:31:03.565893 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Sep 13 02:31:03.565902 kernel: intel_pstate: Intel P-state driver initializing
Sep 13 02:31:03.565907 kernel: intel_pstate: Disabling energy efficiency optimization
Sep 13 02:31:03.565913 kernel: intel_pstate: HWP enabled
Sep 13 02:31:03.565918 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Sep 13 02:31:03.565923 kernel: vesafb: scrolling: redraw
Sep 13 02:31:03.565928 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Sep 13 02:31:03.565934 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000f1b6811d, using 768k, total 768k
Sep 13 02:31:03.565939 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 02:31:03.565944 kernel: fb0: VESA VGA frame buffer device
Sep 13 02:31:03.565950 kernel: NET: Registered PF_INET6 protocol family
Sep 13 02:31:03.565956 kernel: Segment Routing with IPv6
Sep 13 02:31:03.565961 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 02:31:03.565966 kernel: NET: Registered PF_PACKET protocol family
Sep 13 02:31:03.565971 kernel: Key type dns_resolver registered
Sep 13 02:31:03.565976 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Sep 13 02:31:03.565982 kernel: microcode: Microcode Update Driver: v2.2.
Sep 13 02:31:03.565987 kernel: IPI shorthand broadcast: enabled
Sep 13 02:31:03.565992 kernel: sched_clock: Marking stable (1864220744, 1360190691)->(4671535685, -1447124250)
Sep 13 02:31:03.565998 kernel: registered taskstats version 1
Sep 13 02:31:03.566003 kernel: Loading compiled-in X.509 certificates
Sep 13 02:31:03.566009 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 02:31:03.566014 kernel: Key type .fscrypt registered
Sep 13 02:31:03.566019 kernel: Key type fscrypt-provisioning registered
Sep 13 02:31:03.566024 kernel: pstore: Using crash dump compression: deflate
Sep 13 02:31:03.566029 kernel: ima: Allocated hash algorithm: sha1
Sep 13 02:31:03.566035 kernel: ima: No architecture policies found
Sep 13 02:31:03.566040 kernel: clk: Disabling unused clocks
Sep 13 02:31:03.566046 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 02:31:03.566051 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 02:31:03.566056 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 02:31:03.566062 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 02:31:03.566067 kernel: Run /init as init process
Sep 13 02:31:03.566072 kernel: with arguments:
Sep 13 02:31:03.566077 kernel: /init
Sep 13 02:31:03.566083 kernel: with environment:
Sep 13 02:31:03.566088 kernel: HOME=/
Sep 13 02:31:03.566094 kernel: TERM=linux
Sep 13 02:31:03.566099 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 02:31:03.566105 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 02:31:03.566112 systemd[1]: Detected architecture x86-64.
Sep 13 02:31:03.566117 systemd[1]: Running in initrd. Sep 13 02:31:03.566123 systemd[1]: No hostname configured, using default hostname. Sep 13 02:31:03.566128 systemd[1]: Hostname set to <localhost>. Sep 13 02:31:03.566133 systemd[1]: Initializing machine ID from random generator. Sep 13 02:31:03.566140 systemd[1]: Queued start job for default target initrd.target. Sep 13 02:31:03.566145 systemd[1]: Started systemd-ask-password-console.path. Sep 13 02:31:03.566150 systemd[1]: Reached target cryptsetup.target. Sep 13 02:31:03.566155 systemd[1]: Reached target paths.target. Sep 13 02:31:03.566162 systemd[1]: Reached target slices.target. Sep 13 02:31:03.566169 systemd[1]: Reached target swap.target. Sep 13 02:31:03.566201 systemd[1]: Reached target timers.target. Sep 13 02:31:03.566209 systemd[1]: Listening on iscsid.socket. Sep 13 02:31:03.566218 systemd[1]: Listening on iscsiuio.socket. Sep 13 02:31:03.566224 systemd[1]: Listening on systemd-journald-audit.socket. Sep 13 02:31:03.566419 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 13 02:31:03.566427 systemd[1]: Listening on systemd-journald.socket. Sep 13 02:31:03.566438 kernel: tsc: Refined TSC clocksource calibration: 3408.091 MHz Sep 13 02:31:03.566446 systemd[1]: Listening on systemd-networkd.socket. Sep 13 02:31:03.566452 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x312029d2519, max_idle_ns: 440795330833 ns Sep 13 02:31:03.566458 kernel: clocksource: Switched to clocksource tsc Sep 13 02:31:03.566465 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 02:31:03.566484 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 02:31:03.566490 systemd[1]: Reached target sockets.target. Sep 13 02:31:03.566495 systemd[1]: Starting kmod-static-nodes.service... Sep 13 02:31:03.566526 systemd[1]: Finished network-cleanup.service. Sep 13 02:31:03.566534 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 02:31:03.566540 systemd[1]: Starting systemd-journald.service... Sep 13 02:31:03.566546 systemd[1]: Starting systemd-modules-load.service... Sep 13 02:31:03.566556 systemd-journald[269]: Journal started Sep 13 02:31:03.566631 systemd-journald[269]: Runtime Journal (/run/log/journal/2075fa8facc647d4bd9a7b87224ea791) is 8.0M, max 639.3M, 631.3M free. Sep 13 02:31:03.567525 systemd-modules-load[270]: Inserted module 'overlay' Sep 13 02:31:03.573000 audit: BPF prog-id=6 op=LOAD Sep 13 02:31:03.591400 kernel: audit: type=1334 audit(1757730663.573:2): prog-id=6 op=LOAD Sep 13 02:31:03.591431 systemd[1]: Starting systemd-resolved.service... Sep 13 02:31:03.642409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 02:31:03.642425 systemd[1]: Starting systemd-vconsole-setup.service... Sep 13 02:31:03.675392 kernel: Bridge firewalling registered Sep 13 02:31:03.675410 systemd[1]: Started systemd-journald.service. Sep 13 02:31:03.690314 systemd-modules-load[270]: Inserted module 'br_netfilter' Sep 13 02:31:03.739908 kernel: audit: type=1130 audit(1757730663.698:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Sep 13 02:31:03.692893 systemd-resolved[271]: Positive Trust Anchors: Sep 13 02:31:03.798753 kernel: SCSI subsystem initialized Sep 13 02:31:03.798763 kernel: audit: type=1130 audit(1757730663.751:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.692900 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 02:31:03.902696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 02:31:03.902711 kernel: audit: type=1130 audit(1757730663.824:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.902719 kernel: device-mapper: uevent: version 1.0.3 Sep 13 02:31:03.902726 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 02:31:03.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.692920 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 02:31:04.017565 kernel: audit: type=1130 audit(1757730663.928:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.694525 systemd-resolved[271]: Defaulting to hostname 'linux'. Sep 13 02:31:04.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:03.698583 systemd[1]: Started systemd-resolved.service. Sep 13 02:31:04.126920 kernel: audit: type=1130 audit(1757730664.025:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.126933 kernel: audit: type=1130 audit(1757730664.080:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 02:31:03.751536 systemd[1]: Finished kmod-static-nodes.service. Sep 13 02:31:03.824504 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 02:31:03.926864 systemd-modules-load[270]: Inserted module 'dm_multipath' Sep 13 02:31:03.928479 systemd[1]: Finished systemd-modules-load.service. Sep 13 02:31:04.025890 systemd[1]: Finished systemd-vconsole-setup.service. Sep 13 02:31:04.080657 systemd[1]: Reached target nss-lookup.target. Sep 13 02:31:04.135943 systemd[1]: Starting dracut-cmdline-ask.service... Sep 13 02:31:04.156885 systemd[1]: Starting systemd-sysctl.service... Sep 13 02:31:04.157190 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 13 02:31:04.160112 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 13 02:31:04.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.160896 systemd[1]: Finished systemd-sysctl.service. Sep 13 02:31:04.210566 kernel: audit: type=1130 audit(1757730664.159:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.222708 systemd[1]: Finished dracut-cmdline-ask.service. Sep 13 02:31:04.288464 kernel: audit: type=1130 audit(1757730664.222:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.279981 systemd[1]: Starting dracut-cmdline.service... Sep 13 02:31:04.302443 dracut-cmdline[295]: dracut-dracut-053 Sep 13 02:31:04.302443 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 13 02:31:04.302443 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 02:31:04.401635 kernel: Loading iSCSI transport class v2.0-870. Sep 13 02:31:04.401649 kernel: iscsi: registered transport (tcp) Sep 13 02:31:04.401659 kernel: iscsi: registered transport (qla4xxx) Sep 13 02:31:04.434126 kernel: QLogic iSCSI HBA Driver Sep 13 02:31:04.449957 systemd[1]: Finished dracut-cmdline.service. Sep 13 02:31:04.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:04.450494 systemd[1]: Starting dracut-pre-udev.service... 
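dracut echoes back the kernel command line it will honor, split across three journal records above. The parameter grammar is simple: whitespace-separated tokens, key=value pairs split at the first '=', bare tokens such as flatcar.autologin acting as flags, and repeated keys (console= appears twice) all preserved. A small parser sketch over an excerpt of that line:

def parse_cmdline(cmdline):
    """Split a kernel command line into (key, value) pairs; bare flags get None."""
    params = []
    for token in cmdline.split():
        key, sep, value = token.partition("=")   # split at the first '=' only
        params.append((key, value if sep else None))
    return params

# Excerpt of the command line from the log above.
line = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr root=LABEL=ROOT "
        "console=tty0 console=ttyS1,115200n8 flatcar.oem.id=packet flatcar.autologin")
for key, value in parse_cmdline(line):
    print(key, "=", value)   # 'root' keeps 'LABEL=ROOT' intact; repeated 'console' keys survive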
Sep 13 02:31:04.505433 kernel: raid6: avx2x4 gen() 48347 MB/s Sep 13 02:31:04.540429 kernel: raid6: avx2x4 xor() 22590 MB/s Sep 13 02:31:04.575392 kernel: raid6: avx2x2 gen() 53617 MB/s Sep 13 02:31:04.610392 kernel: raid6: avx2x2 xor() 32085 MB/s Sep 13 02:31:04.645433 kernel: raid6: avx2x1 gen() 45232 MB/s Sep 13 02:31:04.680436 kernel: raid6: avx2x1 xor() 27868 MB/s Sep 13 02:31:04.714430 kernel: raid6: sse2x4 gen() 21259 MB/s Sep 13 02:31:04.748392 kernel: raid6: sse2x4 xor() 11973 MB/s Sep 13 02:31:04.782394 kernel: raid6: sse2x2 gen() 21663 MB/s Sep 13 02:31:04.816392 kernel: raid6: sse2x2 xor() 13387 MB/s Sep 13 02:31:04.850392 kernel: raid6: sse2x1 gen() 18266 MB/s Sep 13 02:31:04.902377 kernel: raid6: sse2x1 xor() 8921 MB/s Sep 13 02:31:04.902392 kernel: raid6: using algorithm avx2x2 gen() 53617 MB/s Sep 13 02:31:04.902400 kernel: raid6: .... xor() 32085 MB/s, rmw enabled Sep 13 02:31:04.920629 kernel: raid6: using avx2x2 recovery algorithm Sep 13 02:31:04.967362 kernel: xor: automatically using best checksumming function avx Sep 13 02:31:05.047392 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 02:31:05.052349 systemd[1]: Finished dracut-pre-udev.service. Sep 13 02:31:05.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:05.052000 audit: BPF prog-id=7 op=LOAD Sep 13 02:31:05.052000 audit: BPF prog-id=8 op=LOAD Sep 13 02:31:05.053145 systemd[1]: Starting systemd-udevd.service... Sep 13 02:31:05.060571 systemd-udevd[475]: Using default interface naming scheme 'v252'. Sep 13 02:31:05.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:05.074660 systemd[1]: Started systemd-udevd.service. Sep 13 02:31:05.115488 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Sep 13 02:31:05.091974 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 02:31:05.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:05.119423 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 02:31:05.132451 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 02:31:05.186096 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 02:31:05.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:05.225368 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 02:31:05.261888 kernel: ACPI: bus type USB registered Sep 13 02:31:05.261922 kernel: usbcore: registered new interface driver usbfs Sep 13 02:31:05.261931 kernel: usbcore: registered new interface driver hub Sep 13 02:31:05.279736 kernel: usbcore: registered new device driver usb Sep 13 02:31:05.298365 kernel: libata version 3.00 loaded. Sep 13 02:31:05.323412 kernel: AVX2 version of gcm_enc/dec engaged. 
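The raid6 block above is the kernel benchmarking every available gen()/xor() implementation and keeping the fastest generator, avx2x2 at 53617 MB/s on this machine. The selection reduces to a max over measured throughput; a sketch using the numbers from this boot:

# gen() throughput measured during this boot, in MB/s (values copied from the log).
gen_results = {
    "avx2x4": 48347, "avx2x2": 53617, "avx2x1": 45232,
    "sse2x4": 21259, "sse2x2": 21663, "sse2x1": 18266,
}
best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 53617 MB/s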
Sep 13 02:31:05.323445 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Sep 13 02:31:06.260169 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 02:31:06.260512 kernel: AES CTR mode by8 optimization enabled Sep 13 02:31:06.260549 kernel: ahci 0000:00:17.0: version 3.0 Sep 13 02:31:06.260817 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Sep 13 02:31:06.261040 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 13 02:31:06.261274 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 13 02:31:06.261319 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 13 02:31:06.261368 kernel: scsi host0: ahci Sep 13 02:31:06.261641 kernel: scsi host1: ahci Sep 13 02:31:06.261880 kernel: scsi host2: ahci Sep 13 02:31:06.262120 kernel: igb 0000:04:00.0: added PHC on eth0 Sep 13 02:31:06.262401 kernel: scsi host3: ahci Sep 13 02:31:06.262658 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 02:31:06.262892 kernel: scsi host4: ahci Sep 13 02:31:06.263129 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:30 Sep 13 02:31:06.263401 kernel: scsi host5: ahci Sep 13 02:31:06.263641 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Sep 13 02:31:06.263866 kernel: scsi host6: ahci Sep 13 02:31:06.264097 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 13 02:31:06.264368 kernel: scsi host7: ahci Sep 13 02:31:06.264637 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 02:31:06.264861 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 13 02:31:06.265083 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 13 02:31:06.265330 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 02:31:06.265569 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 13 02:31:06.265787 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 13 02:31:06.266002 kernel: hub 1-0:1.0: USB hub found Sep 13 02:31:06.266299 kernel: hub 1-0:1.0: 16 ports detected Sep 13 02:31:06.266564 kernel: hub 2-0:1.0: USB hub found Sep 13 02:31:06.266815 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 02:31:06.267043 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Sep 13 02:31:06.267087 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 02:31:06.267333 kernel: hub 2-0:1.0: 10 ports detected Sep 13 02:31:06.267592 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Sep 13 02:31:06.267634 kernel: igb 0000:05:00.0: added PHC on eth1 Sep 13 02:31:06.267866 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 02:31:06.268098 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1c:31 Sep 13 02:31:06.268338 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Sep 13 02:31:06.268581 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 13 02:31:06.268806 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 13 02:31:06.269241 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Sep 13 02:31:06.269284 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Sep 13 02:31:06.269312 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Sep 13 02:31:06.269338 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Sep 13 02:31:06.269379 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Sep 13 02:31:06.269408 kernel: hub 1-14:1.0: USB hub found Sep 13 02:31:06.269707 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Sep 13 02:31:06.269740 kernel: hub 1-14:1.0: 4 ports detected Sep 13 02:31:06.269989 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Sep 13 02:31:06.270253 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Sep 13 02:31:06.270508 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 02:31:06.270741 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Sep 13 02:31:06.861677 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 02:31:06.861747 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 02:31:06.861755 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 13 02:31:06.861862 kernel: ata8: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861872 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861879 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861886 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 02:31:06.861892 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861899 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861906 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 02:31:06.861912 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 02:31:06.861919 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 02:31:06.861925 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 02:31:06.861933 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 02:31:06.861940 kernel: ata1.00: Features: NCQ-prio Sep 13 02:31:06.861946 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 02:31:06.861952 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 02:31:06.862010 kernel: ata2.00: Features: NCQ-prio Sep 13 02:31:06.862017 kernel: ata1.00: configured for UDMA/133 Sep 13 02:31:06.862024 kernel: port_module: 9 callbacks suppressed Sep 13 02:31:06.862031 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Sep 13 02:31:06.862086 kernel: ata2.00: configured for UDMA/133 Sep 13 02:31:06.862094 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 02:31:07.170544 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 13 02:31:07.170609 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 02:31:07.266206 kernel: usbcore: registered new interface driver usbhid Sep 13 02:31:07.266236 kernel: usbhid: USB HID core driver Sep 13 02:31:07.266248 kernel: input: HID 0557:2419 as 
/devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 13 02:31:07.266284 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:31:07.266297 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.266309 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 02:31:07.266422 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 02:31:07.266518 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 13 02:31:07.266607 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 02:31:07.266688 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 13 02:31:07.266765 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 13 02:31:07.266774 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 13 02:31:07.266852 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 13 02:31:07.266915 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 13 02:31:07.266983 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 02:31:07.267047 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 13 02:31:07.267107 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 13 02:31:07.267172 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 02:31:07.267237 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 02:31:07.267304 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.267312 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:31:07.267318 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 02:31:07.267325 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 02:31:07.267331 kernel: GPT:9289727 != 937703087 Sep 13 02:31:07.267338 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 02:31:07.267408 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 02:31:07.267417 kernel: GPT:9289727 != 937703087 Sep 13 02:31:07.267424 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 02:31:07.267430 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:31:07.267437 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.267443 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 13 02:31:07.286394 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Sep 13 02:31:07.316798 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 02:31:07.352418 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (542) Sep 13 02:31:07.352433 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Sep 13 02:31:07.336881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 02:31:07.362599 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 02:31:07.375466 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 02:31:07.407201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 02:31:07.429582 systemd[1]: Starting disk-uuid.service... 
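The GPT warnings on sdb mean the primary header's alternate-LBA field (9289727) no longer points at the true last LBA of the 937703088-sector disk: the image was written for a smaller disk, so the backup header is not at the end (disk-uuid repairs this shortly after). A hedged sketch of the same check, reading the primary header at LBA 1 per the UEFI GPT layout; the device path is an example and requires read access:

import struct

SECTOR = 512

def check_gpt_alternate(dev_path):
    """Compare the primary GPT header's AlternateLBA field with the real last LBA."""
    with open(dev_path, "rb") as dev:
        dev.seek(0, 2)                        # seek to end to learn the device size
        last_lba = dev.tell() // SECTOR - 1
        dev.seek(1 * SECTOR)                  # primary GPT header lives at LBA 1
        header = dev.read(92)
    assert header[:8] == b"EFI PART", "no GPT signature"
    (alternate_lba,) = struct.unpack_from("<Q", header, 32)   # AlternateLBA at offset 32
    if alternate_lba != last_lba:
        print(f"GPT:{alternate_lba} != {last_lba} (backup header not at end of disk)")

# check_gpt_alternate("/dev/sdb")   # example device from the log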
Sep 13 02:31:07.482860 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.482872 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:31:07.482881 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.482933 disk-uuid[692]: Primary Header is updated. Sep 13 02:31:07.482933 disk-uuid[692]: Secondary Entries is updated. Sep 13 02:31:07.482933 disk-uuid[692]: Secondary Header is updated. Sep 13 02:31:07.562444 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:31:07.562457 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:07.562465 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:31:08.531736 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 02:31:08.550388 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 02:31:08.550420 disk-uuid[693]: The operation has completed successfully. Sep 13 02:31:08.591144 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 02:31:08.684859 kernel: audit: type=1130 audit(1757730668.598:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.684877 kernel: audit: type=1131 audit(1757730668.598:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.591204 systemd[1]: Finished disk-uuid.service. Sep 13 02:31:08.714396 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 02:31:08.599044 systemd[1]: Starting verity-setup.service... Sep 13 02:31:08.781158 systemd[1]: Found device dev-mapper-usr.device. Sep 13 02:31:08.792624 systemd[1]: Mounting sysusr-usr.mount... Sep 13 02:31:08.804051 systemd[1]: Finished verity-setup.service. Sep 13 02:31:08.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.869366 kernel: audit: type=1130 audit(1757730668.818:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:08.924359 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 13 02:31:08.924433 systemd[1]: Mounted sysusr-usr.mount. Sep 13 02:31:08.932650 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 02:31:08.933044 systemd[1]: Starting ignition-setup.service... Sep 13 02:31:09.025476 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:31:09.025491 kernel: BTRFS info (device sdb6): using free space tree Sep 13 02:31:09.025499 kernel: BTRFS info (device sdb6): has skinny extents Sep 13 02:31:09.025506 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 02:31:08.940845 systemd[1]: Starting parse-ip-for-networkd.service... 
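verity-setup brings /dev/mapper/usr up as a dm-verity device: reads of the /usr partition are verified against a SHA-256 hash tree (the log shows the sha256-avx2 implementation being selected) whose root must match the verity.usrhash value from the kernel command line. A deliberately simplified illustration of the idea; real dm-verity adds a salt, a superblock, and a multi-level tree:

import hashlib

BLOCK = 4096

def toy_verity_root(data):
    """Toy one-level hash tree: hash each 4K block, then hash the concatenated digests."""
    leaves = b"".join(
        hashlib.sha256(data[i:i + BLOCK].ljust(BLOCK, b"\0")).digest()
        for i in range(0, len(data), BLOCK)
    )
    return hashlib.sha256(leaves).hexdigest()

image = b"stand-in for the /usr partition image" * 1000
print(toy_verity_root(image))   # flipping any byte of `image` changes this root hash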
Sep 13 02:31:09.090608 kernel: audit: type=1130 audit(1757730669.034:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.017997 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 02:31:09.153465 kernel: audit: type=1130 audit(1757730669.099:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.035312 systemd[1]: Finished ignition-setup.service. Sep 13 02:31:09.162000 audit: BPF prog-id=9 op=LOAD Sep 13 02:31:09.100523 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 02:31:09.198407 kernel: audit: type=1334 audit(1757730669.162:24): prog-id=9 op=LOAD Sep 13 02:31:09.163413 systemd[1]: Starting systemd-networkd.service... Sep 13 02:31:09.261567 kernel: audit: type=1130 audit(1757730669.206:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.223975 ignition[869]: Ignition 2.14.0 Sep 13 02:31:09.198013 systemd-networkd[880]: lo: Link UP Sep 13 02:31:09.223989 ignition[869]: Stage: fetch-offline Sep 13 02:31:09.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.198015 systemd-networkd[880]: lo: Gained carrier Sep 13 02:31:09.410600 kernel: audit: type=1130 audit(1757730669.288:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.410615 kernel: audit: type=1130 audit(1757730669.343:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.410623 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 02:31:09.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:09.224057 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:09.444493 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Sep 13 02:31:09.198360 systemd-networkd[880]: Enumeration completed Sep 13 02:31:09.224097 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:09.198430 systemd[1]: Started systemd-networkd.service. Sep 13 02:31:09.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.228404 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:09.199161 systemd-networkd[880]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:31:09.487567 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 02:31:09.487567 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 13 02:31:09.487567 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 13 02:31:09.487567 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 02:31:09.487567 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 02:31:09.487567 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 02:31:09.487567 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 02:31:09.665548 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 13 02:31:09.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:09.228556 ignition[869]: parsed url from cmdline: "" Sep 13 02:31:09.206460 systemd[1]: Reached target network.target. Sep 13 02:31:09.228569 ignition[869]: no config URL provided Sep 13 02:31:09.239577 unknown[869]: fetched base config from "system" Sep 13 02:31:09.228585 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 02:31:09.239581 unknown[869]: fetched user config from "system" Sep 13 02:31:09.228648 ignition[869]: parsing config with SHA512: 559fa2c5c7f6caa72b41a86d0211748ea6d738384d779b342cab87b987f2eba9d4dc8415c53cbf95d6d67a404769de580fb1f6e928fc781dc1e9da36c97a332d Sep 13 02:31:09.270667 systemd[1]: Starting iscsiuio.service... Sep 13 02:31:09.239868 ignition[869]: fetch-offline: fetch-offline passed Sep 13 02:31:09.281743 systemd[1]: Started iscsiuio.service. Sep 13 02:31:09.239871 ignition[869]: POST message to Packet Timeline Sep 13 02:31:09.288804 systemd[1]: Finished ignition-fetch-offline.service.
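The iscsid warnings are harmless here, since no software iSCSI initiator is configured on this box; the name format they ask for is an IQN, iqn.yyyy-mm.<reversed domain name>[:identifier]. A small validator sketch for candidate InitiatorName values (the regex is an approximation of the format, not iscsid's own check):

import re

# Approximate IQN shape from the iscsid message: iqn.yyyy-mm.<reversed domain>[:identifier]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:[\w.\-]+)?$")

for name in ("iqn.2001-04.com.redhat:fc6",      # example from the log
             "iqn.2025-09.net.packet:ci-node",  # hypothetical initiator name
             "not-an-iqn"):
    print(name, "->", bool(IQN_RE.match(name)))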
Sep 13 02:31:09.239875 ignition[869]: POST Status error: resource requires networking Sep 13 02:31:09.343618 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 02:31:09.239910 ignition[869]: Ignition finished successfully Sep 13 02:31:09.344066 systemd[1]: Starting ignition-kargs.service... Sep 13 02:31:09.414919 ignition[891]: Ignition 2.14.0 Sep 13 02:31:09.413435 systemd-networkd[880]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:31:09.414923 ignition[891]: Stage: kargs Sep 13 02:31:09.424923 systemd[1]: Starting iscsid.service... Sep 13 02:31:09.414982 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:09.451620 systemd[1]: Started iscsid.service. Sep 13 02:31:09.414992 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:09.459004 systemd[1]: Starting dracut-initqueue.service... Sep 13 02:31:09.416306 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:09.477576 systemd[1]: Finished dracut-initqueue.service. Sep 13 02:31:09.417607 ignition[891]: kargs: kargs passed Sep 13 02:31:09.506762 systemd[1]: Reached target remote-fs-pre.target. Sep 13 02:31:09.417610 ignition[891]: POST message to Packet Timeline Sep 13 02:31:09.526564 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 02:31:09.417620 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:31:09.561648 systemd[1]: Reached target remote-fs.target. Sep 13 02:31:09.420692 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37455->[::1]:53: read: connection refused Sep 13 02:31:09.583172 systemd[1]: Starting dracut-pre-mount.service... Sep 13 02:31:09.621126 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Sep 13 02:31:09.612709 systemd[1]: Finished dracut-pre-mount.service. Sep 13 02:31:09.621806 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38002->[::1]:53: read: connection refused Sep 13 02:31:09.649567 systemd-networkd[880]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 02:31:09.678347 systemd-networkd[880]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 13 02:31:09.710950 systemd-networkd[880]: enp2s0f1np1: Link UP Sep 13 02:31:09.711436 systemd-networkd[880]: enp2s0f1np1: Gained carrier Sep 13 02:31:09.726915 systemd-networkd[880]: enp2s0f0np0: Link UP Sep 13 02:31:09.727324 systemd-networkd[880]: eno2: Link UP Sep 13 02:31:09.727732 systemd-networkd[880]: eno1: Link UP Sep 13 02:31:10.021930 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Sep 13 02:31:10.023277 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50213->[::1]:53: read: connection refused Sep 13 02:31:10.443873 systemd-networkd[880]: enp2s0f0np0: Gained carrier Sep 13 02:31:10.452626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Sep 13 02:31:10.486693 systemd-networkd[880]: enp2s0f0np0: DHCPv4 address 145.40.90.231/31, gateway 145.40.90.230 acquired from 145.40.83.140 Sep 13 02:31:10.823692 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Sep 13 02:31:10.825033 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44543->[::1]:53: read: connection refused Sep 13 02:31:10.870823 systemd-networkd[880]: enp2s0f1np1: Gained IPv6LL Sep 13 02:31:12.150811 systemd-networkd[880]: enp2s0f0np0: Gained IPv6LL Sep 13 02:31:12.426468 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Sep 13 02:31:12.427827 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36116->[::1]:53: read: connection refused Sep 13 02:31:15.631098 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Sep 13 02:31:16.668065 ignition[891]: GET result: OK Sep 13 02:31:19.033790 ignition[891]: Ignition finished successfully Sep 13 02:31:19.038613 systemd[1]: Finished ignition-kargs.service. Sep 13 02:31:19.120876 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 13 02:31:19.120905 kernel: audit: type=1130 audit(1757730679.049:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:19.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:19.057673 ignition[918]: Ignition 2.14.0 Sep 13 02:31:19.051733 systemd[1]: Starting ignition-disks.service... Sep 13 02:31:19.057676 ignition[918]: Stage: disks Sep 13 02:31:19.057733 ignition[918]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:19.057743 ignition[918]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:19.059211 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:19.061029 ignition[918]: disks: disks passed Sep 13 02:31:19.061032 ignition[918]: POST message to Packet Timeline Sep 13 02:31:19.061043 ignition[918]: GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:31:20.897693 ignition[918]: GET result: OK Sep 13 02:31:21.307086 ignition[918]: Ignition finished successfully Sep 13 02:31:21.310176 systemd[1]: Finished ignition-disks.service. 
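Attempts #1 through #5 against metadata.packet.net fail because DNS queries to [::1]:53 are refused until the NICs come up and enp2s0f0np0 gets its DHCPv4 lease; Ignition simply retries with growing delays until attempt #6 returns OK. A sketch of that retry-with-backoff pattern (the URL is from the log; the timing constants are assumptions, not Ignition's actual schedule):

import time
import urllib.request

def fetch_with_backoff(url, max_attempts=6):
    """GET url, retrying with exponential backoff, mirroring the attempts in the log."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:          # urllib.error.URLError subclasses OSError
            print(f"GET error: {err}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay *= 2                  # assumed growth factor, not Ignition's exact schedule

# fetch_with_backoff("https://metadata.packet.net/metadata")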
Sep 13 02:31:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.323960 systemd[1]: Reached target initrd-root-device.target. Sep 13 02:31:21.400620 kernel: audit: type=1130 audit(1757730681.323:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.385577 systemd[1]: Reached target local-fs-pre.target. Sep 13 02:31:21.385613 systemd[1]: Reached target local-fs.target. Sep 13 02:31:21.409609 systemd[1]: Reached target sysinit.target. Sep 13 02:31:21.417583 systemd[1]: Reached target basic.target. Sep 13 02:31:21.438427 systemd[1]: Starting systemd-fsck-root.service... Sep 13 02:31:21.458757 systemd-fsck[935]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 02:31:21.470799 systemd[1]: Finished systemd-fsck-root.service. Sep 13 02:31:21.559379 kernel: audit: type=1130 audit(1757730681.479:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.559409 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 13 02:31:21.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.484656 systemd[1]: Mounting sysroot.mount... Sep 13 02:31:21.567023 systemd[1]: Mounted sysroot.mount. Sep 13 02:31:21.580622 systemd[1]: Reached target initrd-root-fs.target. Sep 13 02:31:21.588310 systemd[1]: Mounting sysroot-usr.mount... Sep 13 02:31:21.609347 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 02:31:21.625144 systemd[1]: Starting flatcar-static-network.service... Sep 13 02:31:21.641607 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 02:31:21.641690 systemd[1]: Reached target ignition-diskful.target. Sep 13 02:31:21.660584 systemd[1]: Mounted sysroot-usr.mount. Sep 13 02:31:21.685168 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 02:31:21.757477 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (948) Sep 13 02:31:21.757493 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:31:21.698233 systemd[1]: Starting initrd-setup-root.service... Sep 13 02:31:21.803771 kernel: BTRFS info (device sdb6): using free space tree Sep 13 02:31:21.803785 kernel: BTRFS info (device sdb6): has skinny extents Sep 13 02:31:21.783437 systemd[1]: Finished initrd-setup-root.service. Sep 13 02:31:21.892498 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 02:31:21.892511 kernel: audit: type=1130 audit(1757730681.839:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:21.892549 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 02:31:21.840668 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 02:31:21.926585 coreos-metadata[943]: Sep 13 02:31:21.782 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:31:21.945589 coreos-metadata[942]: Sep 13 02:31:21.782 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:31:21.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.985488 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Sep 13 02:31:22.016571 kernel: audit: type=1130 audit(1757730681.953:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:21.901959 systemd[1]: Starting ignition-mount.service... Sep 13 02:31:22.023570 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 02:31:21.918925 systemd[1]: Starting sysroot-boot.service... Sep 13 02:31:22.040551 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 02:31:21.933806 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 13 02:31:22.060637 ignition[1019]: INFO : Ignition 2.14.0 Sep 13 02:31:22.060637 ignition[1019]: INFO : Stage: mount Sep 13 02:31:22.060637 ignition[1019]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:22.060637 ignition[1019]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:22.060637 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:22.060637 ignition[1019]: INFO : mount: mount passed Sep 13 02:31:22.060637 ignition[1019]: INFO : POST message to Packet Timeline Sep 13 02:31:22.060637 ignition[1019]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:31:21.933851 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 13 02:31:21.936622 systemd[1]: Finished sysroot-boot.service. Sep 13 02:31:22.834493 coreos-metadata[943]: Sep 13 02:31:22.834 INFO Fetch successful Sep 13 02:31:22.914511 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 13 02:31:22.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:22.914567 systemd[1]: Finished flatcar-static-network.service. Sep 13 02:31:23.045589 kernel: audit: type=1130 audit(1757730682.922:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:23.045604 kernel: audit: type=1131 audit(1757730682.922:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:22.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:23.045631 ignition[1019]: INFO : GET result: OK Sep 13 02:31:23.671735 coreos-metadata[942]: Sep 13 02:31:23.671 INFO Fetch successful Sep 13 02:31:23.683708 ignition[1019]: INFO : Ignition finished successfully Sep 13 02:31:23.684326 systemd[1]: Finished ignition-mount.service. Sep 13 02:31:23.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:23.758638 coreos-metadata[942]: Sep 13 02:31:23.703 INFO wrote hostname ci-3510.3.8-n-6378d470a1 to /sysroot/etc/hostname Sep 13 02:31:23.825812 kernel: audit: type=1130 audit(1757730683.701:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:23.825827 kernel: audit: type=1130 audit(1757730683.768:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:23.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:23.703720 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 02:31:23.768998 systemd[1]: Starting ignition-files.service... Sep 13 02:31:23.835170 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 02:31:23.962431 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1035) Sep 13 02:31:23.962446 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 02:31:23.962454 kernel: BTRFS info (device sdb6): using free space tree Sep 13 02:31:23.962461 kernel: BTRFS info (device sdb6): has skinny extents Sep 13 02:31:23.962467 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 02:31:23.973697 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
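flatcar-metadata-hostname has fetched the Packet metadata and persisted the assigned hostname into the target root at /sysroot/etc/hostname. Reduced to its effect, that step looks roughly like the sketch below; the atomic temp-file-plus-rename is this sketch's own choice, not necessarily Flatcar's:

import os
import tempfile
from pathlib import Path

def write_hostname(root, hostname):
    """Persist a metadata-provided hostname into the target root, as an atomic write."""
    etc = Path(root, "etc")
    fd, tmp = tempfile.mkstemp(dir=etc)
    with os.fdopen(fd, "w") as f:
        f.write(hostname + "\n")
    os.replace(tmp, etc / "hostname")   # rename is atomic within one filesystem

# write_hostname("/sysroot", "ci-3510.3.8-n-6378d470a1")   # values from the log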
Sep 13 02:31:23.991533 ignition[1054]: INFO : Ignition 2.14.0 Sep 13 02:31:23.991533 ignition[1054]: INFO : Stage: files Sep 13 02:31:23.991533 ignition[1054]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:23.991533 ignition[1054]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:23.991533 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:23.991533 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Sep 13 02:31:23.991533 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 02:31:23.991533 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 02:31:23.994658 unknown[1054]: wrote ssh authorized keys file for user: core Sep 13 02:31:24.093610 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 02:31:24.093610 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 02:31:24.093610 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 02:31:24.093610 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 02:31:24.093610 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 13 02:31:24.172354 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 02:31:25.517487 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 13 02:31:25.534639 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 02:31:25.534639 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 02:31:25.825758 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 02:31:25.941518 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Sep 13 02:31:25.956646 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1468464343" Sep 13 02:31:25.953673 systemd[1]: mnt-oem1468464343.mount: Deactivated successfully. Sep 13 02:31:26.218657 ignition[1054]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1468464343": device or resource busy Sep 13 02:31:26.218657 ignition[1054]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1468464343", trying btrfs: device or resource busy Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1468464343" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1468464343" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1468464343" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1468464343" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 02:31:26.218657 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 13 02:31:26.374470 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Sep 13 02:31:27.023094 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 13 02:31:27.023094 ignition[1054]: INFO : files: op(10): [started] 
processing unit "coreos-metadata-sshkeys@.service" Sep 13 02:31:27.023094 ignition[1054]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 02:31:27.023094 ignition[1054]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Sep 13 02:31:27.023094 ignition[1054]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Sep 13 02:31:27.023094 ignition[1054]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 02:31:27.105679 ignition[1054]: INFO : files: files passed Sep 13 02:31:27.105679 ignition[1054]: INFO : POST message to Packet Timeline Sep 13 02:31:27.105679 ignition[1054]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:31:28.071804 ignition[1054]: INFO : GET result: OK Sep 13 02:31:29.682928 ignition[1054]: INFO : Ignition finished successfully Sep 13 02:31:29.686247 systemd[1]: Finished ignition-files.service. Sep 13 02:31:29.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.706468 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 02:31:29.777614 kernel: audit: type=1130 audit(1757730689.700:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.767613 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 13 02:31:29.801553 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 02:31:29.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 13 02:31:29.768001 systemd[1]: Starting ignition-quench.service... Sep 13 02:31:29.992667 kernel: audit: type=1130 audit(1757730689.811:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.992684 kernel: audit: type=1130 audit(1757730689.879:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.992692 kernel: audit: type=1131 audit(1757730689.879:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.784809 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 02:31:29.811837 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 02:31:29.811908 systemd[1]: Finished ignition-quench.service. Sep 13 02:31:30.146831 kernel: audit: type=1130 audit(1757730690.032:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.146845 kernel: audit: type=1131 audit(1757730690.032:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:29.899575 systemd[1]: Reached target ignition-complete.target. Sep 13 02:31:30.002042 systemd[1]: Starting initrd-parse-etc.service... Sep 13 02:31:30.014800 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 02:31:30.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.014842 systemd[1]: Finished initrd-parse-etc.service. Sep 13 02:31:30.268587 kernel: audit: type=1130 audit(1757730690.194:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.052739 systemd[1]: Reached target initrd-fs.target. Sep 13 02:31:30.155577 systemd[1]: Reached target initrd.target. Sep 13 02:31:30.155710 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 02:31:30.156058 systemd[1]: Starting dracut-pre-pivot.service... 
Sep 13 02:31:30.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.176789 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 02:31:30.406580 kernel: audit: type=1131 audit(1757730690.328:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.195273 systemd[1]: Starting initrd-cleanup.service... Sep 13 02:31:30.264512 systemd[1]: Stopped target nss-lookup.target. Sep 13 02:31:30.278700 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 02:31:30.294760 systemd[1]: Stopped target timers.target. Sep 13 02:31:30.308700 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 02:31:30.308812 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 02:31:30.328982 systemd[1]: Stopped target initrd.target. Sep 13 02:31:30.398704 systemd[1]: Stopped target basic.target. Sep 13 02:31:30.406754 systemd[1]: Stopped target ignition-complete.target. Sep 13 02:31:30.429722 systemd[1]: Stopped target ignition-diskful.target. Sep 13 02:31:30.447720 systemd[1]: Stopped target initrd-root-device.target. Sep 13 02:31:30.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.464809 systemd[1]: Stopped target remote-fs.target. Sep 13 02:31:30.662604 kernel: audit: type=1131 audit(1757730690.577:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.481894 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 02:31:30.733417 kernel: audit: type=1131 audit(1757730690.672:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.498086 systemd[1]: Stopped target sysinit.target. Sep 13 02:31:30.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.513075 systemd[1]: Stopped target local-fs.target. Sep 13 02:31:30.529077 systemd[1]: Stopped target local-fs-pre.target. Sep 13 02:31:30.546069 systemd[1]: Stopped target swap.target. Sep 13 02:31:30.560973 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 02:31:30.561347 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 02:31:30.578302 systemd[1]: Stopped target cryptsetup.target. Sep 13 02:31:30.655747 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 02:31:30.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.655827 systemd[1]: Stopped dracut-initqueue.service. 
Sep 13 02:31:30.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.672809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 02:31:30.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.672882 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 02:31:30.909577 ignition[1105]: INFO : Ignition 2.14.0 Sep 13 02:31:30.909577 ignition[1105]: INFO : Stage: umount Sep 13 02:31:30.909577 ignition[1105]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 02:31:30.909577 ignition[1105]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 13 02:31:30.909577 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 02:31:30.909577 ignition[1105]: INFO : umount: umount passed Sep 13 02:31:30.909577 ignition[1105]: INFO : POST message to Packet Timeline Sep 13 02:31:30.909577 ignition[1105]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 13 02:31:30.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:31.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:31.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:31.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:30.741822 systemd[1]: Stopped target paths.target. Sep 13 02:31:30.755739 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 02:31:30.759594 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 02:31:30.771741 systemd[1]: Stopped target slices.target. Sep 13 02:31:30.785747 systemd[1]: Stopped target sockets.target. Sep 13 02:31:30.801771 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 02:31:30.801867 systemd[1]: Closed iscsid.socket. Sep 13 02:31:30.817896 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 02:31:30.818041 systemd[1]: Closed iscsiuio.socket. Sep 13 02:31:30.833163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 02:31:30.833558 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Sep 13 02:31:30.852170 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 02:31:30.852544 systemd[1]: Stopped ignition-files.service. Sep 13 02:31:30.868163 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 02:31:30.868551 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 02:31:30.886455 systemd[1]: Stopping ignition-mount.service... Sep 13 02:31:30.901470 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 02:31:30.901639 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 02:31:30.918360 systemd[1]: Stopping sysroot-boot.service... Sep 13 02:31:30.932525 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 02:31:30.932740 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 02:31:30.958148 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 02:31:30.958487 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 02:31:30.994691 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 02:31:30.996611 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 02:31:30.996943 systemd[1]: Stopped sysroot-boot.service. Sep 13 02:31:31.016519 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 02:31:31.016835 systemd[1]: Finished initrd-cleanup.service. Sep 13 02:31:31.965704 ignition[1105]: INFO : GET result: OK Sep 13 02:31:32.395586 ignition[1105]: INFO : Ignition finished successfully Sep 13 02:31:32.397065 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 02:31:32.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.397190 systemd[1]: Stopped ignition-mount.service. Sep 13 02:31:32.413006 systemd[1]: Stopped target network.target. Sep 13 02:31:32.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.428583 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 02:31:32.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.428815 systemd[1]: Stopped ignition-disks.service. Sep 13 02:31:32.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.443768 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 02:31:32.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.443896 systemd[1]: Stopped ignition-kargs.service. Sep 13 02:31:32.459888 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 02:31:32.460043 systemd[1]: Stopped ignition-setup.service. Sep 13 02:31:32.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.476882 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 13 02:31:32.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.558000 audit: BPF prog-id=6 op=UNLOAD Sep 13 02:31:32.477036 systemd[1]: Stopped initrd-setup-root.service. Sep 13 02:31:32.493180 systemd[1]: Stopping systemd-networkd.service... Sep 13 02:31:32.504492 systemd-networkd[880]: enp2s0f0np0: DHCPv6 lease lost Sep 13 02:31:32.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.508901 systemd[1]: Stopping systemd-resolved.service... Sep 13 02:31:32.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.512556 systemd-networkd[880]: enp2s0f1np1: DHCPv6 lease lost Sep 13 02:31:32.630000 audit: BPF prog-id=9 op=UNLOAD Sep 13 02:31:32.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.524309 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 02:31:32.524608 systemd[1]: Stopped systemd-resolved.service. Sep 13 02:31:32.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.542030 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 02:31:32.542352 systemd[1]: Stopped systemd-networkd.service. Sep 13 02:31:32.557024 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 02:31:32.557112 systemd[1]: Closed systemd-networkd.socket. Sep 13 02:31:32.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.577113 systemd[1]: Stopping network-cleanup.service... Sep 13 02:31:32.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.590566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 02:31:32.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.590759 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 02:31:32.606814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 02:31:32.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.606964 systemd[1]: Stopped systemd-sysctl.service. Sep 13 02:31:32.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:32.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.622998 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 02:31:32.623135 systemd[1]: Stopped systemd-modules-load.service. Sep 13 02:31:32.640082 systemd[1]: Stopping systemd-udevd.service... Sep 13 02:31:32.658398 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 02:31:32.659868 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 02:31:32.660237 systemd[1]: Stopped systemd-udevd.service. Sep 13 02:31:32.674087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 02:31:32.674202 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 02:31:32.687702 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 02:31:32.687807 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 02:31:32.704591 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 02:31:32.704614 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 02:31:32.726625 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 02:31:32.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:32.726676 systemd[1]: Stopped dracut-cmdline.service. Sep 13 02:31:32.741506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 02:31:32.741553 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 13 02:31:32.757660 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 02:31:32.997516 iscsid[902]: iscsid shutting down. Sep 13 02:31:32.774442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 02:31:32.774474 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 02:31:32.790744 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 02:31:32.790811 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 02:31:32.913400 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 02:31:32.913648 systemd[1]: Stopped network-cleanup.service. Sep 13 02:31:32.923880 systemd[1]: Reached target initrd-switch-root.target. Sep 13 02:31:32.942541 systemd[1]: Starting initrd-switch-root.service... Sep 13 02:31:32.953381 systemd[1]: Switching root. Sep 13 02:31:32.997747 systemd-journald[269]: Journal stopped Sep 13 02:31:36.937008 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Sep 13 02:31:36.937025 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 02:31:36.937033 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 13 02:31:36.937038 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 02:31:36.937044 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 02:31:36.937049 kernel: SELinux: policy capability open_perms=1 Sep 13 02:31:36.937055 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 02:31:36.937062 kernel: SELinux: policy capability always_check_network=0 Sep 13 02:31:36.937067 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 02:31:36.937073 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 02:31:36.937078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 02:31:36.937083 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 02:31:36.937089 systemd[1]: Successfully loaded SELinux policy in 321.843ms. Sep 13 02:31:36.937096 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.345ms. Sep 13 02:31:36.937104 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 02:31:36.937111 systemd[1]: Detected architecture x86-64. Sep 13 02:31:36.937117 systemd[1]: Detected first boot. Sep 13 02:31:36.937123 systemd[1]: Hostname set to . Sep 13 02:31:36.937130 systemd[1]: Initializing machine ID from random generator. Sep 13 02:31:36.937137 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 13 02:31:36.937144 systemd[1]: Populated /etc with preset unit settings. Sep 13 02:31:36.937150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:31:36.937157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 02:31:36.937164 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:31:36.937170 kernel: kauditd_printk_skb: 47 callbacks suppressed Sep 13 02:31:36.937176 kernel: audit: type=1334 audit(1757730695.394:90): prog-id=12 op=LOAD Sep 13 02:31:36.937183 kernel: audit: type=1334 audit(1757730695.395:91): prog-id=3 op=UNLOAD Sep 13 02:31:36.937189 kernel: audit: type=1334 audit(1757730695.462:92): prog-id=13 op=LOAD Sep 13 02:31:36.937194 kernel: audit: type=1334 audit(1757730695.484:93): prog-id=14 op=LOAD Sep 13 02:31:36.937200 kernel: audit: type=1334 audit(1757730695.484:94): prog-id=4 op=UNLOAD Sep 13 02:31:36.937206 kernel: audit: type=1334 audit(1757730695.484:95): prog-id=5 op=UNLOAD Sep 13 02:31:36.937211 kernel: audit: type=1334 audit(1757730695.549:96): prog-id=15 op=LOAD Sep 13 02:31:36.937217 kernel: audit: type=1334 audit(1757730695.549:97): prog-id=12 op=UNLOAD Sep 13 02:31:36.937223 kernel: audit: type=1334 audit(1757730695.590:98): prog-id=16 op=LOAD Sep 13 02:31:36.937229 kernel: audit: type=1334 audit(1757730695.609:99): prog-id=17 op=LOAD Sep 13 02:31:36.937235 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 02:31:36.937241 systemd[1]: Stopped iscsiuio.service. 
Sep 13 02:31:36.937247 systemd[1]: iscsid.service: Deactivated successfully. Sep 13 02:31:36.937254 systemd[1]: Stopped iscsid.service. Sep 13 02:31:36.937260 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 13 02:31:36.937268 systemd[1]: Stopped initrd-switch-root.service. Sep 13 02:31:36.937276 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 02:31:36.937282 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 13 02:31:36.937289 systemd[1]: Created slice system-addon\x2drun.slice. Sep 13 02:31:36.937295 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 13 02:31:36.937302 systemd[1]: Created slice system-getty.slice. Sep 13 02:31:36.937309 systemd[1]: Created slice system-modprobe.slice. Sep 13 02:31:36.937316 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 13 02:31:36.937322 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 13 02:31:36.937329 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 13 02:31:36.937336 systemd[1]: Created slice user.slice. Sep 13 02:31:36.937343 systemd[1]: Started systemd-ask-password-console.path. Sep 13 02:31:36.937349 systemd[1]: Started systemd-ask-password-wall.path. Sep 13 02:31:36.937358 systemd[1]: Set up automount boot.automount. Sep 13 02:31:36.937365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 13 02:31:36.937372 systemd[1]: Stopped target initrd-switch-root.target. Sep 13 02:31:36.937400 systemd[1]: Stopped target initrd-fs.target. Sep 13 02:31:36.937407 systemd[1]: Stopped target initrd-root-fs.target. Sep 13 02:31:36.937429 systemd[1]: Reached target integritysetup.target. Sep 13 02:31:36.937436 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 02:31:36.937443 systemd[1]: Reached target remote-fs.target. Sep 13 02:31:36.937449 systemd[1]: Reached target slices.target. Sep 13 02:31:36.937456 systemd[1]: Reached target swap.target. Sep 13 02:31:36.937462 systemd[1]: Reached target torcx.target. Sep 13 02:31:36.937470 systemd[1]: Reached target veritysetup.target. Sep 13 02:31:36.937477 systemd[1]: Listening on systemd-coredump.socket. Sep 13 02:31:36.937484 systemd[1]: Listening on systemd-initctl.socket. Sep 13 02:31:36.937491 systemd[1]: Listening on systemd-networkd.socket. Sep 13 02:31:36.937497 systemd[1]: Listening on systemd-udevd-control.socket. Sep 13 02:31:36.937504 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 13 02:31:36.937511 systemd[1]: Listening on systemd-userdbd.socket. Sep 13 02:31:36.937518 systemd[1]: Mounting dev-hugepages.mount... Sep 13 02:31:36.937525 systemd[1]: Mounting dev-mqueue.mount... Sep 13 02:31:36.937531 systemd[1]: Mounting media.mount... Sep 13 02:31:36.937538 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:31:36.937545 systemd[1]: Mounting sys-kernel-debug.mount... Sep 13 02:31:36.937551 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 13 02:31:36.937558 systemd[1]: Mounting tmp.mount... Sep 13 02:31:36.937564 systemd[1]: Starting flatcar-tmpfiles.service... Sep 13 02:31:36.937571 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 02:31:36.937579 systemd[1]: Starting kmod-static-nodes.service... Sep 13 02:31:36.937585 systemd[1]: Starting modprobe@configfs.service... Sep 13 02:31:36.937592 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 02:31:36.937599 systemd[1]: Starting modprobe@drm.service... 
Sep 13 02:31:36.937605 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 02:31:36.937612 systemd[1]: Starting modprobe@fuse.service... Sep 13 02:31:36.937619 kernel: fuse: init (API version 7.34) Sep 13 02:31:36.937625 systemd[1]: Starting modprobe@loop.service... Sep 13 02:31:36.937632 kernel: loop: module loaded Sep 13 02:31:36.937639 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 02:31:36.937646 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 02:31:36.937653 systemd[1]: Stopped systemd-fsck-root.service. Sep 13 02:31:36.937659 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 02:31:36.937666 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 02:31:36.937673 systemd[1]: Stopped systemd-journald.service. Sep 13 02:31:36.937679 systemd[1]: Starting systemd-journald.service... Sep 13 02:31:36.937686 systemd[1]: Starting systemd-modules-load.service... Sep 13 02:31:36.937695 systemd-journald[1258]: Journal started Sep 13 02:31:36.937722 systemd-journald[1258]: Runtime Journal (/run/log/journal/408bc111c1074ad88d12dc40ba3e1d28) is 8.0M, max 639.3M, 631.3M free. Sep 13 02:31:33.466000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 02:31:33.743000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 02:31:33.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 02:31:33.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 13 02:31:33.745000 audit: BPF prog-id=10 op=LOAD Sep 13 02:31:33.745000 audit: BPF prog-id=10 op=UNLOAD Sep 13 02:31:33.745000 audit: BPF prog-id=11 op=LOAD Sep 13 02:31:33.745000 audit: BPF prog-id=11 op=UNLOAD Sep 13 02:31:33.810000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 13 02:31:33.810000 audit[1147]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001278e4 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:31:33.810000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 02:31:33.836000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 13 02:31:33.836000 audit[1147]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001279c9 a2=1ed a3=0 items=2 ppid=1130 
pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:31:33.836000 audit: CWD cwd="/" Sep 13 02:31:33.836000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:33.836000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:33.836000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 13 02:31:35.394000 audit: BPF prog-id=12 op=LOAD Sep 13 02:31:35.395000 audit: BPF prog-id=3 op=UNLOAD Sep 13 02:31:35.462000 audit: BPF prog-id=13 op=LOAD Sep 13 02:31:35.484000 audit: BPF prog-id=14 op=LOAD Sep 13 02:31:35.484000 audit: BPF prog-id=4 op=UNLOAD Sep 13 02:31:35.484000 audit: BPF prog-id=5 op=UNLOAD Sep 13 02:31:35.549000 audit: BPF prog-id=15 op=LOAD Sep 13 02:31:35.549000 audit: BPF prog-id=12 op=UNLOAD Sep 13 02:31:35.590000 audit: BPF prog-id=16 op=LOAD Sep 13 02:31:35.609000 audit: BPF prog-id=17 op=LOAD Sep 13 02:31:35.609000 audit: BPF prog-id=13 op=UNLOAD Sep 13 02:31:35.609000 audit: BPF prog-id=14 op=UNLOAD Sep 13 02:31:35.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:35.667000 audit: BPF prog-id=15 op=UNLOAD Sep 13 02:31:35.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:35.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:35.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:35.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:36.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:36.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:36.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:36.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:36.909000 audit: BPF prog-id=18 op=LOAD Sep 13 02:31:36.910000 audit: BPF prog-id=19 op=LOAD Sep 13 02:31:36.910000 audit: BPF prog-id=20 op=LOAD Sep 13 02:31:36.910000 audit: BPF prog-id=16 op=UNLOAD Sep 13 02:31:36.910000 audit: BPF prog-id=17 op=UNLOAD Sep 13 02:31:36.934000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 13 02:31:36.934000 audit[1258]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc40722f50 a2=4000 a3=7ffc40722fec items=0 ppid=1 pid=1258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:31:36.934000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 13 02:31:35.393810 systemd[1]: Queued start job for default target multi-user.target. Sep 13 02:31:33.808147 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:31:35.393817 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Sep 13 02:31:33.808617 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 02:31:35.610877 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 13 02:31:33.808630 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 02:31:33.808649 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 13 02:31:33.808656 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 13 02:31:33.808677 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 13 02:31:33.808685 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 13 02:31:33.808811 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 13 02:31:33.808842 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 13 02:31:33.808856 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 13 02:31:33.809811 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 13 02:31:33.809832 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 13 02:31:33.809843 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 13 02:31:33.809852 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 13 02:31:33.809862 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 13 02:31:33.809869 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 13 02:31:35.013643 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 02:31:35.013789 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init 
/bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 02:31:35.013845 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 02:31:35.013939 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 13 02:31:35.013968 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 13 02:31:35.014003 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2025-09-13T02:31:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 13 02:31:36.968528 systemd[1]: Starting systemd-network-generator.service... Sep 13 02:31:36.990409 systemd[1]: Starting systemd-remount-fs.service... Sep 13 02:31:37.012420 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 02:31:37.044947 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 02:31:37.044968 systemd[1]: Stopped verity-setup.service. Sep 13 02:31:37.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.079404 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:31:37.094550 systemd[1]: Started systemd-journald.service. Sep 13 02:31:37.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.101910 systemd[1]: Mounted dev-hugepages.mount. Sep 13 02:31:37.108631 systemd[1]: Mounted dev-mqueue.mount. Sep 13 02:31:37.115627 systemd[1]: Mounted media.mount. Sep 13 02:31:37.122633 systemd[1]: Mounted sys-kernel-debug.mount. Sep 13 02:31:37.131610 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 13 02:31:37.140608 systemd[1]: Mounted tmp.mount. Sep 13 02:31:37.147674 systemd[1]: Finished flatcar-tmpfiles.service. Sep 13 02:31:37.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.155719 systemd[1]: Finished kmod-static-nodes.service. Sep 13 02:31:37.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:37.164716 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 02:31:37.164825 systemd[1]: Finished modprobe@configfs.service. Sep 13 02:31:37.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.173793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 02:31:37.173929 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 02:31:37.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.182918 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 02:31:37.183108 systemd[1]: Finished modprobe@drm.service. Sep 13 02:31:37.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.192201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 02:31:37.192526 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 02:31:37.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.201288 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 02:31:37.201634 systemd[1]: Finished modprobe@fuse.service. Sep 13 02:31:37.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.211250 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 02:31:37.211623 systemd[1]: Finished modprobe@loop.service. Sep 13 02:31:37.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 02:31:37.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.220287 systemd[1]: Finished systemd-modules-load.service. Sep 13 02:31:37.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.229337 systemd[1]: Finished systemd-network-generator.service. Sep 13 02:31:37.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.239239 systemd[1]: Finished systemd-remount-fs.service. Sep 13 02:31:37.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.249220 systemd[1]: Finished systemd-udev-trigger.service. Sep 13 02:31:37.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.258863 systemd[1]: Reached target network-pre.target. Sep 13 02:31:37.271176 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 13 02:31:37.282055 systemd[1]: Mounting sys-kernel-config.mount... Sep 13 02:31:37.289630 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 02:31:37.292918 systemd[1]: Starting systemd-hwdb-update.service... Sep 13 02:31:37.300725 systemd[1]: Starting systemd-journal-flush.service... Sep 13 02:31:37.303812 systemd-journald[1258]: Time spent on flushing to /var/log/journal/408bc111c1074ad88d12dc40ba3e1d28 is 14.981ms for 1620 entries. Sep 13 02:31:37.303812 systemd-journald[1258]: System Journal (/var/log/journal/408bc111c1074ad88d12dc40ba3e1d28) is 8.0M, max 195.6M, 187.6M free. Sep 13 02:31:37.346767 systemd-journald[1258]: Received client request to flush runtime journal. Sep 13 02:31:37.316497 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 02:31:37.316982 systemd[1]: Starting systemd-random-seed.service... Sep 13 02:31:37.329502 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 02:31:37.330002 systemd[1]: Starting systemd-sysctl.service... Sep 13 02:31:37.336970 systemd[1]: Starting systemd-sysusers.service... Sep 13 02:31:37.343955 systemd[1]: Starting systemd-udev-settle.service... Sep 13 02:31:37.351530 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 13 02:31:37.359555 systemd[1]: Mounted sys-kernel-config.mount. Sep 13 02:31:37.368573 systemd[1]: Finished systemd-journal-flush.service. Sep 13 02:31:37.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.376602 systemd[1]: Finished systemd-random-seed.service. 
Sep 13 02:31:37.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.385558 systemd[1]: Finished systemd-sysctl.service. Sep 13 02:31:37.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.393574 systemd[1]: Finished systemd-sysusers.service. Sep 13 02:31:37.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.402548 systemd[1]: Reached target first-boot-complete.target. Sep 13 02:31:37.410698 udevadm[1274]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 13 02:31:37.599769 systemd[1]: Finished systemd-hwdb-update.service. Sep 13 02:31:37.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.609000 audit: BPF prog-id=21 op=LOAD Sep 13 02:31:37.609000 audit: BPF prog-id=22 op=LOAD Sep 13 02:31:37.609000 audit: BPF prog-id=7 op=UNLOAD Sep 13 02:31:37.609000 audit: BPF prog-id=8 op=UNLOAD Sep 13 02:31:37.610594 systemd[1]: Starting systemd-udevd.service... Sep 13 02:31:37.621946 systemd-udevd[1275]: Using default interface naming scheme 'v252'. Sep 13 02:31:37.637628 systemd[1]: Started systemd-udevd.service. Sep 13 02:31:37.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.648644 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Sep 13 02:31:37.649000 audit: BPF prog-id=23 op=LOAD Sep 13 02:31:37.649900 systemd[1]: Starting systemd-networkd.service... Sep 13 02:31:37.670000 audit: BPF prog-id=24 op=LOAD Sep 13 02:31:37.670000 audit: BPF prog-id=25 op=LOAD Sep 13 02:31:37.684369 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 13 02:31:37.684485 kernel: ACPI: button: Sleep Button [SLPB] Sep 13 02:31:37.684501 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 02:31:37.700000 audit: BPF prog-id=26 op=LOAD Sep 13 02:31:37.714909 kernel: IPMI message handler: version 39.2 Sep 13 02:31:37.715369 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 02:31:37.715472 systemd[1]: Starting systemd-userdbd.service... Sep 13 02:31:37.730392 kernel: ACPI: button: Power Button [PWRF] Sep 13 02:31:37.760268 kernel: ipmi device interface Sep 13 02:31:37.763767 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 02:31:37.782369 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 13 02:31:37.816472 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 13 02:31:37.816625 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 13 02:31:37.822407 systemd[1]: Started systemd-userdbd.service. 
Sep 13 02:31:37.679000 audit[1288]: AVC avc: denied { confidentiality } for pid=1288 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 13 02:31:37.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:37.679000 audit[1288]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5556e508fa20 a1=4d9cc a2=7f1a3c737bc5 a3=5 items=42 ppid=1275 pid=1288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:31:37.679000 audit: CWD cwd="/" Sep 13 02:31:37.679000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=1 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=2 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=3 name=(null) inode=24065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=4 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=5 name=(null) inode=24066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=6 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=7 name=(null) inode=24067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=8 name=(null) inode=24067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=9 name=(null) inode=24068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=10 name=(null) inode=24067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=11 name=(null) inode=24069 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=12 name=(null) inode=24067 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=13 name=(null) inode=24070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=14 name=(null) inode=24067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=15 name=(null) inode=24071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=16 name=(null) inode=24067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=17 name=(null) inode=24072 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=18 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=19 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=20 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=21 name=(null) inode=24074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=22 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=23 name=(null) inode=24075 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=24 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=25 name=(null) inode=24076 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=26 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=27 name=(null) inode=24077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=28 name=(null) inode=24073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=29 name=(null) inode=24078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=30 name=(null) inode=24064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=31 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=32 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=33 name=(null) inode=24080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=34 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=35 name=(null) inode=24081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=36 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=37 name=(null) inode=24082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=38 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=39 name=(null) inode=24083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=40 name=(null) inode=24079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PATH item=41 name=(null) inode=24084 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 02:31:37.679000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 02:31:37.873018 kernel: ipmi_si: IPMI System Interface driver Sep 13 02:31:37.873045 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 13 02:31:37.905969 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 13 02:31:37.905986 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 13 02:31:37.906001 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 13 02:31:38.025039 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 13 02:31:38.025175 kernel: iTCO_vendor_support: vendor-support=0 
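
The PATH item=0 through item=41 run above is a single SYSCALL audit record: a udev worker (pid 1288) creating files under tracefs, with one PATH entry per parent or child touched, which is why PARENT and CREATE nametypes alternate. Each entry is plain key=value text, so it folds into a dict mechanically; a minimal sketch:

    import shlex

    def parse_audit_fields(record: str) -> dict:
        """Split an audit record body like
        'item=3 name=(null) inode=24065 dev=00:0b mode=0100640 ...'
        into a key -> raw-value dict."""
        fields = {}
        for token in shlex.split(record):
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value
        return fields

    item = parse_audit_fields(
        "item=3 name=(null) inode=24065 dev=00:0b mode=0100640 "
        "obj=system_u:object_r:tracefs_t:s0 nametype=CREATE")
    print(item["inode"], item["nametype"])   # 24065 CREATE
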
Sep 13 02:31:38.025197 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 13 02:31:38.025324 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 13 02:31:38.025440 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 13 02:31:38.025535 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 13 02:31:38.025557 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 13 02:31:38.063724 systemd-networkd[1319]: bond0: netdev ready Sep 13 02:31:38.066562 systemd-networkd[1319]: lo: Link UP Sep 13 02:31:38.066565 systemd-networkd[1319]: lo: Gained carrier Sep 13 02:31:38.067109 systemd-networkd[1319]: Enumeration completed Sep 13 02:31:38.067215 systemd[1]: Started systemd-networkd.service. Sep 13 02:31:38.067367 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Sep 13 02:31:38.067514 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 13 02:31:38.067526 systemd-networkd[1319]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 13 02:31:38.068269 systemd-networkd[1319]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a0:c1.network. Sep 13 02:31:38.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:38.151186 kernel: intel_rapl_common: Found RAPL domain package Sep 13 02:31:38.151219 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Sep 13 02:31:38.151316 kernel: intel_rapl_common: Found RAPL domain core Sep 13 02:31:38.151330 kernel: intel_rapl_common: Found RAPL domain uncore Sep 13 02:31:38.151342 kernel: intel_rapl_common: Found RAPL domain dram Sep 13 02:31:38.223361 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 02:31:38.223475 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 13 02:31:38.239364 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Sep 13 02:31:38.258009 systemd-networkd[1319]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:7e:a0:c0.network. Sep 13 02:31:38.281368 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 13 02:31:38.281436 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 13 02:31:38.426424 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 13 02:31:38.426508 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Sep 13 02:31:38.464405 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Sep 13 02:31:38.484404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Sep 13 02:31:38.494740 systemd-networkd[1319]: bond0: Link UP Sep 13 02:31:38.495000 systemd-networkd[1319]: enp2s0f1np1: Link UP Sep 13 02:31:38.495159 systemd-networkd[1319]: enp2s0f1np1: Gained carrier Sep 13 02:31:38.496380 systemd-networkd[1319]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:7e:a0:c0.network. Sep 13 02:31:38.532507 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 13 02:31:38.532532 kernel: bond0: active interface up! 
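
systemd-networkd is assembling an 802.3ad bond here from the two Mellanox ports, driven by /etc/systemd/network/05-bond0.network plus one per-MAC file for each slave. The actual file contents are not captured in this log; a plausible shape, with the file names taken from the messages above and every option value an assumption, would be:

    # bond0.netdev (hypothetical; something must declare the bond device)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 10-0c:42:a1:7e:a0:c0.network (hypothetical body; file name from the log)
    [Match]
    MACAddress=0c:42:a1:7e:a0:c0

    [Network]
    Bond=bond0

The "No 802.3ad response from the link partner" warnings are consistent with LACP being configured on the host before the switch side starts answering.
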
Sep 13 02:31:38.559387 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 13 02:31:38.572730 systemd[1]: Finished systemd-udev-settle.service. Sep 13 02:31:38.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:38.582086 systemd[1]: Starting lvm2-activation-early.service... Sep 13 02:31:38.597712 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 02:31:38.632929 systemd[1]: Finished lvm2-activation-early.service. Sep 13 02:31:38.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:38.642527 systemd[1]: Reached target cryptsetup.target. Sep 13 02:31:38.651659 systemd[1]: Starting lvm2-activation.service... Sep 13 02:31:38.657304 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 02:31:38.688378 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.711366 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.733362 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.755364 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.755972 systemd[1]: Finished lvm2-activation.service. Sep 13 02:31:38.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:38.773475 systemd[1]: Reached target local-fs-pre.target. Sep 13 02:31:38.778363 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.795412 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 02:31:38.795429 systemd[1]: Reached target local-fs.target. Sep 13 02:31:38.799361 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.816449 systemd[1]: Reached target machines.target. Sep 13 02:31:38.820362 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.839003 systemd[1]: Starting ldconfig.service... Sep 13 02:31:38.840362 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.855987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 02:31:38.856008 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:38.856656 systemd[1]: Starting systemd-boot-update.service... Sep 13 02:31:38.861394 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.877960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
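
The two lvm2 warnings ("Failed to connect to lvmetad. Falling back to device scanning.") are benign on a host that never starts the lvmetad daemon: activation still works via a direct device scan. On lvm2 versions that still know the option, the usual way to silence them is in /etc/lvm/lvm.conf (a sketch, not read from this host):

    global {
        use_lvmetad = 0   # scan devices directly instead of asking lvmetad
    }
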
Sep 13 02:31:38.882359 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.902367 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.918211 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 02:31:38.923238 systemd[1]: Starting systemd-sysext.service... Sep 13 02:31:38.923395 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.923483 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Sep 13 02:31:38.924106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 02:31:38.943398 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.951195 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 02:31:38.962417 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.982364 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:38.982814 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 02:31:38.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:38.982983 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 02:31:38.983065 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 02:31:39.002404 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.002435 kernel: loop0: detected capacity change from 0 to 229808 Sep 13 02:31:39.016425 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.050410 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.068403 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.087362 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.087767 systemd-networkd[1319]: enp2s0f0np0: Link UP Sep 13 02:31:39.087960 systemd-networkd[1319]: bond0: Gained carrier Sep 13 02:31:39.088088 systemd-networkd[1319]: enp2s0f0np0: Gained carrier Sep 13 02:31:39.101371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 02:31:39.101427 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Sep 13 02:31:39.101583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 02:31:39.131830 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Sep 13 02:31:39.132307 systemd[1]: Finished systemd-machine-id-commit.service. Sep 13 02:31:39.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.145512 systemd-fsck[1393]: fsck.fat 4.2 (2021-01-31) Sep 13 02:31:39.145512 systemd-fsck[1393]: /dev/sdb1: 790 files, 120761/258078 clusters Sep 13 02:31:39.146231 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Sep 13 02:31:39.147729 systemd-networkd[1319]: enp2s0f1np1: Link DOWN Sep 13 02:31:39.147732 systemd-networkd[1319]: enp2s0f1np1: Lost carrier Sep 13 02:31:39.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.158565 systemd[1]: Mounting boot.mount... Sep 13 02:31:39.178363 kernel: loop1: detected capacity change from 0 to 229808 Sep 13 02:31:39.179675 systemd[1]: Mounted boot.mount. Sep 13 02:31:39.195574 (sd-sysext)[1396]: Using extensions 'kubernetes'. Sep 13 02:31:39.195775 (sd-sysext)[1396]: Merged extensions into '/usr'. Sep 13 02:31:39.200730 systemd[1]: Finished systemd-boot-update.service. Sep 13 02:31:39.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.214354 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:31:39.215222 systemd[1]: Mounting usr-share-oem.mount... Sep 13 02:31:39.221590 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.222277 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 02:31:39.230033 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 02:31:39.236990 systemd[1]: Starting modprobe@loop.service... Sep 13 02:31:39.243447 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.243533 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:39.243618 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:31:39.245841 systemd[1]: Mounted usr-share-oem.mount. Sep 13 02:31:39.252613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 02:31:39.252693 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 02:31:39.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.260614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 02:31:39.260675 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 02:31:39.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.268576 systemd[1]: modprobe@loop.service: Deactivated successfully. 
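
"(sd-sysext)[1396]: Using extensions 'kubernetes'" means systemd-sysext found a system extension image and merged it over /usr, which is how this Flatcar image gains its kubernetes payload. In general (this is the sysext contract, not something visible in the log), the image must carry an extension-release file whose ID matches the host's os-release, roughly:

    /var/lib/extensions/kubernetes.raw (or a plain directory; path hypothetical)
    └── usr/
        ├── lib/extension-release.d/extension-release.kubernetes
        │       ID=flatcar        # must match the host, or ID=_any
        └── bin/...               # files overlaid onto /usr by the merge
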
Sep 13 02:31:39.268637 systemd[1]: Finished modprobe@loop.service. Sep 13 02:31:39.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.276640 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 02:31:39.276705 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.277238 systemd[1]: Finished systemd-sysext.service. Sep 13 02:31:39.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.286029 systemd[1]: Starting ensure-sysext.service... Sep 13 02:31:39.287399 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 02:31:39.299362 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Sep 13 02:31:39.311906 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 13 02:31:39.317361 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Sep 13 02:31:39.317896 systemd-networkd[1319]: enp2s0f1np1: Link UP Sep 13 02:31:39.318082 systemd-networkd[1319]: enp2s0f1np1: Gained carrier Sep 13 02:31:39.325597 systemd[1]: Finished ldconfig.service. Sep 13 02:31:39.326513 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 13 02:31:39.329971 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 02:31:39.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.333526 systemd[1]: Reloading. Sep 13 02:31:39.333657 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 02:31:39.357366 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Sep 13 02:31:39.360112 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-09-13T02:31:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:31:39.360128 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2025-09-13T02:31:39Z" level=info msg="torcx already run" Sep 13 02:31:39.374365 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 13 02:31:39.410904 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:31:39.410911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
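
The locksmithd.service warnings above flag two cgroup-v1 directives that systemd still translates but has deprecated. A drop-in migrating them to the unified-hierarchy equivalents would look like this (values are placeholders, not taken from the shipped unit):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf (sketch)
    [Service]
    CPUWeight=100     # replaces CPUShares=
    MemoryMax=128M    # replaces MemoryLimit=
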
Sep 13 02:31:39.421887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:31:39.463000 audit: BPF prog-id=27 op=LOAD Sep 13 02:31:39.463000 audit: BPF prog-id=28 op=LOAD Sep 13 02:31:39.463000 audit: BPF prog-id=21 op=UNLOAD Sep 13 02:31:39.463000 audit: BPF prog-id=22 op=UNLOAD Sep 13 02:31:39.464000 audit: BPF prog-id=29 op=LOAD Sep 13 02:31:39.464000 audit: BPF prog-id=23 op=UNLOAD Sep 13 02:31:39.464000 audit: BPF prog-id=30 op=LOAD Sep 13 02:31:39.464000 audit: BPF prog-id=24 op=UNLOAD Sep 13 02:31:39.465000 audit: BPF prog-id=31 op=LOAD Sep 13 02:31:39.465000 audit: BPF prog-id=32 op=LOAD Sep 13 02:31:39.465000 audit: BPF prog-id=25 op=UNLOAD Sep 13 02:31:39.465000 audit: BPF prog-id=26 op=UNLOAD Sep 13 02:31:39.465000 audit: BPF prog-id=33 op=LOAD Sep 13 02:31:39.465000 audit: BPF prog-id=18 op=UNLOAD Sep 13 02:31:39.465000 audit: BPF prog-id=34 op=LOAD Sep 13 02:31:39.465000 audit: BPF prog-id=35 op=LOAD Sep 13 02:31:39.465000 audit: BPF prog-id=19 op=UNLOAD Sep 13 02:31:39.465000 audit: BPF prog-id=20 op=UNLOAD Sep 13 02:31:39.467585 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 13 02:31:39.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 02:31:39.477240 systemd[1]: Starting audit-rules.service... Sep 13 02:31:39.484959 systemd[1]: Starting clean-ca-certificates.service... Sep 13 02:31:39.494025 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 13 02:31:39.494000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 13 02:31:39.494000 audit[1502]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdb5454820 a2=420 a3=0 items=0 ppid=1485 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 13 02:31:39.494000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 13 02:31:39.495441 augenrules[1502]: No rules Sep 13 02:31:39.503403 systemd[1]: Starting systemd-resolved.service... Sep 13 02:31:39.511270 systemd[1]: Starting systemd-timesyncd.service... Sep 13 02:31:39.518943 systemd[1]: Starting systemd-update-utmp.service... Sep 13 02:31:39.525802 systemd[1]: Finished audit-rules.service. Sep 13 02:31:39.532523 systemd[1]: Finished clean-ca-certificates.service. Sep 13 02:31:39.540513 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 13 02:31:39.553936 systemd[1]: Finished systemd-update-utmp.service. Sep 13 02:31:39.562981 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.563600 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 02:31:39.571984 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 02:31:39.578967 systemd[1]: Starting modprobe@loop.service... Sep 13 02:31:39.581433 systemd-resolved[1507]: Positive Trust Anchors: Sep 13 02:31:39.581441 systemd-resolved[1507]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 02:31:39.581460 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 13 02:31:39.585428 systemd-resolved[1507]: Using system hostname 'ci-3510.3.8-n-6378d470a1'. Sep 13 02:31:39.586422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.586496 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:39.587193 systemd[1]: Starting systemd-update-done.service... Sep 13 02:31:39.594390 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 02:31:39.594898 systemd[1]: Started systemd-timesyncd.service. Sep 13 02:31:39.604599 systemd[1]: Started systemd-resolved.service. Sep 13 02:31:39.613594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 02:31:39.613665 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 02:31:39.621644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 02:31:39.621707 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 02:31:39.629604 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 02:31:39.629665 systemd[1]: Finished modprobe@loop.service. Sep 13 02:31:39.637580 systemd[1]: Finished systemd-update-done.service. Sep 13 02:31:39.647208 systemd[1]: Reached target network.target. Sep 13 02:31:39.655442 systemd[1]: Reached target nss-lookup.target. Sep 13 02:31:39.663439 systemd[1]: Reached target time-set.target. Sep 13 02:31:39.671541 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.672177 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 02:31:39.678926 systemd[1]: Starting modprobe@drm.service... Sep 13 02:31:39.685912 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 02:31:39.692910 systemd[1]: Starting modprobe@loop.service... Sep 13 02:31:39.699427 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.699490 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:39.700091 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 13 02:31:39.708408 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 02:31:39.709089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 02:31:39.709157 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 02:31:39.717637 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 02:31:39.717701 systemd[1]: Finished modprobe@drm.service. Sep 13 02:31:39.725590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
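
Two records above decode nicely by hand. The resolved trust anchor is the standard IANA root DS record (key tag 20326, RSA/SHA-256), so DNSSEC validation is anchored at the root. And the auditctl record further up hex-encodes its command line in PROCTITLE with NUL-separated argv entries, so it can be recovered directly:

    hex_title = ("2F7362696E2F617564697463746C002D52002F6574632F"
                 "61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(hex_title).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
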
Sep 13 02:31:39.725654 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 02:31:39.733582 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 02:31:39.733644 systemd[1]: Finished modprobe@loop.service. Sep 13 02:31:39.741727 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 02:31:39.741764 systemd[1]: Reached target sysinit.target. Sep 13 02:31:39.749488 systemd[1]: Started motdgen.path. Sep 13 02:31:39.756464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 13 02:31:39.766532 systemd[1]: Started logrotate.timer. Sep 13 02:31:39.773485 systemd[1]: Started mdadm.timer. Sep 13 02:31:39.780452 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 13 02:31:39.788444 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 02:31:39.788461 systemd[1]: Reached target paths.target. Sep 13 02:31:39.795407 systemd[1]: Reached target timers.target. Sep 13 02:31:39.802515 systemd[1]: Listening on dbus.socket. Sep 13 02:31:39.809855 systemd[1]: Starting docker.socket... Sep 13 02:31:39.817673 systemd[1]: Listening on sshd.socket. Sep 13 02:31:39.824448 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:39.824472 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.824774 systemd[1]: Finished ensure-sysext.service. Sep 13 02:31:39.832514 systemd[1]: Listening on docker.socket. Sep 13 02:31:39.839864 systemd[1]: Reached target sockets.target. Sep 13 02:31:39.848548 systemd[1]: Reached target basic.target. Sep 13 02:31:39.855482 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.855498 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 13 02:31:39.856016 systemd[1]: Starting containerd.service... Sep 13 02:31:39.863019 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 13 02:31:39.873127 systemd[1]: Starting coreos-metadata.service... Sep 13 02:31:39.879964 systemd[1]: Starting dbus.service... Sep 13 02:31:39.886990 systemd[1]: Starting enable-oem-cloudinit.service... Sep 13 02:31:39.891797 jq[1531]: false Sep 13 02:31:39.895021 systemd[1]: Starting extend-filesystems.service... Sep 13 02:31:39.896669 coreos-metadata[1524]: Sep 13 02:31:39.896 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:31:39.898573 dbus-daemon[1530]: [system] SELinux support is enabled Sep 13 02:31:39.901473 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 13 02:31:39.902054 systemd[1]: Starting motdgen.service... 
Sep 13 02:31:39.903426 extend-filesystems[1532]: Found loop1 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sda Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb1 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb2 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb3 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found usr Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb4 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb6 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb7 Sep 13 02:31:39.922480 extend-filesystems[1532]: Found sdb9 Sep 13 02:31:39.922480 extend-filesystems[1532]: Checking size of /dev/sdb9 Sep 13 02:31:39.922480 extend-filesystems[1532]: Resized partition /dev/sdb9 Sep 13 02:31:40.074478 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 13 02:31:40.074585 coreos-metadata[1527]: Sep 13 02:31:39.904 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 02:31:39.909149 systemd[1]: Starting prepare-helm.service... Sep 13 02:31:40.074758 extend-filesystems[1548]: resize2fs 1.46.5 (30-Dec-2021) Sep 13 02:31:39.936123 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 13 02:31:40.085658 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 02:31:39.955002 systemd[1]: Starting sshd-keygen.service... Sep 13 02:31:39.969751 systemd[1]: Starting systemd-logind.service... Sep 13 02:31:39.982405 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 02:31:39.982953 systemd[1]: Starting tcsd.service... Sep 13 02:31:40.091002 update_engine[1561]: I0913 02:31:40.040186 1561 main.cc:92] Flatcar Update Engine starting Sep 13 02:31:40.091002 update_engine[1561]: I0913 02:31:40.043587 1561 update_check_scheduler.cc:74] Next update check in 9m56s Sep 13 02:31:39.992305 systemd-logind[1559]: Watching system buttons on /dev/input/event3 (Power Button) Sep 13 02:31:40.091326 jq[1562]: true Sep 13 02:31:39.992316 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 02:31:39.992326 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 13 02:31:40.091569 tar[1564]: linux-amd64/LICENSE Sep 13 02:31:40.091569 tar[1564]: linux-amd64/helm Sep 13 02:31:39.992540 systemd-logind[1559]: New seat seat0. Sep 13 02:31:40.091798 jq[1566]: true Sep 13 02:31:39.994783 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 02:31:39.995185 systemd[1]: Starting update-engine.service... Sep 13 02:31:40.008940 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 13 02:31:40.026918 systemd[1]: Started dbus.service. Sep 13 02:31:40.053013 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 02:31:40.053110 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 13 02:31:40.053264 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 02:31:40.053347 systemd[1]: Finished motdgen.service. Sep 13 02:31:40.066850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 02:31:40.066939 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 13 02:31:40.089793 systemd[1]: Started update-engine.service. 
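
extend-filesystems has noticed that the root filesystem on /dev/sdb9 is much smaller than its partition (553472 vs. 116605649 4k blocks, per the kernel line above) and kicks off an online grow. ext4 can be grown while mounted, and resize2fs with no size argument expands to fill the partition, so the operation reduces to something like this sketch (device name from this log, error handling elided):

    import subprocess

    def grow_ext4_online(device: str) -> None:
        # resize2fs grows a mounted ext4 filesystem in place when the
        # partition behind it is larger than the current filesystem.
        subprocess.run(["resize2fs", device], check=True)

    grow_ext4_online("/dev/sdb9")
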
Sep 13 02:31:40.095053 env[1567]: time="2025-09-13T02:31:40.095028213Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 13 02:31:40.098946 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 13 02:31:40.099047 systemd[1]: Condition check resulted in tcsd.service being skipped. Sep 13 02:31:40.103288 env[1567]: time="2025-09-13T02:31:40.103271744Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 02:31:40.103311 systemd[1]: Started systemd-logind.service. Sep 13 02:31:40.104862 env[1567]: time="2025-09-13T02:31:40.104816770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.105483 env[1567]: time="2025-09-13T02:31:40.105435608Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:31:40.105483 env[1567]: time="2025-09-13T02:31:40.105456438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.107292 env[1567]: time="2025-09-13T02:31:40.107281544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:31:40.107325 env[1567]: time="2025-09-13T02:31:40.107292434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.107325 env[1567]: time="2025-09-13T02:31:40.107300285Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 13 02:31:40.107325 env[1567]: time="2025-09-13T02:31:40.107305931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.107380 env[1567]: time="2025-09-13T02:31:40.107347319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.109530 env[1567]: time="2025-09-13T02:31:40.109488080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 02:31:40.109634 env[1567]: time="2025-09-13T02:31:40.109569639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 02:31:40.109634 env[1567]: time="2025-09-13T02:31:40.109580322Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 02:31:40.109634 env[1567]: time="2025-09-13T02:31:40.109606905Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 13 02:31:40.109634 env[1567]: time="2025-09-13T02:31:40.109616712Z" level=info msg="metadata content store policy set" policy=shared Sep 13 02:31:40.113121 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 13 02:31:40.113812 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Sep 13 02:31:40.114197 systemd[1]: Started locksmithd.service. Sep 13 02:31:40.118538 env[1567]: time="2025-09-13T02:31:40.118524536Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 02:31:40.118572 env[1567]: time="2025-09-13T02:31:40.118541909Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 02:31:40.118572 env[1567]: time="2025-09-13T02:31:40.118550216Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 02:31:40.118572 env[1567]: time="2025-09-13T02:31:40.118568226Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118576135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118583696Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118594030Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118601535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118608204Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118615468Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118631 env[1567]: time="2025-09-13T02:31:40.118623292Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118766 env[1567]: time="2025-09-13T02:31:40.118633030Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 02:31:40.118766 env[1567]: time="2025-09-13T02:31:40.118681570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 02:31:40.118766 env[1567]: time="2025-09-13T02:31:40.118730655Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 02:31:40.118880 env[1567]: time="2025-09-13T02:31:40.118872592Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 02:31:40.118901 env[1567]: time="2025-09-13T02:31:40.118890412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.118918 env[1567]: time="2025-09-13T02:31:40.118899888Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 02:31:40.118938 env[1567]: time="2025-09-13T02:31:40.118927749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.118938 env[1567]: time="2025-09-13T02:31:40.118935374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 13 02:31:40.118972 env[1567]: time="2025-09-13T02:31:40.118942299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.118972 env[1567]: time="2025-09-13T02:31:40.118948578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.118972 env[1567]: time="2025-09-13T02:31:40.118958163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.118972 env[1567]: time="2025-09-13T02:31:40.118965913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119046 env[1567]: time="2025-09-13T02:31:40.118974504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119046 env[1567]: time="2025-09-13T02:31:40.118983203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119046 env[1567]: time="2025-09-13T02:31:40.118991022Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119055402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119063868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119070120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119076131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119086413Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 13 02:31:40.119097 env[1567]: time="2025-09-13T02:31:40.119093312Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 02:31:40.119186 env[1567]: time="2025-09-13T02:31:40.119104323Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 13 02:31:40.119186 env[1567]: time="2025-09-13T02:31:40.119127023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 02:31:40.119419 env[1567]: time="2025-09-13T02:31:40.119240027Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.119444119Z" level=info msg="Connect containerd service" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.119485524Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.119905217Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120008796Z" level=info msg="Start subscribing containerd event" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120023803Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120038906Z" level=info msg="Start recovering state" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120047867Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120070639Z" level=info msg="containerd successfully booted in 0.025376s" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120072196Z" level=info msg="Start event monitor" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120083482Z" level=info msg="Start snapshots syncer" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120090628Z" level=info msg="Start cni network conf syncer for default" Sep 13 02:31:40.122131 env[1567]: time="2025-09-13T02:31:40.120101822Z" level=info msg="Start streaming server" Sep 13 02:31:40.120522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 02:31:40.120615 systemd[1]: Reached target system-config.target. Sep 13 02:31:40.128511 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 02:31:40.128584 systemd[1]: Reached target user-config.target. Sep 13 02:31:40.136431 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 02:31:40.138154 systemd[1]: Started containerd.service. Sep 13 02:31:40.144683 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 13 02:31:40.172675 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 02:31:40.393070 tar[1564]: linux-amd64/README.md Sep 13 02:31:40.395822 systemd[1]: Finished prepare-helm.service. Sep 13 02:31:40.438482 systemd-networkd[1319]: bond0: Gained IPv6LL Sep 13 02:31:40.459362 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 13 02:31:40.487715 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Sep 13 02:31:40.487820 extend-filesystems[1548]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 13 02:31:40.487820 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 13 02:31:40.487820 extend-filesystems[1548]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 13 02:31:40.538457 extend-filesystems[1532]: Resized filesystem in /dev/sdb9 Sep 13 02:31:40.488132 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 02:31:40.488214 systemd[1]: Finished extend-filesystems.service. Sep 13 02:31:40.658880 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 02:31:40.670760 systemd[1]: Finished sshd-keygen.service. Sep 13 02:31:40.678504 systemd[1]: Starting issuegen.service... Sep 13 02:31:40.686683 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 02:31:40.686781 systemd[1]: Finished issuegen.service. Sep 13 02:31:40.695451 systemd[1]: Starting systemd-user-sessions.service... Sep 13 02:31:40.704653 systemd[1]: Finished systemd-user-sessions.service. Sep 13 02:31:40.714179 systemd[1]: Started getty@tty1.service. Sep 13 02:31:40.722179 systemd[1]: Started serial-getty@ttyS1.service. Sep 13 02:31:40.731558 systemd[1]: Reached target getty.target. Sep 13 02:31:40.891135 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 13 02:31:40.901403 systemd[1]: Reached target network-online.target. Sep 13 02:31:40.913175 systemd[1]: Starting kubelet.service... Sep 13 02:31:42.006595 systemd[1]: Started kubelet.service. 
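
The long CRI configuration dump that containerd printed before "containerd successfully booted" pins down the settings that matter for kubelet later: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.6, and CNI directories /opt/cni/bin and /etc/cni/net.d. Reconstructed as a containerd 1.6 config.toml fragment (a sketch assembled from those dumped values, not the file itself):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error in the same dump is expected at this stage: nothing has written a network config into /etc/cni/net.d yet.
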
Sep 13 02:31:42.675604 kubelet[1635]: E0913 02:31:42.675582 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:31:42.676836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:31:42.676910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:31:42.677086 systemd[1]: kubelet.service: Consumed 1.168s CPU time. Sep 13 02:31:45.751665 coreos-metadata[1524]: Sep 13 02:31:45.751 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 13 02:31:45.751817 coreos-metadata[1527]: Sep 13 02:31:45.751 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 13 02:31:45.759421 login[1630]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Sep 13 02:31:45.759854 login[1629]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 02:31:45.767684 systemd-logind[1559]: New session 2 of user core. Sep 13 02:31:45.768181 systemd[1]: Created slice user-500.slice. Sep 13 02:31:45.768742 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 02:31:45.774252 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 02:31:45.774935 systemd[1]: Starting user@500.service... Sep 13 02:31:45.776922 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:45.845635 systemd[1652]: Queued start job for default target default.target. Sep 13 02:31:45.845873 systemd[1652]: Reached target paths.target. Sep 13 02:31:45.845885 systemd[1652]: Reached target sockets.target. Sep 13 02:31:45.845893 systemd[1652]: Reached target timers.target. Sep 13 02:31:45.845900 systemd[1652]: Reached target basic.target. Sep 13 02:31:45.845919 systemd[1652]: Reached target default.target. Sep 13 02:31:45.845934 systemd[1652]: Startup finished in 65ms. Sep 13 02:31:45.845983 systemd[1]: Started user@500.service. Sep 13 02:31:45.846525 systemd[1]: Started session-2.scope. Sep 13 02:31:46.588712 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Sep 13 02:31:46.588869 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Sep 13 02:31:46.752157 coreos-metadata[1527]: Sep 13 02:31:46.752 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 02:31:46.752946 coreos-metadata[1524]: Sep 13 02:31:46.752 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 13 02:31:46.764705 login[1630]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 02:31:46.767836 systemd-logind[1559]: New session 1 of user core. Sep 13 02:31:46.768278 systemd[1]: Started session-1.scope. Sep 13 02:31:46.777719 systemd[1]: Created slice system-sshd.slice. Sep 13 02:31:46.778321 systemd[1]: Started sshd@0-145.40.90.231:22-139.178.89.65:33946.service. 
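
The kubelet exit above is the normal pre-bootstrap state, not a packaging problem: /var/lib/kubelet/config.yaml is materialized by kubeadm during init/join, so until that runs the unit fails and systemd keeps restarting it (visible again further down). For orientation, a minimal file of the expected kind looks roughly like this (a sketch; the real file is generated from the cluster's KubeletConfiguration):

    # /var/lib/kubelet/config.yaml, normally written by kubeadm
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd     # matches SystemdCgroup=true on the runc runtime
    staticPodPath: /etc/kubernetes/manifests
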
Sep 13 02:31:46.822523 sshd[1673]: Accepted publickey for core from 139.178.89.65 port 33946 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:46.823308 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:46.826143 systemd-logind[1559]: New session 3 of user core. Sep 13 02:31:46.826764 systemd[1]: Started session-3.scope. Sep 13 02:31:46.881749 systemd[1]: Started sshd@1-145.40.90.231:22-139.178.89.65:33960.service. Sep 13 02:31:46.908099 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 33960 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:46.908810 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:46.911121 systemd-logind[1559]: New session 4 of user core. Sep 13 02:31:46.911605 systemd[1]: Started session-4.scope. Sep 13 02:31:46.962006 sshd[1678]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:46.964196 systemd[1]: sshd@1-145.40.90.231:22-139.178.89.65:33960.service: Deactivated successfully. Sep 13 02:31:46.964743 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 02:31:46.965248 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Sep 13 02:31:46.966185 systemd[1]: Started sshd@2-145.40.90.231:22-139.178.89.65:33964.service. Sep 13 02:31:46.966863 systemd-logind[1559]: Removed session 4. Sep 13 02:31:46.995930 sshd[1684]: Accepted publickey for core from 139.178.89.65 port 33964 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:46.996863 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:47.000111 systemd-logind[1559]: New session 5 of user core. Sep 13 02:31:47.000790 systemd[1]: Started session-5.scope. Sep 13 02:31:47.055795 sshd[1684]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:47.057184 systemd[1]: sshd@2-145.40.90.231:22-139.178.89.65:33964.service: Deactivated successfully. Sep 13 02:31:47.057573 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 02:31:47.057884 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Sep 13 02:31:47.058319 systemd-logind[1559]: Removed session 5. Sep 13 02:31:47.104493 systemd-timesyncd[1508]: Contacted time server 23.142.248.8:123 (0.flatcar.pool.ntp.org). Sep 13 02:31:47.104655 systemd-timesyncd[1508]: Initial clock synchronization to Sat 2025-09-13 02:31:46.829678 UTC. Sep 13 02:31:47.862477 coreos-metadata[1524]: Sep 13 02:31:47.862 INFO Fetch successful Sep 13 02:31:47.899749 unknown[1524]: wrote ssh authorized keys file for user: core Sep 13 02:31:47.913944 update-ssh-keys[1689]: Updated "/home/core/.ssh/authorized_keys" Sep 13 02:31:47.915344 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 13 02:31:48.415594 coreos-metadata[1527]: Sep 13 02:31:48.415 INFO Fetch successful Sep 13 02:31:48.495999 systemd[1]: Finished coreos-metadata.service. Sep 13 02:31:48.496795 systemd[1]: Started packet-phone-home.service. Sep 13 02:31:48.496913 systemd[1]: Reached target multi-user.target. Sep 13 02:31:48.497523 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 02:31:48.501713 curl[1692]: % Total % Received % Xferd Average Speed Time Time Time Current Sep 13 02:31:48.501864 curl[1692]: Dload Upload Total Spent Left Speed Sep 13 02:31:48.501754 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Sep 13 02:31:48.501828 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 02:31:48.501967 systemd[1]: Startup finished in 2.046s (kernel) + 30.280s (initrd) + 15.381s (userspace) = 47.708s. Sep 13 02:31:48.934275 curl[1692]: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Sep 13 02:31:48.936667 systemd[1]: packet-phone-home.service: Deactivated successfully. Sep 13 02:31:52.733608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 02:31:52.734124 systemd[1]: Stopped kubelet.service. Sep 13 02:31:52.734225 systemd[1]: kubelet.service: Consumed 1.168s CPU time. Sep 13 02:31:52.735919 systemd[1]: Starting kubelet.service... Sep 13 02:31:52.983740 systemd[1]: Started kubelet.service. Sep 13 02:31:53.007663 kubelet[1699]: E0913 02:31:53.007610 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:31:53.009718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:31:53.009795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:31:56.870850 systemd[1]: Started sshd@3-145.40.90.231:22-139.178.89.65:34222.service. Sep 13 02:31:56.898051 sshd[1717]: Accepted publickey for core from 139.178.89.65 port 34222 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:56.898921 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:56.901948 systemd-logind[1559]: New session 6 of user core. Sep 13 02:31:56.902566 systemd[1]: Started session-6.scope. Sep 13 02:31:56.956537 sshd[1717]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:56.958027 systemd[1]: sshd@3-145.40.90.231:22-139.178.89.65:34222.service: Deactivated successfully. Sep 13 02:31:56.958318 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 02:31:56.958681 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Sep 13 02:31:56.959194 systemd[1]: Started sshd@4-145.40.90.231:22-139.178.89.65:34224.service. Sep 13 02:31:56.959624 systemd-logind[1559]: Removed session 6. Sep 13 02:31:56.986523 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 34224 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:56.987334 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:56.990100 systemd-logind[1559]: New session 7 of user core. Sep 13 02:31:56.990706 systemd[1]: Started session-7.scope. Sep 13 02:31:57.042008 sshd[1723]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:57.043601 systemd[1]: sshd@4-145.40.90.231:22-139.178.89.65:34224.service: Deactivated successfully. Sep 13 02:31:57.043893 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 02:31:57.044214 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Sep 13 02:31:57.044795 systemd[1]: Started sshd@5-145.40.90.231:22-139.178.89.65:34228.service. Sep 13 02:31:57.045233 systemd-logind[1559]: Removed session 7.
Sep 13 02:31:57.072086 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 34228 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:57.073058 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:57.076153 systemd-logind[1559]: New session 8 of user core. Sep 13 02:31:57.076900 systemd[1]: Started session-8.scope. Sep 13 02:31:57.131789 sshd[1729]: pam_unix(sshd:session): session closed for user core Sep 13 02:31:57.133288 systemd[1]: sshd@5-145.40.90.231:22-139.178.89.65:34228.service: Deactivated successfully. Sep 13 02:31:57.133623 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 02:31:57.133949 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Sep 13 02:31:57.134491 systemd[1]: Started sshd@6-145.40.90.231:22-139.178.89.65:34238.service. Sep 13 02:31:57.134934 systemd-logind[1559]: Removed session 8. Sep 13 02:31:57.161795 sshd[1735]: Accepted publickey for core from 139.178.89.65 port 34238 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM Sep 13 02:31:57.162765 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 02:31:57.165712 systemd-logind[1559]: New session 9 of user core. Sep 13 02:31:57.166471 systemd[1]: Started session-9.scope. Sep 13 02:31:57.246677 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 02:31:57.247385 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 02:31:57.271141 systemd[1]: Starting docker.service... Sep 13 02:31:57.288859 env[1752]: time="2025-09-13T02:31:57.288831765Z" level=info msg="Starting up" Sep 13 02:31:57.289534 env[1752]: time="2025-09-13T02:31:57.289521627Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 02:31:57.289534 env[1752]: time="2025-09-13T02:31:57.289531998Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 02:31:57.289584 env[1752]: time="2025-09-13T02:31:57.289544075Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 02:31:57.289584 env[1752]: time="2025-09-13T02:31:57.289550643Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 02:31:57.290353 env[1752]: time="2025-09-13T02:31:57.290341890Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 02:31:57.290353 env[1752]: time="2025-09-13T02:31:57.290351479Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 02:31:57.290410 env[1752]: time="2025-09-13T02:31:57.290363232Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 02:31:57.290410 env[1752]: time="2025-09-13T02:31:57.290371149Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 02:31:57.315267 env[1752]: time="2025-09-13T02:31:57.315254147Z" level=info msg="Loading containers: start." Sep 13 02:31:57.443383 kernel: Initializing XFRM netlink socket Sep 13 02:31:57.508152 env[1752]: time="2025-09-13T02:31:57.508106331Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Sep 13 02:31:57.575515 systemd-networkd[1319]: docker0: Link UP Sep 13 02:31:57.599507 env[1752]: time="2025-09-13T02:31:57.599435113Z" level=info msg="Loading containers: done." Sep 13 02:31:57.614922 env[1752]: time="2025-09-13T02:31:57.614858882Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 02:31:57.615217 env[1752]: time="2025-09-13T02:31:57.615185912Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 02:31:57.615431 env[1752]: time="2025-09-13T02:31:57.615393285Z" level=info msg="Daemon has completed initialization" Sep 13 02:31:57.617128 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4002513110-merged.mount: Deactivated successfully. Sep 13 02:31:57.622227 systemd[1]: Started docker.service. Sep 13 02:31:57.625207 env[1752]: time="2025-09-13T02:31:57.625184987Z" level=info msg="API listen on /run/docker.sock" Sep 13 02:31:58.807850 env[1567]: time="2025-09-13T02:31:58.807707933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 02:31:59.431663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139532942.mount: Deactivated successfully. Sep 13 02:32:00.640063 env[1567]: time="2025-09-13T02:32:00.640033900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:00.640775 env[1567]: time="2025-09-13T02:32:00.640762763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:00.641821 env[1567]: time="2025-09-13T02:32:00.641805322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:00.643567 env[1567]: time="2025-09-13T02:32:00.643341655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:00.644043 env[1567]: time="2025-09-13T02:32:00.644027700Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 02:32:00.644416 env[1567]: time="2025-09-13T02:32:00.644374718Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 02:32:02.099071 env[1567]: time="2025-09-13T02:32:02.099011631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:02.099699 env[1567]: time="2025-09-13T02:32:02.099661271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:02.100751 env[1567]: time="2025-09-13T02:32:02.100709498Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:02.101755 env[1567]: time="2025-09-13T02:32:02.101713043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:02.102163 env[1567]: time="2025-09-13T02:32:02.102124867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 02:32:02.102537 env[1567]: time="2025-09-13T02:32:02.102523334Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 02:32:03.232298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 02:32:03.232449 systemd[1]: Stopped kubelet.service. Sep 13 02:32:03.233322 systemd[1]: Starting kubelet.service... Sep 13 02:32:03.370633 env[1567]: time="2025-09-13T02:32:03.370568766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:03.430370 env[1567]: time="2025-09-13T02:32:03.430315806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:03.442061 systemd[1]: Started kubelet.service. Sep 13 02:32:03.458314 env[1567]: time="2025-09-13T02:32:03.458291069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:03.459367 env[1567]: time="2025-09-13T02:32:03.459344509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:03.459823 env[1567]: time="2025-09-13T02:32:03.459808346Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 02:32:03.460095 env[1567]: time="2025-09-13T02:32:03.460080762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 02:32:03.472321 kubelet[1907]: E0913 02:32:03.472270 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 02:32:03.473349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 02:32:03.473433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 02:32:04.517580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519359492.mount: Deactivated successfully. 
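Note: the Docker daemon's bridge notice a little earlier ("Daemon option --bip can be used to set a preferred IP address") also has a config-file equivalent. A minimal sketch, assuming the conventional /etc/docker/daemon.json location; the subnet below is an example and not taken from this host:

    {
      "bip": "172.18.0.1/16"
    }

The value is the bridge's own address in CIDR form; dockerd then derives the docker0 subnet from it instead of defaulting to 172.17.0.0/16.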
Sep 13 02:32:04.951922 env[1567]: time="2025-09-13T02:32:04.951828194Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:04.952505 env[1567]: time="2025-09-13T02:32:04.952474243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:04.953160 env[1567]: time="2025-09-13T02:32:04.953125878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:04.953831 env[1567]: time="2025-09-13T02:32:04.953804716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:04.954154 env[1567]: time="2025-09-13T02:32:04.954141227Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 02:32:04.954484 env[1567]: time="2025-09-13T02:32:04.954468343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 02:32:05.598017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191809598.mount: Deactivated successfully. Sep 13 02:32:06.393608 env[1567]: time="2025-09-13T02:32:06.393550369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:06.394228 env[1567]: time="2025-09-13T02:32:06.394149935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:06.395477 env[1567]: time="2025-09-13T02:32:06.395462766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:06.396461 env[1567]: time="2025-09-13T02:32:06.396449588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:06.397342 env[1567]: time="2025-09-13T02:32:06.397325552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 02:32:06.397778 env[1567]: time="2025-09-13T02:32:06.397766040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 02:32:07.005820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632276640.mount: Deactivated successfully. 
Sep 13 02:32:07.007256 env[1567]: time="2025-09-13T02:32:07.007210241Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:07.007758 env[1567]: time="2025-09-13T02:32:07.007746447Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:07.008453 env[1567]: time="2025-09-13T02:32:07.008405227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:07.009144 env[1567]: time="2025-09-13T02:32:07.009119701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:07.009498 env[1567]: time="2025-09-13T02:32:07.009486326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 02:32:07.009815 env[1567]: time="2025-09-13T02:32:07.009804116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 02:32:07.585488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042436017.mount: Deactivated successfully. Sep 13 02:32:09.291071 env[1567]: time="2025-09-13T02:32:09.291042257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:09.291820 env[1567]: time="2025-09-13T02:32:09.291804686Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:09.293688 env[1567]: time="2025-09-13T02:32:09.293673420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:09.294776 env[1567]: time="2025-09-13T02:32:09.294761773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:09.295251 env[1567]: time="2025-09-13T02:32:09.295237009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 02:32:11.710848 systemd[1]: Stopped kubelet.service. Sep 13 02:32:11.712092 systemd[1]: Starting kubelet.service... Sep 13 02:32:11.726203 systemd[1]: Reloading. 
Sep 13 02:32:11.758291 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T02:32:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:32:11.758307 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2025-09-13T02:32:11Z" level=info msg="torcx already run" Sep 13 02:32:11.812432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:32:11.812442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 02:32:11.825077 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:32:11.896550 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 02:32:11.896589 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 02:32:11.896688 systemd[1]: Stopped kubelet.service. Sep 13 02:32:11.897496 systemd[1]: Starting kubelet.service... Sep 13 02:32:12.138136 systemd[1]: Started kubelet.service. Sep 13 02:32:12.169306 kubelet[2058]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:32:12.169306 kubelet[2058]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 02:32:12.169306 kubelet[2058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 02:32:12.169674 kubelet[2058]: I0913 02:32:12.169349 2058 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 02:32:12.458662 kubelet[2058]: I0913 02:32:12.458604 2058 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 02:32:12.458662 kubelet[2058]: I0913 02:32:12.458615 2058 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 02:32:12.458733 kubelet[2058]: I0913 02:32:12.458723 2058 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 02:32:12.485462 kubelet[2058]: I0913 02:32:12.485400 2058 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 02:32:12.486534 kubelet[2058]: E0913 02:32:12.486481 2058 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://145.40.90.231:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 02:32:12.489267 kubelet[2058]: E0913 02:32:12.489225 2058 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 02:32:12.489267 kubelet[2058]: I0913 02:32:12.489238 2058 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 02:32:12.513434 kubelet[2058]: I0913 02:32:12.513394 2058 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 02:32:12.513564 kubelet[2058]: I0913 02:32:12.513521 2058 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 02:32:12.513674 kubelet[2058]: I0913 02:32:12.513537 2058 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-6378d470a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 02:32:12.513674 kubelet[2058]: I0913 02:32:12.513645 2058 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 02:32:12.513674 kubelet[2058]: I0913 02:32:12.513652 2058 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 02:32:12.513815 kubelet[2058]: I0913 02:32:12.513731 2058 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:32:12.517834 kubelet[2058]: I0913 02:32:12.517788 2058 kubelet.go:480] "Attempting to sync node with API server" Sep 13 02:32:12.517834 kubelet[2058]: I0913 02:32:12.517799 2058 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 02:32:12.517834 kubelet[2058]: I0913 02:32:12.517814 2058 kubelet.go:386] "Adding apiserver pod source" Sep 13 02:32:12.517834 kubelet[2058]: I0913 02:32:12.517825 2058 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 02:32:12.545220 kubelet[2058]: I0913 02:32:12.545194 2058 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 02:32:12.545845 kubelet[2058]: I0913 02:32:12.545794 2058 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 02:32:12.550151 kubelet[2058]: W0913 02:32:12.550105 2058 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 02:32:12.554773 kubelet[2058]: E0913 02:32:12.554713 2058 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://145.40.90.231:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 02:32:12.554913 kubelet[2058]: E0913 02:32:12.554890 2058 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://145.40.90.231:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-6378d470a1&limit=500&resourceVersion=0\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 02:32:12.556432 kubelet[2058]: I0913 02:32:12.556410 2058 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 02:32:12.556558 kubelet[2058]: I0913 02:32:12.556542 2058 server.go:1289] "Started kubelet" Sep 13 02:32:12.556688 kubelet[2058]: I0913 02:32:12.556631 2058 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 02:32:12.558307 kubelet[2058]: E0913 02:32:12.558285 2058 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 02:32:12.559728 kubelet[2058]: I0913 02:32:12.559707 2058 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 02:32:12.564233 kubelet[2058]: I0913 02:32:12.564185 2058 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 02:32:12.564862 kubelet[2058]: I0913 02:32:12.564822 2058 server.go:317] "Adding debug handlers to kubelet server" Sep 13 02:32:12.565010 kubelet[2058]: E0913 02:32:12.563938 2058 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.231:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-6378d470a1.1864b6c9d4d04284 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-6378d470a1,UID:ci-3510.3.8-n-6378d470a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-6378d470a1,},FirstTimestamp:2025-09-13 02:32:12.5564361 +0000 UTC m=+0.414913138,LastTimestamp:2025-09-13 02:32:12.5564361 +0000 UTC m=+0.414913138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-6378d470a1,}" Sep 13 02:32:12.567025 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 02:32:12.567061 kubelet[2058]: I0913 02:32:12.567046 2058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 02:32:12.567159 kubelet[2058]: I0913 02:32:12.567105 2058 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 02:32:12.567194 kubelet[2058]: E0913 02:32:12.567179 2058 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-6378d470a1\" not found" Sep 13 02:32:12.567194 kubelet[2058]: I0913 02:32:12.567188 2058 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 02:32:12.567253 kubelet[2058]: I0913 02:32:12.567209 2058 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 02:32:12.567253 kubelet[2058]: I0913 02:32:12.567246 2058 reconciler.go:26] "Reconciler: start to sync state" Sep 13 02:32:12.567327 kubelet[2058]: E0913 02:32:12.567315 2058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-6378d470a1?timeout=10s\": dial tcp 145.40.90.231:6443: connect: connection refused" interval="200ms" Sep 13 02:32:12.567420 kubelet[2058]: I0913 02:32:12.567380 2058 factory.go:223] Registration of the systemd container factory successfully Sep 13 02:32:12.567460 kubelet[2058]: I0913 02:32:12.567426 2058 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 02:32:12.567487 kubelet[2058]: E0913 02:32:12.567468 2058 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://145.40.90.231:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 02:32:12.568017 kubelet[2058]: I0913 02:32:12.568006 2058 factory.go:223] Registration of the containerd container factory successfully Sep 13 02:32:12.577181 kubelet[2058]: I0913 02:32:12.577162 2058 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 02:32:12.577688 kubelet[2058]: I0913 02:32:12.577679 2058 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 02:32:12.577719 kubelet[2058]: I0913 02:32:12.577692 2058 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 02:32:12.577719 kubelet[2058]: I0913 02:32:12.577707 2058 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 02:32:12.577719 kubelet[2058]: I0913 02:32:12.577714 2058 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 02:32:12.577778 kubelet[2058]: E0913 02:32:12.577741 2058 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 02:32:12.578030 kubelet[2058]: E0913 02:32:12.578016 2058 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://145.40.90.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 02:32:12.585473 kubelet[2058]: I0913 02:32:12.585465 2058 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 02:32:12.585473 kubelet[2058]: I0913 02:32:12.585472 2058 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 02:32:12.585536 kubelet[2058]: I0913 02:32:12.585481 2058 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:32:12.586346 kubelet[2058]: I0913 02:32:12.586340 2058 policy_none.go:49] "None policy: Start" Sep 13 02:32:12.586376 kubelet[2058]: I0913 02:32:12.586349 2058 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 02:32:12.586376 kubelet[2058]: I0913 02:32:12.586359 2058 state_mem.go:35] "Initializing new in-memory state store" Sep 13 02:32:12.588703 systemd[1]: Created slice kubepods.slice. Sep 13 02:32:12.591025 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 02:32:12.592558 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 02:32:12.614253 kubelet[2058]: E0913 02:32:12.614230 2058 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 02:32:12.614421 kubelet[2058]: I0913 02:32:12.614386 2058 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 02:32:12.614421 kubelet[2058]: I0913 02:32:12.614399 2058 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 02:32:12.614594 kubelet[2058]: I0913 02:32:12.614582 2058 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 02:32:12.615045 kubelet[2058]: E0913 02:32:12.615029 2058 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 02:32:12.615127 kubelet[2058]: E0913 02:32:12.615060 2058 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-6378d470a1\" not found" Sep 13 02:32:12.640033 kubelet[2058]: E0913 02:32:12.639834 2058 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.231:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.231:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-6378d470a1.1864b6c9d4d04284 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-6378d470a1,UID:ci-3510.3.8-n-6378d470a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-6378d470a1,},FirstTimestamp:2025-09-13 02:32:12.5564361 +0000 UTC m=+0.414913138,LastTimestamp:2025-09-13 02:32:12.5564361 +0000 UTC m=+0.414913138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-6378d470a1,}" Sep 13 02:32:12.690623 systemd[1]: Created slice kubepods-burstable-podceff1b8ebde08fd1b29af31a3c8fb960.slice. Sep 13 02:32:12.702097 kubelet[2058]: E0913 02:32:12.702049 2058 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-6378d470a1\" not found" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.718318 kubelet[2058]: I0913 02:32:12.718145 2058 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.719103 kubelet[2058]: E0913 02:32:12.718984 2058 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.231:6443/api/v1/nodes\": dial tcp 145.40.90.231:6443: connect: connection refused" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.720393 systemd[1]: Created slice kubepods-burstable-podc097ff71f005dae05cd589fd0805dd81.slice. Sep 13 02:32:12.746435 kubelet[2058]: E0913 02:32:12.746340 2058 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-6378d470a1\" not found" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.752964 systemd[1]: Created slice kubepods-burstable-pod55866a686c0d2b93ab387e551c9ecff9.slice. 
Sep 13 02:32:12.757008 kubelet[2058]: E0913 02:32:12.756918 2058 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-6378d470a1\" not found" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.768930 kubelet[2058]: I0913 02:32:12.768833 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.768930 kubelet[2058]: E0913 02:32:12.768865 2058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-6378d470a1?timeout=10s\": dial tcp 145.40.90.231:6443: connect: connection refused" interval="400ms" Sep 13 02:32:12.769248 kubelet[2058]: I0913 02:32:12.768951 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769248 kubelet[2058]: I0913 02:32:12.769027 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769248 kubelet[2058]: I0913 02:32:12.769087 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769248 kubelet[2058]: I0913 02:32:12.769142 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769248 kubelet[2058]: I0913 02:32:12.769190 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769743 kubelet[2058]: I0913 02:32:12.769231 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55866a686c0d2b93ab387e551c9ecff9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" (UID: \"55866a686c0d2b93ab387e551c9ecff9\") " 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769743 kubelet[2058]: I0913 02:32:12.769273 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.769743 kubelet[2058]: I0913 02:32:12.769319 2058 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.923221 kubelet[2058]: I0913 02:32:12.923159 2058 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:12.923973 kubelet[2058]: E0913 02:32:12.923913 2058 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.231:6443/api/v1/nodes\": dial tcp 145.40.90.231:6443: connect: connection refused" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:13.006882 env[1567]: time="2025-09-13T02:32:13.006794721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-6378d470a1,Uid:ceff1b8ebde08fd1b29af31a3c8fb960,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:13.048694 env[1567]: time="2025-09-13T02:32:13.048606615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-6378d470a1,Uid:c097ff71f005dae05cd589fd0805dd81,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:13.058818 env[1567]: time="2025-09-13T02:32:13.058746462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-6378d470a1,Uid:55866a686c0d2b93ab387e551c9ecff9,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:13.170302 kubelet[2058]: E0913 02:32:13.170177 2058 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.231:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-6378d470a1?timeout=10s\": dial tcp 145.40.90.231:6443: connect: connection refused" interval="800ms" Sep 13 02:32:13.328729 kubelet[2058]: I0913 02:32:13.328565 2058 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:13.329424 kubelet[2058]: E0913 02:32:13.329320 2058 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.231:6443/api/v1/nodes\": dial tcp 145.40.90.231:6443: connect: connection refused" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:13.578993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080780405.mount: Deactivated successfully. 
Sep 13 02:32:13.580052 kubelet[2058]: E0913 02:32:13.579999 2058 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://145.40.90.231:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.231:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 02:32:13.580151 env[1567]: time="2025-09-13T02:32:13.580132078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.581475 env[1567]: time="2025-09-13T02:32:13.581461718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.581916 env[1567]: time="2025-09-13T02:32:13.581902460Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.582787 env[1567]: time="2025-09-13T02:32:13.582745301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.583489 env[1567]: time="2025-09-13T02:32:13.583476765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.585081 env[1567]: time="2025-09-13T02:32:13.585067125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.586717 env[1567]: time="2025-09-13T02:32:13.586703651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.587099 env[1567]: time="2025-09-13T02:32:13.587086685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.587908 env[1567]: time="2025-09-13T02:32:13.587897178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.588358 env[1567]: time="2025-09-13T02:32:13.588346216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.588749 env[1567]: time="2025-09-13T02:32:13.588740048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:13.589150 env[1567]: time="2025-09-13T02:32:13.589114925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 
02:32:13.593109 env[1567]: time="2025-09-13T02:32:13.593076992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:13.593109 env[1567]: time="2025-09-13T02:32:13.593099168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:13.593227 env[1567]: time="2025-09-13T02:32:13.593109415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:13.593227 env[1567]: time="2025-09-13T02:32:13.593174798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15c31bc299d3bd94d5eeb889f25f89d3265b8e8c1082067c1c73fc6cb459bf42 pid=2113 runtime=io.containerd.runc.v2 Sep 13 02:32:13.594375 env[1567]: time="2025-09-13T02:32:13.594342853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:13.594375 env[1567]: time="2025-09-13T02:32:13.594366314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:13.594464 env[1567]: time="2025-09-13T02:32:13.594373998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:13.594464 env[1567]: time="2025-09-13T02:32:13.594442771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96527b4d90c72560ae161b779c1c1a15867331e545805b413405985481a2aa55 pid=2129 runtime=io.containerd.runc.v2 Sep 13 02:32:13.596029 env[1567]: time="2025-09-13T02:32:13.595996443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:13.596029 env[1567]: time="2025-09-13T02:32:13.596020390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:13.596029 env[1567]: time="2025-09-13T02:32:13.596027696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:13.596147 env[1567]: time="2025-09-13T02:32:13.596098389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a4875d64a40509497693733f1818f018bb3add8d079d9cc1c893b72cc65db88 pid=2152 runtime=io.containerd.runc.v2 Sep 13 02:32:13.599351 systemd[1]: Started cri-containerd-15c31bc299d3bd94d5eeb889f25f89d3265b8e8c1082067c1c73fc6cb459bf42.scope. Sep 13 02:32:13.601089 systemd[1]: Started cri-containerd-96527b4d90c72560ae161b779c1c1a15867331e545805b413405985481a2aa55.scope. Sep 13 02:32:13.603247 systemd[1]: Started cri-containerd-6a4875d64a40509497693733f1818f018bb3add8d079d9cc1c893b72cc65db88.scope. 
Sep 13 02:32:13.623065 env[1567]: time="2025-09-13T02:32:13.623035935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-6378d470a1,Uid:55866a686c0d2b93ab387e551c9ecff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"96527b4d90c72560ae161b779c1c1a15867331e545805b413405985481a2aa55\"" Sep 13 02:32:13.623172 env[1567]: time="2025-09-13T02:32:13.623096347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-6378d470a1,Uid:ceff1b8ebde08fd1b29af31a3c8fb960,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c31bc299d3bd94d5eeb889f25f89d3265b8e8c1082067c1c73fc6cb459bf42\"" Sep 13 02:32:13.625131 env[1567]: time="2025-09-13T02:32:13.625114108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-6378d470a1,Uid:c097ff71f005dae05cd589fd0805dd81,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a4875d64a40509497693733f1818f018bb3add8d079d9cc1c893b72cc65db88\"" Sep 13 02:32:13.625334 env[1567]: time="2025-09-13T02:32:13.625318114Z" level=info msg="CreateContainer within sandbox \"15c31bc299d3bd94d5eeb889f25f89d3265b8e8c1082067c1c73fc6cb459bf42\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 02:32:13.625674 env[1567]: time="2025-09-13T02:32:13.625656725Z" level=info msg="CreateContainer within sandbox \"96527b4d90c72560ae161b779c1c1a15867331e545805b413405985481a2aa55\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 02:32:13.626528 env[1567]: time="2025-09-13T02:32:13.626513924Z" level=info msg="CreateContainer within sandbox \"6a4875d64a40509497693733f1818f018bb3add8d079d9cc1c893b72cc65db88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 02:32:13.631495 env[1567]: time="2025-09-13T02:32:13.631447840Z" level=info msg="CreateContainer within sandbox \"96527b4d90c72560ae161b779c1c1a15867331e545805b413405985481a2aa55\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6af6c38ccdf24d46ea0b6cfeff4a09185565384e445cc13513c1b8f3a3f807ad\"" Sep 13 02:32:13.631719 env[1567]: time="2025-09-13T02:32:13.631677443Z" level=info msg="StartContainer for \"6af6c38ccdf24d46ea0b6cfeff4a09185565384e445cc13513c1b8f3a3f807ad\"" Sep 13 02:32:13.632777 env[1567]: time="2025-09-13T02:32:13.632764531Z" level=info msg="CreateContainer within sandbox \"15c31bc299d3bd94d5eeb889f25f89d3265b8e8c1082067c1c73fc6cb459bf42\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0948ba4a3634b2f3117cd33778e6a6f95646cadc3a1e323e18ec38ec376250cd\"" Sep 13 02:32:13.632922 env[1567]: time="2025-09-13T02:32:13.632905665Z" level=info msg="StartContainer for \"0948ba4a3634b2f3117cd33778e6a6f95646cadc3a1e323e18ec38ec376250cd\"" Sep 13 02:32:13.633483 env[1567]: time="2025-09-13T02:32:13.633467077Z" level=info msg="CreateContainer within sandbox \"6a4875d64a40509497693733f1818f018bb3add8d079d9cc1c893b72cc65db88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aedfc3873eafc707b1f0dc4be6a90fe2c5a691612a7edd3c0f8aa5d207d4dedc\"" Sep 13 02:32:13.633638 env[1567]: time="2025-09-13T02:32:13.633624755Z" level=info msg="StartContainer for \"aedfc3873eafc707b1f0dc4be6a90fe2c5a691612a7edd3c0f8aa5d207d4dedc\"" Sep 13 02:32:13.640180 systemd[1]: Started cri-containerd-0948ba4a3634b2f3117cd33778e6a6f95646cadc3a1e323e18ec38ec376250cd.scope. 
Sep 13 02:32:13.640850 systemd[1]: Started cri-containerd-6af6c38ccdf24d46ea0b6cfeff4a09185565384e445cc13513c1b8f3a3f807ad.scope. Sep 13 02:32:13.642718 systemd[1]: Started cri-containerd-aedfc3873eafc707b1f0dc4be6a90fe2c5a691612a7edd3c0f8aa5d207d4dedc.scope. Sep 13 02:32:13.680939 env[1567]: time="2025-09-13T02:32:13.680900920Z" level=info msg="StartContainer for \"aedfc3873eafc707b1f0dc4be6a90fe2c5a691612a7edd3c0f8aa5d207d4dedc\" returns successfully" Sep 13 02:32:13.681034 env[1567]: time="2025-09-13T02:32:13.680955223Z" level=info msg="StartContainer for \"6af6c38ccdf24d46ea0b6cfeff4a09185565384e445cc13513c1b8f3a3f807ad\" returns successfully" Sep 13 02:32:13.681034 env[1567]: time="2025-09-13T02:32:13.681008510Z" level=info msg="StartContainer for \"0948ba4a3634b2f3117cd33778e6a6f95646cadc3a1e323e18ec38ec376250cd\" returns successfully" Sep 13 02:32:14.131295 kubelet[2058]: I0913 02:32:14.131278 2058 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.426555 kubelet[2058]: E0913 02:32:14.426484 2058 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-6378d470a1\" not found" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.520300 kubelet[2058]: I0913 02:32:14.520272 2058 apiserver.go:52] "Watching apiserver" Sep 13 02:32:14.525092 kubelet[2058]: I0913 02:32:14.525063 2058 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.525092 kubelet[2058]: E0913 02:32:14.525096 2058 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-6378d470a1\": node \"ci-3510.3.8-n-6378d470a1\" not found" Sep 13 02:32:14.568018 kubelet[2058]: I0913 02:32:14.568000 2058 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 02:32:14.568018 kubelet[2058]: I0913 02:32:14.568000 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.571951 kubelet[2058]: E0913 02:32:14.571939 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.571951 kubelet[2058]: I0913 02:32:14.571953 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.572764 kubelet[2058]: E0913 02:32:14.572726 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.572764 kubelet[2058]: I0913 02:32:14.572735 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.573436 kubelet[2058]: E0913 02:32:14.573396 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.580255 kubelet[2058]: I0913 02:32:14.580247 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1"
Sep 13 02:32:14.580824 kubelet[2058]: I0913 02:32:14.580816 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.581006 kubelet[2058]: E0913 02:32:14.580995 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.581335 kubelet[2058]: I0913 02:32:14.581329 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.581463 kubelet[2058]: E0913 02:32:14.581453 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:14.581974 kubelet[2058]: E0913 02:32:14.581967 2058 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:15.583701 kubelet[2058]: I0913 02:32:15.583641 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:15.584601 kubelet[2058]: I0913 02:32:15.583862 2058 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:15.591520 kubelet[2058]: I0913 02:32:15.591465 2058 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:15.591520 kubelet[2058]: I0913 02:32:15.591488 2058 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:17.130812 systemd[1]: Reloading. Sep 13 02:32:17.164684 /usr/lib/systemd/system-generators/torcx-generator[2399]: time="2025-09-13T02:32:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 02:32:17.164708 /usr/lib/systemd/system-generators/torcx-generator[2399]: time="2025-09-13T02:32:17Z" level=info msg="torcx already run" Sep 13 02:32:17.228886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 02:32:17.228897 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 02:32:17.243474 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 02:32:17.312067 systemd[1]: Stopping kubelet.service... Sep 13 02:32:17.337874 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 02:32:17.338085 systemd[1]: Stopped kubelet.service. Sep 13 02:32:17.339884 systemd[1]: Starting kubelet.service...
Sep 13 02:32:17.570951 systemd[1]: Started kubelet.service. Sep 13 02:32:17.613321 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:32:17.613321 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 02:32:17.613321 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 02:32:17.613728 kubelet[2462]: I0913 02:32:17.613372 2462 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 02:32:17.618346 kubelet[2462]: I0913 02:32:17.618327 2462 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 02:32:17.618346 kubelet[2462]: I0913 02:32:17.618342 2462 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 02:32:17.618536 kubelet[2462]: I0913 02:32:17.618502 2462 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 02:32:17.619485 kubelet[2462]: I0913 02:32:17.619446 2462 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 02:32:17.621723 kubelet[2462]: I0913 02:32:17.621683 2462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 02:32:17.623657 kubelet[2462]: E0913 02:32:17.623634 2462 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 02:32:17.623657 kubelet[2462]: I0913 02:32:17.623654 2462 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 02:32:17.656133 kubelet[2462]: I0913 02:32:17.656046 2462 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 02:32:17.656581 kubelet[2462]: I0913 02:32:17.656487 2462 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 02:32:17.656940 kubelet[2462]: I0913 02:32:17.656544 2462 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-6378d470a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 02:32:17.656940 kubelet[2462]: I0913 02:32:17.656916 2462 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 02:32:17.656940 kubelet[2462]: I0913 02:32:17.656946 2462 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 02:32:17.657522 kubelet[2462]: I0913 02:32:17.657053 2462 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:32:17.657522 kubelet[2462]: I0913 02:32:17.657485 2462 kubelet.go:480] "Attempting to sync node with API server" Sep 13 02:32:17.657522 kubelet[2462]: I0913 02:32:17.657517 2462 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 02:32:17.657947 kubelet[2462]: I0913 02:32:17.657562 2462 kubelet.go:386] "Adding apiserver pod source" Sep 13 02:32:17.657947 kubelet[2462]: I0913 02:32:17.657592 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 02:32:17.662436 kubelet[2462]: I0913 02:32:17.662377 2462 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 02:32:17.664416 kubelet[2462]: I0913 02:32:17.664344 2462 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 02:32:17.669060 kubelet[2462]: I0913 02:32:17.668985 2462 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 02:32:17.669265 kubelet[2462]: I0913 02:32:17.669075 2462 server.go:1289] "Started kubelet" Sep 13 02:32:17.669488 kubelet[2462]: I0913 02:32:17.669375 2462 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 02:32:17.669698 kubelet[2462]: I0913 02:32:17.669328 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 02:32:17.670345 kubelet[2462]: I0913 02:32:17.670284 2462 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 02:32:17.672190 kubelet[2462]: I0913 02:32:17.672150 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 02:32:17.672352 kubelet[2462]: I0913 02:32:17.672180 2462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 02:32:17.672352 kubelet[2462]: I0913 02:32:17.672338 2462 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 02:32:17.672635 kubelet[2462]: I0913 02:32:17.672486 2462 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 02:32:17.672635 kubelet[2462]: E0913 02:32:17.672584 2462 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-6378d470a1\" not found" Sep 13 02:32:17.672993 kubelet[2462]: I0913 02:32:17.672953 2462 reconciler.go:26] "Reconciler: start to sync state" Sep 13 02:32:17.673433 kubelet[2462]: I0913 02:32:17.673368 2462 server.go:317] "Adding debug handlers to kubelet server" Sep 13 02:32:17.674618 kubelet[2462]: E0913 02:32:17.674538 2462 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 02:32:17.680375 kubelet[2462]: I0913 02:32:17.680337 2462 factory.go:223] Registration of the containerd container factory successfully Sep 13 02:32:17.680537 kubelet[2462]: I0913 02:32:17.680402 2462 factory.go:223] Registration of the systemd container factory successfully Sep 13 02:32:17.680624 kubelet[2462]: I0913 02:32:17.680527 2462 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 02:32:17.688256 kubelet[2462]: I0913 02:32:17.688221 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 02:32:17.689389 kubelet[2462]: I0913 02:32:17.689374 2462 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 02:32:17.689444 kubelet[2462]: I0913 02:32:17.689394 2462 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 02:32:17.689444 kubelet[2462]: I0913 02:32:17.689412 2462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 02:32:17.689444 kubelet[2462]: I0913 02:32:17.689419 2462 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 02:32:17.689550 kubelet[2462]: E0913 02:32:17.689458 2462 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 02:32:17.703034 kubelet[2462]: I0913 02:32:17.703019 2462 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 02:32:17.703034 kubelet[2462]: I0913 02:32:17.703029 2462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 02:32:17.703155 kubelet[2462]: I0913 02:32:17.703042 2462 state_mem.go:36] "Initialized new in-memory state store" Sep 13 02:32:17.703155 kubelet[2462]: I0913 02:32:17.703131 2462 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 02:32:17.703155 kubelet[2462]: I0913 02:32:17.703139 2462 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 02:32:17.703155 kubelet[2462]: I0913 02:32:17.703152 2462 policy_none.go:49] "None policy: Start" Sep 13 02:32:17.703255 kubelet[2462]: I0913 02:32:17.703159 2462 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 02:32:17.703255 kubelet[2462]: I0913 02:32:17.703166 2462 state_mem.go:35] "Initializing new in-memory state store" Sep 13 02:32:17.703255 kubelet[2462]: I0913 02:32:17.703235 2462 state_mem.go:75] "Updated machine memory state" Sep 13 02:32:17.705434 kubelet[2462]: E0913 02:32:17.705424 2462 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 02:32:17.705529 kubelet[2462]: I0913 02:32:17.705520 2462 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 02:32:17.705575 kubelet[2462]: I0913 02:32:17.705529 2462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 02:32:17.705633 kubelet[2462]: I0913 02:32:17.705625 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 02:32:17.705944 kubelet[2462]: E0913 02:32:17.705934 2462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 02:32:17.791057 kubelet[2462]: I0913 02:32:17.790949 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.791057 kubelet[2462]: I0913 02:32:17.791059 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.791483 kubelet[2462]: I0913 02:32:17.791224 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.799947 kubelet[2462]: I0913 02:32:17.799865 2462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:17.800147 kubelet[2462]: E0913 02:32:17.799981 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.800147 kubelet[2462]: I0913 02:32:17.800099 2462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:17.801048 kubelet[2462]: I0913 02:32:17.800970 2462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:17.801241 kubelet[2462]: E0913 02:32:17.801062 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.810807 kubelet[2462]: I0913 02:32:17.810713 2462 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.821810 kubelet[2462]: I0913 02:32:17.821655 2462 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.822029 kubelet[2462]: I0913 02:32:17.821838 2462 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.973839 kubelet[2462]: I0913 02:32:17.973720 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.973839 kubelet[2462]: I0913 02:32:17.973819 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974263 kubelet[2462]: I0913 02:32:17.973882 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55866a686c0d2b93ab387e551c9ecff9-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" (UID: \"55866a686c0d2b93ab387e551c9ecff9\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1"
Sep 13 02:32:17.974263 kubelet[2462]: I0913 02:32:17.973962 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974263 kubelet[2462]: I0913 02:32:17.974035 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974263 kubelet[2462]: I0913 02:32:17.974113 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974778 kubelet[2462]: I0913 02:32:17.974240 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c097ff71f005dae05cd589fd0805dd81-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" (UID: \"c097ff71f005dae05cd589fd0805dd81\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974778 kubelet[2462]: I0913 02:32:17.974345 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:17.974778 kubelet[2462]: I0913 02:32:17.974454 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ceff1b8ebde08fd1b29af31a3c8fb960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-6378d470a1\" (UID: \"ceff1b8ebde08fd1b29af31a3c8fb960\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:18.143928 sudo[2510]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 02:32:18.144073 sudo[2510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 02:32:18.484716 sudo[2510]: pam_unix(sudo:session): session closed for user root Sep 13 02:32:18.658950 kubelet[2462]: I0913 02:32:18.658931 2462 apiserver.go:52] "Watching apiserver" Sep 13 02:32:18.673056 kubelet[2462]: I0913 02:32:18.673015 2462 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 02:32:18.694206 kubelet[2462]: I0913 02:32:18.694196 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:18.694257 kubelet[2462]: I0913 02:32:18.694214 2462 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1"
Sep 13 02:32:18.697907 kubelet[2462]: I0913 02:32:18.697869 2462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:18.697907 kubelet[2462]: E0913 02:32:18.697893 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-6378d470a1\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:18.698472 kubelet[2462]: I0913 02:32:18.698439 2462 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 13 02:32:18.698511 kubelet[2462]: E0913 02:32:18.698484 2462 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-6378d470a1\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" Sep 13 02:32:18.715157 kubelet[2462]: I0913 02:32:18.715123 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-6378d470a1" podStartSLOduration=3.715097541 podStartE2EDuration="3.715097541s" podCreationTimestamp="2025-09-13 02:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:18.709977594 +0000 UTC m=+1.134539980" watchObservedRunningTime="2025-09-13 02:32:18.715097541 +0000 UTC m=+1.139659925" Sep 13 02:32:18.719954 kubelet[2462]: I0913 02:32:18.719932 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-6378d470a1" podStartSLOduration=3.719922101 podStartE2EDuration="3.719922101s" podCreationTimestamp="2025-09-13 02:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:18.71520084 +0000 UTC m=+1.139763225" watchObservedRunningTime="2025-09-13 02:32:18.719922101 +0000 UTC m=+1.144484486" Sep 13 02:32:18.720076 kubelet[2462]: I0913 02:32:18.720002 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-6378d470a1" podStartSLOduration=1.719997691 podStartE2EDuration="1.719997691s" podCreationTimestamp="2025-09-13 02:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:18.719838563 +0000 UTC m=+1.144400949" watchObservedRunningTime="2025-09-13 02:32:18.719997691 +0000 UTC m=+1.144560077" Sep 13 02:32:19.984667 sudo[1738]: pam_unix(sudo:session): session closed for user root Sep 13 02:32:19.985431 sshd[1735]: pam_unix(sshd:session): session closed for user core Sep 13 02:32:19.986969 systemd[1]: sshd@6-145.40.90.231:22-139.178.89.65:34238.service: Deactivated successfully. Sep 13 02:32:19.987404 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 02:32:19.987489 systemd[1]: session-9.scope: Consumed 3.967s CPU time. Sep 13 02:32:19.987831 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Sep 13 02:32:19.988317 systemd-logind[1559]: Removed session 9.
Sep 13 02:32:23.148148 kubelet[2462]: I0913 02:32:23.148111 2462 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 02:32:23.148615 kubelet[2462]: I0913 02:32:23.148594 2462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 02:32:23.148684 env[1567]: time="2025-09-13T02:32:23.148423171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 02:32:23.734224 systemd[1]: Created slice kubepods-besteffort-poddd1ee135_6d70_4e57_9eee_b8855abf52b1.slice. Sep 13 02:32:23.759741 systemd[1]: Created slice kubepods-burstable-pod7bd1272c_6240_4c99_ac1f_a7e07a3d6d92.slice. Sep 13 02:32:23.814768 kubelet[2462]: I0913 02:32:23.814651 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-net\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.814768 kubelet[2462]: I0913 02:32:23.814761 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-kernel\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.814818 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd1ee135-6d70-4e57-9eee-b8855abf52b1-kube-proxy\") pod \"kube-proxy-k8j62\" (UID: \"dd1ee135-6d70-4e57-9eee-b8855abf52b1\") " pod="kube-system/kube-proxy-k8j62" Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.814864 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hostproc\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.814908 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cni-path\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.814950 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-etc-cni-netd\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.814989 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-lib-modules\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk"
Sep 13 02:32:23.815174 kubelet[2462]: I0913 02:32:23.815036 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd1ee135-6d70-4e57-9eee-b8855abf52b1-lib-modules\") pod \"kube-proxy-k8j62\" (UID: \"dd1ee135-6d70-4e57-9eee-b8855abf52b1\") " pod="kube-system/kube-proxy-k8j62" Sep 13 02:32:23.815849 kubelet[2462]: I0913 02:32:23.815077 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bfmh\" (UniqueName: \"kubernetes.io/projected/dd1ee135-6d70-4e57-9eee-b8855abf52b1-kube-api-access-7bfmh\") pod \"kube-proxy-k8j62\" (UID: \"dd1ee135-6d70-4e57-9eee-b8855abf52b1\") " pod="kube-system/kube-proxy-k8j62" Sep 13 02:32:23.815849 kubelet[2462]: I0913 02:32:23.815127 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-run\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815849 kubelet[2462]: I0913 02:32:23.815195 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-cgroup\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815849 kubelet[2462]: I0913 02:32:23.815264 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-clustermesh-secrets\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.815849 kubelet[2462]: I0913 02:32:23.815313 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-config-path\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.816375 kubelet[2462]: I0913 02:32:23.815392 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd1ee135-6d70-4e57-9eee-b8855abf52b1-xtables-lock\") pod \"kube-proxy-k8j62\" (UID: \"dd1ee135-6d70-4e57-9eee-b8855abf52b1\") " pod="kube-system/kube-proxy-k8j62" Sep 13 02:32:23.816375 kubelet[2462]: I0913 02:32:23.815445 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-xtables-lock\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.816375 kubelet[2462]: I0913 02:32:23.815490 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hubble-tls\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk"
Sep 13 02:32:23.816375 kubelet[2462]: I0913 02:32:23.815618 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56g6d\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-kube-api-access-56g6d\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.816375 kubelet[2462]: I0913 02:32:23.815775 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-bpf-maps\") pod \"cilium-qzqqk\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") " pod="kube-system/cilium-qzqqk" Sep 13 02:32:23.917737 kubelet[2462]: I0913 02:32:23.917644 2462 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 13 02:32:24.059058 env[1567]: time="2025-09-13T02:32:24.058808685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8j62,Uid:dd1ee135-6d70-4e57-9eee-b8855abf52b1,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:24.063297 env[1567]: time="2025-09-13T02:32:24.063222312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qzqqk,Uid:7bd1272c-6240-4c99-ac1f-a7e07a3d6d92,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:24.081891 env[1567]: time="2025-09-13T02:32:24.081726882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:24.081891 env[1567]: time="2025-09-13T02:32:24.081834313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:24.081891 env[1567]: time="2025-09-13T02:32:24.081872967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:24.082440 env[1567]: time="2025-09-13T02:32:24.082280382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36e26eaf5e436b1ab8e702585a454b120b61a4a377eec536a088b81ac4b92d9d pid=2619 runtime=io.containerd.runc.v2 Sep 13 02:32:24.088017 env[1567]: time="2025-09-13T02:32:24.087881668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:24.088276 env[1567]: time="2025-09-13T02:32:24.087981889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:24.088276 env[1567]: time="2025-09-13T02:32:24.088032538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:24.088669 env[1567]: time="2025-09-13T02:32:24.088465626Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9 pid=2634 runtime=io.containerd.runc.v2 Sep 13 02:32:24.112694 systemd[1]: Started cri-containerd-36e26eaf5e436b1ab8e702585a454b120b61a4a377eec536a088b81ac4b92d9d.scope. Sep 13 02:32:24.120686 systemd[1]: Started cri-containerd-307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9.scope.
Sep 13 02:32:24.146384 env[1567]: time="2025-09-13T02:32:24.146322031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8j62,Uid:dd1ee135-6d70-4e57-9eee-b8855abf52b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"36e26eaf5e436b1ab8e702585a454b120b61a4a377eec536a088b81ac4b92d9d\"" Sep 13 02:32:24.151012 env[1567]: time="2025-09-13T02:32:24.150940696Z" level=info msg="CreateContainer within sandbox \"36e26eaf5e436b1ab8e702585a454b120b61a4a377eec536a088b81ac4b92d9d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 02:32:24.151641 env[1567]: time="2025-09-13T02:32:24.151520878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qzqqk,Uid:7bd1272c-6240-4c99-ac1f-a7e07a3d6d92,Namespace:kube-system,Attempt:0,} returns sandbox id \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\"" Sep 13 02:32:24.153029 env[1567]: time="2025-09-13T02:32:24.152997169Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 02:32:24.160558 env[1567]: time="2025-09-13T02:32:24.160513732Z" level=info msg="CreateContainer within sandbox \"36e26eaf5e436b1ab8e702585a454b120b61a4a377eec536a088b81ac4b92d9d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7916d0504786339223fc9afdeaa06353941c77e1edb8506dd91f3f607f42d433\"" Sep 13 02:32:24.161029 env[1567]: time="2025-09-13T02:32:24.161003009Z" level=info msg="StartContainer for \"7916d0504786339223fc9afdeaa06353941c77e1edb8506dd91f3f607f42d433\"" Sep 13 02:32:24.175266 systemd[1]: Started cri-containerd-7916d0504786339223fc9afdeaa06353941c77e1edb8506dd91f3f607f42d433.scope. Sep 13 02:32:24.195174 env[1567]: time="2025-09-13T02:32:24.195144853Z" level=info msg="StartContainer for \"7916d0504786339223fc9afdeaa06353941c77e1edb8506dd91f3f607f42d433\" returns successfully" Sep 13 02:32:24.410177 systemd[1]: Created slice kubepods-besteffort-pod1ebac4fd_ec62_4334_b637_3c8198928859.slice. 
Sep 13 02:32:24.421597 kubelet[2462]: I0913 02:32:24.421566 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8q7\" (UniqueName: \"kubernetes.io/projected/1ebac4fd-ec62-4334-b637-3c8198928859-kube-api-access-lx8q7\") pod \"cilium-operator-6c4d7847fc-v8mdk\" (UID: \"1ebac4fd-ec62-4334-b637-3c8198928859\") " pod="kube-system/cilium-operator-6c4d7847fc-v8mdk" Sep 13 02:32:24.421914 kubelet[2462]: I0913 02:32:24.421616 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ebac4fd-ec62-4334-b637-3c8198928859-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v8mdk\" (UID: \"1ebac4fd-ec62-4334-b637-3c8198928859\") " pod="kube-system/cilium-operator-6c4d7847fc-v8mdk" Sep 13 02:32:24.713768 env[1567]: time="2025-09-13T02:32:24.713545020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8mdk,Uid:1ebac4fd-ec62-4334-b637-3c8198928859,Namespace:kube-system,Attempt:0,}" Sep 13 02:32:24.742731 kubelet[2462]: I0913 02:32:24.742622 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8j62" podStartSLOduration=1.742589167 podStartE2EDuration="1.742589167s" podCreationTimestamp="2025-09-13 02:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:24.726228737 +0000 UTC m=+7.150791236" watchObservedRunningTime="2025-09-13 02:32:24.742589167 +0000 UTC m=+7.167151599" Sep 13 02:32:24.850055 env[1567]: time="2025-09-13T02:32:24.849892462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 02:32:24.850055 env[1567]: time="2025-09-13T02:32:24.849996491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 02:32:24.850488 env[1567]: time="2025-09-13T02:32:24.850043620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 02:32:24.850616 env[1567]: time="2025-09-13T02:32:24.850471969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b pid=2861 runtime=io.containerd.runc.v2 Sep 13 02:32:24.875165 systemd[1]: Started cri-containerd-865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b.scope. Sep 13 02:32:24.944368 env[1567]: time="2025-09-13T02:32:24.944328533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8mdk,Uid:1ebac4fd-ec62-4334-b637-3c8198928859,Namespace:kube-system,Attempt:0,} returns sandbox id \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\"" Sep 13 02:32:25.794499 update_engine[1561]: I0913 02:32:25.794384 1561 update_attempter.cc:509] Updating boot flags... Sep 13 02:32:28.290699 systemd[1]: Started sshd@7-145.40.90.231:22-47.251.77.219:56746.service. 
Sep 13 02:32:28.312900 sshd[2938]: Invalid user hive from 47.251.77.219 port 56746 Sep 13 02:32:28.322690 sshd[2938]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:28.322980 sshd[2938]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:28.322998 sshd[2938]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:28.323249 sshd[2938]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:28.608196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1694273286.mount: Deactivated successfully. Sep 13 02:32:28.920673 systemd[1]: Started sshd@8-145.40.90.231:22-47.251.77.219:56758.service. Sep 13 02:32:28.939016 sshd[2941]: Invalid user wang from 47.251.77.219 port 56758 Sep 13 02:32:28.949278 sshd[2941]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:28.949619 sshd[2941]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:28.949637 sshd[2941]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:28.949924 sshd[2941]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:29.553237 systemd[1]: Started sshd@9-145.40.90.231:22-47.251.77.219:55684.service. Sep 13 02:32:29.572941 sshd[2944]: Invalid user mongo from 47.251.77.219 port 55684 Sep 13 02:32:29.580165 sshd[2944]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:29.580469 sshd[2944]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:29.580487 sshd[2944]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:29.580713 sshd[2944]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:29.860897 systemd[1]: Started sshd@10-145.40.90.231:22-47.251.77.219:55696.service. Sep 13 02:32:29.879337 sshd[2947]: Invalid user user from 47.251.77.219 port 55696 Sep 13 02:32:29.887196 sshd[2947]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:29.887554 sshd[2947]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:29.887571 sshd[2947]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:29.887858 sshd[2947]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:30.174530 systemd[1]: Started sshd@11-145.40.90.231:22-47.251.77.219:55706.service. 
Sep 13 02:32:30.196209 sshd[2950]: Invalid user oracle from 47.251.77.219 port 55706 Sep 13 02:32:30.205929 sshd[2950]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:30.206135 sshd[2950]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:30.206153 sshd[2950]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:30.206315 sshd[2950]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:30.405204 env[1567]: time="2025-09-13T02:32:30.405152931Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:30.405766 env[1567]: time="2025-09-13T02:32:30.405709253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:30.406622 env[1567]: time="2025-09-13T02:32:30.406581274Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 02:32:30.407012 env[1567]: time="2025-09-13T02:32:30.406954058Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 02:32:30.407768 env[1567]: time="2025-09-13T02:32:30.407682410Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 02:32:30.408938 env[1567]: time="2025-09-13T02:32:30.408894442Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 02:32:30.414333 env[1567]: time="2025-09-13T02:32:30.414310766Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\"" Sep 13 02:32:30.414674 env[1567]: time="2025-09-13T02:32:30.414660545Z" level=info msg="StartContainer for \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\"" Sep 13 02:32:30.439798 systemd[1]: Started cri-containerd-4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319.scope. Sep 13 02:32:30.450268 env[1567]: time="2025-09-13T02:32:30.450214341Z" level=info msg="StartContainer for \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\" returns successfully" Sep 13 02:32:30.455219 systemd[1]: cri-containerd-4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319.scope: Deactivated successfully. Sep 13 02:32:30.487872 systemd[1]: Started sshd@12-145.40.90.231:22-47.251.77.219:55714.service. 
Sep 13 02:32:30.509725 sshd[2999]: Invalid user gpadmin from 47.251.77.219 port 55714 Sep 13 02:32:30.518834 sshd[2999]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:30.519804 sshd[2999]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:32:30.519898 sshd[2999]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:32:30.520745 sshd[2999]: pam_faillock(sshd:auth): User unknown Sep 13 02:32:30.538512 sshd[2938]: Failed password for invalid user hive from 47.251.77.219 port 56746 ssh2 Sep 13 02:32:30.807656 systemd[1]: Started sshd@13-145.40.90.231:22-47.251.77.219:55726.service. Sep 13 02:32:30.865974 sshd[3002]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root Sep 13 02:32:31.165246 sshd[2941]: Failed password for invalid user wang from 47.251.77.219 port 56758 ssh2 Sep 13 02:32:31.230319 sshd[2941]: Connection closed by invalid user wang 47.251.77.219 port 56758 [preauth] Sep 13 02:32:31.232800 systemd[1]: sshd@8-145.40.90.231:22-47.251.77.219:56758.service: Deactivated successfully. Sep 13 02:32:31.417221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319-rootfs.mount: Deactivated successfully. Sep 13 02:32:31.546582 env[1567]: time="2025-09-13T02:32:31.546436345Z" level=info msg="shim disconnected" id=4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319 Sep 13 02:32:31.546582 env[1567]: time="2025-09-13T02:32:31.546540943Z" level=warning msg="cleaning up after shim disconnected" id=4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319 namespace=k8s.io Sep 13 02:32:31.546582 env[1567]: time="2025-09-13T02:32:31.546568550Z" level=info msg="cleaning up dead shim" Sep 13 02:32:31.562262 env[1567]: time="2025-09-13T02:32:31.562155969Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3005 runtime=io.containerd.runc.v2\n" Sep 13 02:32:31.601314 sshd[2944]: Failed password for invalid user mongo from 47.251.77.219 port 55684 ssh2 Sep 13 02:32:31.737769 env[1567]: time="2025-09-13T02:32:31.737662608Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 02:32:31.746712 systemd[1]: Started sshd@14-145.40.90.231:22-47.251.77.219:55752.service. Sep 13 02:32:31.751114 env[1567]: time="2025-09-13T02:32:31.750997571Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\"" Sep 13 02:32:31.752103 env[1567]: time="2025-09-13T02:32:31.752024306Z" level=info msg="StartContainer for \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\"" Sep 13 02:32:31.781877 systemd[1]: Started cri-containerd-199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2.scope. 
Sep 13 02:32:31.798586 env[1567]: time="2025-09-13T02:32:31.798556293Z" level=info msg="StartContainer for \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\" returns successfully"
Sep 13 02:32:31.798885 sshd[3018]: Invalid user apache from 47.251.77.219 port 55752
Sep 13 02:32:31.807308 sshd[3018]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:31.807629 sshd[3018]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:31.807655 sshd[3018]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:31.807736 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 02:32:31.807920 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 02:32:31.808031 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 02:32:31.809058 systemd[1]: Starting systemd-sysctl.service...
Sep 13 02:32:31.809866 systemd[1]: cri-containerd-199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2.scope: Deactivated successfully.
Sep 13 02:32:31.810867 sshd[3018]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:31.814326 systemd[1]: Finished systemd-sysctl.service.
Sep 13 02:32:31.841943 env[1567]: time="2025-09-13T02:32:31.841877514Z" level=info msg="shim disconnected" id=199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2
Sep 13 02:32:31.841943 env[1567]: time="2025-09-13T02:32:31.841919827Z" level=warning msg="cleaning up after shim disconnected" id=199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2 namespace=k8s.io
Sep 13 02:32:31.841943 env[1567]: time="2025-09-13T02:32:31.841930092Z" level=info msg="cleaning up dead shim"
Sep 13 02:32:31.848266 env[1567]: time="2025-09-13T02:32:31.848234990Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3072 runtime=io.containerd.runc.v2\n"
Sep 13 02:32:31.907537 sshd[2947]: Failed password for invalid user user from 47.251.77.219 port 55696 ssh2
Sep 13 02:32:31.925501 sshd[2938]: Connection closed by invalid user hive 47.251.77.219 port 56746 [preauth]
Sep 13 02:32:31.928192 systemd[1]: sshd@7-145.40.90.231:22-47.251.77.219:56746.service: Deactivated successfully.
Sep 13 02:32:32.063211 systemd[1]: Started sshd@15-145.40.90.231:22-47.251.77.219:55758.service.
Sep 13 02:32:32.127711 sshd[3086]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:32.154229 sshd[2944]: Connection closed by invalid user mongo 47.251.77.219 port 55684 [preauth]
Sep 13 02:32:32.154987 systemd[1]: sshd@9-145.40.90.231:22-47.251.77.219:55684.service: Deactivated successfully.
Sep 13 02:32:32.362083 sshd[2950]: Failed password for invalid user oracle from 47.251.77.219 port 55706 ssh2
Sep 13 02:32:32.413174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2-rootfs.mount: Deactivated successfully.
Sep 13 02:32:32.576681 env[1567]: time="2025-09-13T02:32:32.576630980Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:32:32.577224 env[1567]: time="2025-09-13T02:32:32.577183001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:32:32.577925 env[1567]: time="2025-09-13T02:32:32.577878590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 02:32:32.578644 env[1567]: time="2025-09-13T02:32:32.578599540Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 02:32:32.580199 env[1567]: time="2025-09-13T02:32:32.580185506Z" level=info msg="CreateContainer within sandbox \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 02:32:32.584832 env[1567]: time="2025-09-13T02:32:32.584787990Z" level=info msg="CreateContainer within sandbox \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\""
Sep 13 02:32:32.585233 env[1567]: time="2025-09-13T02:32:32.585172808Z" level=info msg="StartContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\""
Sep 13 02:32:32.593858 systemd[1]: Started cri-containerd-fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70.scope.
Sep 13 02:32:32.604909 env[1567]: time="2025-09-13T02:32:32.604874593Z" level=info msg="StartContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" returns successfully"
Sep 13 02:32:32.676630 sshd[2999]: Failed password for invalid user gpadmin from 47.251.77.219 port 55714 ssh2
Sep 13 02:32:32.739787 env[1567]: time="2025-09-13T02:32:32.739678149Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 02:32:32.757086 env[1567]: time="2025-09-13T02:32:32.756970714Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\""
Sep 13 02:32:32.757817 env[1567]: time="2025-09-13T02:32:32.757745271Z" level=info msg="StartContainer for \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\""
Sep 13 02:32:32.779764 kubelet[2462]: I0913 02:32:32.779656 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v8mdk" podStartSLOduration=1.145738769 podStartE2EDuration="8.779624682s" podCreationTimestamp="2025-09-13 02:32:24 +0000 UTC" firstStartedPulling="2025-09-13 02:32:24.944996174 +0000 UTC m=+7.369558560" lastFinishedPulling="2025-09-13 02:32:32.578882086 +0000 UTC m=+15.003444473" observedRunningTime="2025-09-13 02:32:32.779225139 +0000 UTC m=+15.203787605" watchObservedRunningTime="2025-09-13 02:32:32.779624682 +0000 UTC m=+15.204187108"
Sep 13 02:32:32.793922 systemd[1]: Started cri-containerd-208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168.scope.
Sep 13 02:32:32.831262 env[1567]: time="2025-09-13T02:32:32.831210068Z" level=info msg="StartContainer for \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\" returns successfully"
Sep 13 02:32:32.835695 systemd[1]: cri-containerd-208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168.scope: Deactivated successfully.
Sep 13 02:32:33.003785 env[1567]: time="2025-09-13T02:32:33.003751431Z" level=info msg="shim disconnected" id=208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168
Sep 13 02:32:33.003785 env[1567]: time="2025-09-13T02:32:33.003786730Z" level=warning msg="cleaning up after shim disconnected" id=208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168 namespace=k8s.io
Sep 13 02:32:33.003925 env[1567]: time="2025-09-13T02:32:33.003793555Z" level=info msg="cleaning up dead shim"
Sep 13 02:32:33.007340 env[1567]: time="2025-09-13T02:32:33.007299945Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3184 runtime=io.containerd.runc.v2\n"
Sep 13 02:32:33.021593 sshd[3002]: Failed password for root from 47.251.77.219 port 55726 ssh2
Sep 13 02:32:33.068383 sshd[3002]: Connection closed by authenticating user root 47.251.77.219 port 55726 [preauth]
Sep 13 02:32:33.069304 systemd[1]: sshd@13-145.40.90.231:22-47.251.77.219:55726.service: Deactivated successfully.
Sep 13 02:32:33.243228 sshd[2947]: Connection closed by invalid user user 47.251.77.219 port 55696 [preauth]
Sep 13 02:32:33.244460 systemd[1]: sshd@10-145.40.90.231:22-47.251.77.219:55696.service: Deactivated successfully.
Sep 13 02:32:33.657905 systemd[1]: Started sshd@16-145.40.90.231:22-47.251.77.219:55810.service.
Sep 13 02:32:33.678039 sshd[3199]: Invalid user user1 from 47.251.77.219 port 55810
Sep 13 02:32:33.688297 sshd[3199]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:33.689303 sshd[3199]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:33.689442 sshd[3199]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:33.690321 sshd[3199]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:33.751107 env[1567]: time="2025-09-13T02:32:33.750998260Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 02:32:33.761492 env[1567]: time="2025-09-13T02:32:33.761442709Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\""
Sep 13 02:32:33.761718 env[1567]: time="2025-09-13T02:32:33.761702174Z" level=info msg="StartContainer for \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\""
Sep 13 02:32:33.762713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087482884.mount: Deactivated successfully.
Sep 13 02:32:33.770102 systemd[1]: Started cri-containerd-2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571.scope.
Sep 13 02:32:33.770395 sshd[3018]: Failed password for invalid user apache from 47.251.77.219 port 55752 ssh2
Sep 13 02:32:33.780811 env[1567]: time="2025-09-13T02:32:33.780788721Z" level=info msg="StartContainer for \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\" returns successfully"
Sep 13 02:32:33.781072 systemd[1]: cri-containerd-2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571.scope: Deactivated successfully.
Sep 13 02:32:33.788037 sshd[2999]: Connection closed by invalid user gpadmin 47.251.77.219 port 55714 [preauth]
Sep 13 02:32:33.788706 systemd[1]: sshd@12-145.40.90.231:22-47.251.77.219:55714.service: Deactivated successfully.
Sep 13 02:32:33.791389 env[1567]: time="2025-09-13T02:32:33.791363927Z" level=info msg="shim disconnected" id=2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571
Sep 13 02:32:33.791463 env[1567]: time="2025-09-13T02:32:33.791393126Z" level=warning msg="cleaning up after shim disconnected" id=2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571 namespace=k8s.io
Sep 13 02:32:33.791463 env[1567]: time="2025-09-13T02:32:33.791399234Z" level=info msg="cleaning up dead shim"
Sep 13 02:32:33.794758 env[1567]: time="2025-09-13T02:32:33.794742215Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3243 runtime=io.containerd.runc.v2\n"
Sep 13 02:32:33.841165 sshd[2950]: Connection closed by invalid user oracle 47.251.77.219 port 55706 [preauth]
Sep 13 02:32:33.842408 systemd[1]: sshd@11-145.40.90.231:22-47.251.77.219:55706.service: Deactivated successfully.
Sep 13 02:32:33.982390 systemd[1]: Started sshd@17-145.40.90.231:22-47.251.77.219:55820.service.
Sep 13 02:32:34.043297 sshd[3257]: Invalid user hadoop from 47.251.77.219 port 55820
Sep 13 02:32:34.051046 sshd[3257]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:34.052085 sshd[3257]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:34.052167 sshd[3257]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:34.053069 sshd[3257]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:34.417886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571-rootfs.mount: Deactivated successfully.
Sep 13 02:32:34.559834 sshd[3086]: Failed password for root from 47.251.77.219 port 55758 ssh2
Sep 13 02:32:34.759670 env[1567]: time="2025-09-13T02:32:34.759545882Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 02:32:34.779139 env[1567]: time="2025-09-13T02:32:34.778972487Z" level=info msg="CreateContainer within sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\""
Sep 13 02:32:34.780118 env[1567]: time="2025-09-13T02:32:34.780028278Z" level=info msg="StartContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\""
Sep 13 02:32:34.807539 systemd[1]: Started cri-containerd-263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50.scope.
Sep 13 02:32:34.833141 env[1567]: time="2025-09-13T02:32:34.833052334Z" level=info msg="StartContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" returns successfully"
Sep 13 02:32:34.910446 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 02:32:34.929659 kubelet[2462]: I0913 02:32:34.929615 2462 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 02:32:34.944857 systemd[1]: Created slice kubepods-burstable-pod99da99e7_8654_4fdc_a8fc_174d7fdfd9fb.slice.
Sep 13 02:32:34.946809 systemd[1]: Created slice kubepods-burstable-podb18ae966_57da_44ce_b7fb_02b27246857d.slice.
Sep 13 02:32:34.991511 kubelet[2462]: I0913 02:32:34.991489 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99da99e7-8654-4fdc-a8fc-174d7fdfd9fb-config-volume\") pod \"coredns-674b8bbfcf-rmrsn\" (UID: \"99da99e7-8654-4fdc-a8fc-174d7fdfd9fb\") " pod="kube-system/coredns-674b8bbfcf-rmrsn"
Sep 13 02:32:34.991511 kubelet[2462]: I0913 02:32:34.991513 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b18ae966-57da-44ce-b7fb-02b27246857d-config-volume\") pod \"coredns-674b8bbfcf-wx4lh\" (UID: \"b18ae966-57da-44ce-b7fb-02b27246857d\") " pod="kube-system/coredns-674b8bbfcf-wx4lh"
Sep 13 02:32:34.991636 kubelet[2462]: I0913 02:32:34.991522 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjwc\" (UniqueName: \"kubernetes.io/projected/b18ae966-57da-44ce-b7fb-02b27246857d-kube-api-access-8mjwc\") pod \"coredns-674b8bbfcf-wx4lh\" (UID: \"b18ae966-57da-44ce-b7fb-02b27246857d\") " pod="kube-system/coredns-674b8bbfcf-wx4lh"
Sep 13 02:32:34.991636 kubelet[2462]: I0913 02:32:34.991534 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdr78\" (UniqueName: \"kubernetes.io/projected/99da99e7-8654-4fdc-a8fc-174d7fdfd9fb-kube-api-access-rdr78\") pod \"coredns-674b8bbfcf-rmrsn\" (UID: \"99da99e7-8654-4fdc-a8fc-174d7fdfd9fb\") " pod="kube-system/coredns-674b8bbfcf-rmrsn"
Sep 13 02:32:35.068444 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Sep 13 02:32:35.245152 systemd[1]: Started sshd@18-145.40.90.231:22-47.251.77.219:55846.service.
Sep 13 02:32:35.247802 env[1567]: time="2025-09-13T02:32:35.247705802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmrsn,Uid:99da99e7-8654-4fdc-a8fc-174d7fdfd9fb,Namespace:kube-system,Attempt:0,}"
Sep 13 02:32:35.248912 env[1567]: time="2025-09-13T02:32:35.248776051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wx4lh,Uid:b18ae966-57da-44ce-b7fb-02b27246857d,Namespace:kube-system,Attempt:0,}"
Sep 13 02:32:35.284209 sshd[3018]: Connection closed by invalid user apache 47.251.77.219 port 55752 [preauth]
Sep 13 02:32:35.285445 systemd[1]: sshd@14-145.40.90.231:22-47.251.77.219:55752.service: Deactivated successfully.
Sep 13 02:32:35.300696 sshd[3428]: Invalid user developer from 47.251.77.219 port 55846
Sep 13 02:32:35.310394 sshd[3428]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:35.310757 sshd[3428]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:35.310814 sshd[3428]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:35.311134 sshd[3428]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:35.554702 systemd[1]: Started sshd@19-145.40.90.231:22-47.251.77.219:55856.service.
Sep 13 02:32:35.587877 sshd[3459]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:35.757693 sshd[3257]: Failed password for invalid user hadoop from 47.251.77.219 port 55820 ssh2
Sep 13 02:32:35.796906 kubelet[2462]: I0913 02:32:35.796819 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qzqqk" podStartSLOduration=6.5418594070000005 podStartE2EDuration="12.796791736s" podCreationTimestamp="2025-09-13 02:32:23 +0000 UTC" firstStartedPulling="2025-09-13 02:32:24.152665763 +0000 UTC m=+6.577228168" lastFinishedPulling="2025-09-13 02:32:30.407598111 +0000 UTC m=+12.832160497" observedRunningTime="2025-09-13 02:32:35.79605349 +0000 UTC m=+18.220615927" watchObservedRunningTime="2025-09-13 02:32:35.796791736 +0000 UTC m=+18.221354154"
Sep 13 02:32:35.866073 systemd[1]: Started sshd@20-145.40.90.231:22-47.251.77.219:55860.service.
Sep 13 02:32:35.892087 sshd[3462]: Invalid user mysql from 47.251.77.219 port 55860
Sep 13 02:32:35.902610 sshd[3462]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:35.902914 sshd[3462]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:35.902940 sshd[3462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:35.903250 sshd[3462]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:35.988571 sshd[3257]: Connection closed by invalid user hadoop 47.251.77.219 port 55820 [preauth]
Sep 13 02:32:35.991158 systemd[1]: sshd@17-145.40.90.231:22-47.251.77.219:55820.service: Deactivated successfully.
Sep 13 02:32:36.257593 sshd[3199]: Failed password for invalid user user1 from 47.251.77.219 port 55810 ssh2
Sep 13 02:32:36.503314 systemd[1]: Started sshd@21-145.40.90.231:22-47.251.77.219:55888.service.
Sep 13 02:32:36.528959 sshd[3466]: Invalid user tom from 47.251.77.219 port 55888
Sep 13 02:32:36.529248 sshd[3086]: Connection closed by authenticating user root 47.251.77.219 port 55758 [preauth]
Sep 13 02:32:36.530007 systemd[1]: sshd@15-145.40.90.231:22-47.251.77.219:55758.service: Deactivated successfully.
Sep 13 02:32:36.538904 sshd[3466]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:36.539248 sshd[3466]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:36.539276 sshd[3466]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:36.539633 sshd[3466]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:36.664069 systemd-networkd[1319]: cilium_host: Link UP
Sep 13 02:32:36.664190 systemd-networkd[1319]: cilium_net: Link UP
Sep 13 02:32:36.671229 systemd-networkd[1319]: cilium_net: Gained carrier
Sep 13 02:32:36.678444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Sep 13 02:32:36.678479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Sep 13 02:32:36.678493 systemd-networkd[1319]: cilium_host: Gained carrier
Sep 13 02:32:36.723614 systemd-networkd[1319]: cilium_vxlan: Link UP
Sep 13 02:32:36.723619 systemd-networkd[1319]: cilium_vxlan: Gained carrier
Sep 13 02:32:36.857365 kernel: NET: Registered PF_ALG protocol family
Sep 13 02:32:36.926456 systemd-networkd[1319]: cilium_net: Gained IPv6LL
Sep 13 02:32:37.314319 systemd-networkd[1319]: lxc_health: Link UP
Sep 13 02:32:37.339380 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 02:32:37.339438 systemd-networkd[1319]: lxc_health: Gained carrier
Sep 13 02:32:37.456311 systemd[1]: Started sshd@22-145.40.90.231:22-47.251.77.219:55924.service.
Sep 13 02:32:37.462506 systemd-networkd[1319]: cilium_host: Gained IPv6LL
Sep 13 02:32:37.470570 sshd[3199]: Connection closed by invalid user user1 47.251.77.219 port 55810 [preauth]
Sep 13 02:32:37.471157 systemd[1]: sshd@16-145.40.90.231:22-47.251.77.219:55810.service: Deactivated successfully.
Sep 13 02:32:37.486444 sshd[3428]: Failed password for invalid user developer from 47.251.77.219 port 55846 ssh2
Sep 13 02:32:37.490702 sshd[3895]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:37.763849 sshd[3459]: Failed password for root from 47.251.77.219 port 55856 ssh2
Sep 13 02:32:37.788851 systemd-networkd[1319]: lxccdca477630b5: Link UP
Sep 13 02:32:37.790105 sshd[3459]: Connection closed by authenticating user root 47.251.77.219 port 55856 [preauth]
Sep 13 02:32:37.790705 systemd[1]: sshd@19-145.40.90.231:22-47.251.77.219:55856.service: Deactivated successfully.
Sep 13 02:32:37.811369 kernel: eth0: renamed from tmp7503c
Sep 13 02:32:37.849881 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 13 02:32:37.849944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccdca477630b5: link becomes ready
Sep 13 02:32:37.849962 kernel: eth0: renamed from tmp9cf28
Sep 13 02:32:37.874713 systemd-networkd[1319]: lxcfccd1c867398: Link UP
Sep 13 02:32:37.882402 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfccd1c867398: link becomes ready
Sep 13 02:32:37.882685 systemd-networkd[1319]: lxccdca477630b5: Gained carrier
Sep 13 02:32:37.882835 systemd-networkd[1319]: lxcfccd1c867398: Gained carrier
Sep 13 02:32:38.078521 sshd[3462]: Failed password for invalid user mysql from 47.251.77.219 port 55860 ssh2
Sep 13 02:32:38.102472 systemd-networkd[1319]: cilium_vxlan: Gained IPv6LL
Sep 13 02:32:38.395011 systemd[1]: Started sshd@23-145.40.90.231:22-47.251.77.219:55952.service.
Sep 13 02:32:38.424040 sshd[3931]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:38.424086 sshd[3931]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Sep 13 02:32:38.851331 sshd[3466]: Failed password for invalid user tom from 47.251.77.219 port 55888 ssh2
Sep 13 02:32:39.031998 systemd[1]: Started sshd@24-145.40.90.231:22-47.251.77.219:55958.service.
Sep 13 02:32:39.050425 sshd[3936]: Invalid user apache from 47.251.77.219 port 55958
Sep 13 02:32:39.059037 sshd[3936]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:39.059257 sshd[3936]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:39.059273 sshd[3936]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:39.059506 sshd[3936]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:39.274447 sshd[3895]: Failed password for root from 47.251.77.219 port 55924 ssh2
Sep 13 02:32:39.318472 systemd-networkd[1319]: lxc_health: Gained IPv6LL
Sep 13 02:32:39.345111 systemd[1]: Started sshd@25-145.40.90.231:22-47.251.77.219:55970.service.
Sep 13 02:32:39.373007 sshd[3942]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:39.510473 systemd-networkd[1319]: lxcfccd1c867398: Gained IPv6LL
Sep 13 02:32:39.531860 sshd[3428]: Connection closed by invalid user developer 47.251.77.219 port 55846 [preauth]
Sep 13 02:32:39.532641 systemd[1]: sshd@18-145.40.90.231:22-47.251.77.219:55846.service: Deactivated successfully.
Sep 13 02:32:39.693922 sshd[3895]: Connection closed by authenticating user root 47.251.77.219 port 55924 [preauth]
Sep 13 02:32:39.694650 systemd[1]: sshd@22-145.40.90.231:22-47.251.77.219:55924.service: Deactivated successfully.
Sep 13 02:32:39.894503 systemd-networkd[1319]: lxccdca477630b5: Gained IPv6LL
Sep 13 02:32:39.990886 systemd[1]: Started sshd@26-145.40.90.231:22-47.251.77.219:40674.service.
Sep 13 02:32:40.012764 sshd[3931]: Failed password for root from 47.251.77.219 port 55952 ssh2
Sep 13 02:32:40.013245 sshd[3947]: Invalid user esuser from 47.251.77.219 port 40674
Sep 13 02:32:40.024637 sshd[3947]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:40.024829 sshd[3947]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:40.024848 sshd[3947]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:40.025012 sshd[3947]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:40.089335 sshd[3462]: Connection closed by invalid user mysql 47.251.77.219 port 55860 [preauth]
Sep 13 02:32:40.090054 systemd[1]: sshd@20-145.40.90.231:22-47.251.77.219:55860.service: Deactivated successfully.
Sep 13 02:32:40.166963 env[1567]: time="2025-09-13T02:32:40.166895417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:32:40.166963 env[1567]: time="2025-09-13T02:32:40.166915960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:32:40.166963 env[1567]: time="2025-09-13T02:32:40.166922708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:32:40.167195 env[1567]: time="2025-09-13T02:32:40.166987270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cf289cb8d1114ab7c56bd6f7b3fd9f4ba8c8ced174148860295e6b031fce11c pid=3968 runtime=io.containerd.runc.v2
Sep 13 02:32:40.167195 env[1567]: time="2025-09-13T02:32:40.167129057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:32:40.167195 env[1567]: time="2025-09-13T02:32:40.167149750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:32:40.167195 env[1567]: time="2025-09-13T02:32:40.167165047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:32:40.167270 env[1567]: time="2025-09-13T02:32:40.167229168Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7503c07f8c4c7b07c7111a33796267f16a8c1c47f2177abcd7e4b20dec1353af pid=3972 runtime=io.containerd.runc.v2
Sep 13 02:32:40.175179 systemd[1]: Started cri-containerd-7503c07f8c4c7b07c7111a33796267f16a8c1c47f2177abcd7e4b20dec1353af.scope.
Sep 13 02:32:40.175915 systemd[1]: Started cri-containerd-9cf289cb8d1114ab7c56bd6f7b3fd9f4ba8c8ced174148860295e6b031fce11c.scope.
Sep 13 02:32:40.196204 env[1567]: time="2025-09-13T02:32:40.196178536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wx4lh,Uid:b18ae966-57da-44ce-b7fb-02b27246857d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cf289cb8d1114ab7c56bd6f7b3fd9f4ba8c8ced174148860295e6b031fce11c\""
Sep 13 02:32:40.197763 env[1567]: time="2025-09-13T02:32:40.197741065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rmrsn,Uid:99da99e7-8654-4fdc-a8fc-174d7fdfd9fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7503c07f8c4c7b07c7111a33796267f16a8c1c47f2177abcd7e4b20dec1353af\""
Sep 13 02:32:40.198384 env[1567]: time="2025-09-13T02:32:40.198369614Z" level=info msg="CreateContainer within sandbox \"9cf289cb8d1114ab7c56bd6f7b3fd9f4ba8c8ced174148860295e6b031fce11c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 02:32:40.199231 env[1567]: time="2025-09-13T02:32:40.199216829Z" level=info msg="CreateContainer within sandbox \"7503c07f8c4c7b07c7111a33796267f16a8c1c47f2177abcd7e4b20dec1353af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 02:32:40.203539 env[1567]: time="2025-09-13T02:32:40.203495385Z" level=info msg="CreateContainer within sandbox \"9cf289cb8d1114ab7c56bd6f7b3fd9f4ba8c8ced174148860295e6b031fce11c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5b9f43e89df315c389d5fb03bebe42ff3ef60ab2854a130790cf2cd07d65195\""
Sep 13 02:32:40.203729 env[1567]: time="2025-09-13T02:32:40.203714173Z" level=info msg="StartContainer for \"e5b9f43e89df315c389d5fb03bebe42ff3ef60ab2854a130790cf2cd07d65195\""
Sep 13 02:32:40.204403 env[1567]: time="2025-09-13T02:32:40.204386953Z" level=info msg="CreateContainer within sandbox \"7503c07f8c4c7b07c7111a33796267f16a8c1c47f2177abcd7e4b20dec1353af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8096226391ef0ac5a57779533a6714cbfb808789aa596550553ef2210a5ae83\""
Sep 13 02:32:40.204564 env[1567]: time="2025-09-13T02:32:40.204551809Z" level=info msg="StartContainer for \"b8096226391ef0ac5a57779533a6714cbfb808789aa596550553ef2210a5ae83\""
Sep 13 02:32:40.211680 systemd[1]: Started cri-containerd-b8096226391ef0ac5a57779533a6714cbfb808789aa596550553ef2210a5ae83.scope.
Sep 13 02:32:40.212318 systemd[1]: Started cri-containerd-e5b9f43e89df315c389d5fb03bebe42ff3ef60ab2854a130790cf2cd07d65195.scope.
Sep 13 02:32:40.244177 env[1567]: time="2025-09-13T02:32:40.244122243Z" level=info msg="StartContainer for \"b8096226391ef0ac5a57779533a6714cbfb808789aa596550553ef2210a5ae83\" returns successfully"
Sep 13 02:32:40.244177 env[1567]: time="2025-09-13T02:32:40.244122264Z" level=info msg="StartContainer for \"e5b9f43e89df315c389d5fb03bebe42ff3ef60ab2854a130790cf2cd07d65195\" returns successfully"
Sep 13 02:32:40.402821 sshd[3466]: Connection closed by invalid user tom 47.251.77.219 port 55888 [preauth]
Sep 13 02:32:40.405268 systemd[1]: sshd@21-145.40.90.231:22-47.251.77.219:55888.service: Deactivated successfully.
Sep 13 02:32:40.619275 systemd[1]: Started sshd@27-145.40.90.231:22-47.251.77.219:40688.service.
Sep 13 02:32:40.626563 sshd[3931]: Connection closed by authenticating user root 47.251.77.219 port 55952 [preauth]
Sep 13 02:32:40.627234 systemd[1]: sshd@23-145.40.90.231:22-47.251.77.219:55952.service: Deactivated successfully.
Sep 13 02:32:40.643418 sshd[4128]: Invalid user git from 47.251.77.219 port 40688
Sep 13 02:32:40.650750 sshd[4128]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:40.651038 sshd[4128]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:40.651062 sshd[4128]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:40.651330 sshd[4128]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:40.789965 kubelet[2462]: I0913 02:32:40.789831 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wx4lh" podStartSLOduration=16.789794407 podStartE2EDuration="16.789794407s" podCreationTimestamp="2025-09-13 02:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:40.789739943 +0000 UTC m=+23.214302420" watchObservedRunningTime="2025-09-13 02:32:40.789794407 +0000 UTC m=+23.214356833"
Sep 13 02:32:40.804920 kubelet[2462]: I0913 02:32:40.804886 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rmrsn" podStartSLOduration=16.804872622 podStartE2EDuration="16.804872622s" podCreationTimestamp="2025-09-13 02:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:32:40.804517119 +0000 UTC m=+23.229079505" watchObservedRunningTime="2025-09-13 02:32:40.804872622 +0000 UTC m=+23.229435004"
Sep 13 02:32:40.925639 systemd[1]: Started sshd@28-145.40.90.231:22-47.251.77.219:40696.service.
Sep 13 02:32:40.977399 sshd[4137]: Invalid user postgres from 47.251.77.219 port 40696
Sep 13 02:32:40.985758 sshd[4137]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:40.986675 sshd[4137]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:40.986752 sshd[4137]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:40.987584 sshd[4137]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:41.786600 sshd[3936]: Failed password for invalid user apache from 47.251.77.219 port 55958 ssh2
Sep 13 02:32:42.100974 sshd[3942]: Failed password for root from 47.251.77.219 port 55970 ssh2
Sep 13 02:32:42.183322 systemd[1]: Started sshd@29-145.40.90.231:22-47.251.77.219:40748.service.
Sep 13 02:32:42.203070 sshd[4140]: Invalid user plexserver from 47.251.77.219 port 40748
Sep 13 02:32:42.213514 sshd[4140]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:42.213866 sshd[4140]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:42.213894 sshd[4140]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:42.214168 sshd[4140]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:42.220565 sshd[3947]: Failed password for invalid user esuser from 47.251.77.219 port 40674 ssh2
Sep 13 02:32:42.311906 sshd[3947]: Connection closed by invalid user esuser 47.251.77.219 port 40674 [preauth]
Sep 13 02:32:42.314285 systemd[1]: sshd@26-145.40.90.231:22-47.251.77.219:40674.service: Deactivated successfully.
Sep 13 02:32:42.536428 sshd[3936]: Connection closed by invalid user apache 47.251.77.219 port 55958 [preauth]
Sep 13 02:32:42.539030 systemd[1]: sshd@24-145.40.90.231:22-47.251.77.219:55958.service: Deactivated successfully.
Sep 13 02:32:42.799362 systemd[1]: Started sshd@30-145.40.90.231:22-47.251.77.219:40762.service.
Sep 13 02:32:42.818772 sshd[4145]: Invalid user app from 47.251.77.219 port 40762
Sep 13 02:32:42.828261 sshd[4145]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:42.828552 sshd[4145]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:42.828573 sshd[4145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:42.828817 sshd[4145]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:42.846605 sshd[4128]: Failed password for invalid user git from 47.251.77.219 port 40688 ssh2
Sep 13 02:32:43.183937 sshd[4137]: Failed password for invalid user postgres from 47.251.77.219 port 40696 ssh2
Sep 13 02:32:43.240655 sshd[4128]: Connection closed by invalid user git 47.251.77.219 port 40688 [preauth]
Sep 13 02:32:43.243338 systemd[1]: sshd@27-145.40.90.231:22-47.251.77.219:40688.service: Deactivated successfully.
Sep 13 02:32:43.436412 systemd[1]: Started sshd@31-145.40.90.231:22-47.251.77.219:40782.service.
Sep 13 02:32:43.458157 sshd[4149]: Invalid user lighthouse from 47.251.77.219 port 40782
Sep 13 02:32:43.467496 sshd[4149]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:43.467741 sshd[4149]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:43.467761 sshd[4149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:43.468022 sshd[4149]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:43.750660 systemd[1]: Started sshd@32-145.40.90.231:22-47.251.77.219:40796.service.
Sep 13 02:32:43.769891 sshd[4152]: Invalid user mysql from 47.251.77.219 port 40796
Sep 13 02:32:43.775353 sshd[3942]: Connection closed by authenticating user root 47.251.77.219 port 55970 [preauth]
Sep 13 02:32:43.776011 systemd[1]: sshd@25-145.40.90.231:22-47.251.77.219:55970.service: Deactivated successfully.
Sep 13 02:32:43.779016 sshd[4152]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:43.779260 sshd[4152]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:43.779282 sshd[4152]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:43.779537 sshd[4152]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:44.067491 systemd[1]: Started sshd@33-145.40.90.231:22-47.251.77.219:40806.service.
Sep 13 02:32:44.099481 sshd[4156]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:44.374068 systemd[1]: Started sshd@34-145.40.90.231:22-47.251.77.219:40808.service.
Sep 13 02:32:44.394693 sshd[4159]: Invalid user gpadmin from 47.251.77.219 port 40808
Sep 13 02:32:44.402675 sshd[4159]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:44.402931 sshd[4159]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:44.402953 sshd[4159]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:44.403177 sshd[4159]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:44.682700 systemd[1]: Started sshd@35-145.40.90.231:22-47.251.77.219:40818.service.
Sep 13 02:32:44.686020 sshd[4140]: Failed password for invalid user plexserver from 47.251.77.219 port 40748 ssh2
Sep 13 02:32:44.707278 sshd[4162]: Invalid user oracle from 47.251.77.219 port 40818
Sep 13 02:32:44.717344 sshd[4162]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:44.717652 sshd[4162]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:44.717678 sshd[4162]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:44.717966 sshd[4162]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:44.996483 systemd[1]: Started sshd@36-145.40.90.231:22-47.251.77.219:40824.service.
Sep 13 02:32:45.024435 sshd[4165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:45.301013 sshd[4145]: Failed password for invalid user app from 47.251.77.219 port 40762 ssh2
Sep 13 02:32:45.311559 systemd[1]: Started sshd@37-145.40.90.231:22-47.251.77.219:40836.service.
Sep 13 02:32:45.335520 sshd[4168]: Invalid user www from 47.251.77.219 port 40836
Sep 13 02:32:45.343168 sshd[4168]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:45.343507 sshd[4168]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:45.343531 sshd[4168]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:45.343785 sshd[4168]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:45.353111 sshd[4137]: Connection closed by invalid user postgres 47.251.77.219 port 40696 [preauth]
Sep 13 02:32:45.353913 systemd[1]: sshd@28-145.40.90.231:22-47.251.77.219:40696.service: Deactivated successfully.
Sep 13 02:32:45.408412 sshd[4149]: Failed password for invalid user lighthouse from 47.251.77.219 port 40782 ssh2
Sep 13 02:32:45.431819 sshd[4149]: Connection closed by invalid user lighthouse 47.251.77.219 port 40782 [preauth]
Sep 13 02:32:45.434437 systemd[1]: sshd@31-145.40.90.231:22-47.251.77.219:40782.service: Deactivated successfully.
Sep 13 02:32:45.720027 sshd[4152]: Failed password for invalid user mysql from 47.251.77.219 port 40796 ssh2
Sep 13 02:32:45.872745 sshd[4152]: Connection closed by invalid user mysql 47.251.77.219 port 40796 [preauth]
Sep 13 02:32:45.875268 systemd[1]: sshd@32-145.40.90.231:22-47.251.77.219:40796.service: Deactivated successfully.
Sep 13 02:32:46.281826 sshd[4140]: Connection closed by invalid user plexserver 47.251.77.219 port 40748 [preauth]
Sep 13 02:32:46.284397 systemd[1]: sshd@29-145.40.90.231:22-47.251.77.219:40748.service: Deactivated successfully.
Sep 13 02:32:46.511651 sshd[4156]: Failed password for root from 47.251.77.219 port 40806 ssh2
Sep 13 02:32:46.563453 systemd[1]: Started sshd@38-145.40.90.231:22-47.251.77.219:40882.service.
Sep 13 02:32:46.579706 sshd[4175]: Invalid user admin from 47.251.77.219 port 40882
Sep 13 02:32:46.590213 sshd[4175]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:46.590538 sshd[4175]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:46.590562 sshd[4175]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:46.590817 sshd[4175]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:46.752872 sshd[4145]: Connection closed by invalid user app 47.251.77.219 port 40762 [preauth]
Sep 13 02:32:46.755472 systemd[1]: sshd@30-145.40.90.231:22-47.251.77.219:40762.service: Deactivated successfully.
Sep 13 02:32:46.814609 sshd[4159]: Failed password for invalid user gpadmin from 47.251.77.219 port 40808 ssh2
Sep 13 02:32:47.130065 sshd[4162]: Failed password for invalid user oracle from 47.251.77.219 port 40818 ssh2
Sep 13 02:32:47.240379 sshd[4165]: Failed password for root from 47.251.77.219 port 40824 ssh2
Sep 13 02:32:47.560202 sshd[4168]: Failed password for invalid user www from 47.251.77.219 port 40836 ssh2
Sep 13 02:32:47.672994 sshd[4159]: Connection closed by invalid user gpadmin 47.251.77.219 port 40808 [preauth]
Sep 13 02:32:47.675602 systemd[1]: sshd@34-145.40.90.231:22-47.251.77.219:40808.service: Deactivated successfully.
Sep 13 02:32:47.848502 systemd[1]: Started sshd@39-145.40.90.231:22-47.251.77.219:40922.service.
Sep 13 02:32:47.878280 sshd[4181]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:48.099741 sshd[4168]: Connection closed by invalid user www 47.251.77.219 port 40836 [preauth]
Sep 13 02:32:48.102450 systemd[1]: sshd@37-145.40.90.231:22-47.251.77.219:40836.service: Deactivated successfully.
Sep 13 02:32:48.165162 systemd[1]: Started sshd@40-145.40.90.231:22-47.251.77.219:40932.service.
Sep 13 02:32:48.185039 sshd[4185]: Invalid user guest from 47.251.77.219 port 40932
Sep 13 02:32:48.193398 sshd[4185]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:48.194520 sshd[4185]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:48.194607 sshd[4185]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:48.195557 sshd[4185]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:48.275527 sshd[4175]: Failed password for invalid user admin from 47.251.77.219 port 40882 ssh2
Sep 13 02:32:48.353179 sshd[4162]: Connection closed by invalid user oracle 47.251.77.219 port 40818 [preauth]
Sep 13 02:32:48.355712 systemd[1]: sshd@35-145.40.90.231:22-47.251.77.219:40818.service: Deactivated successfully.
Sep 13 02:32:48.502690 sshd[4156]: Connection closed by authenticating user root 47.251.77.219 port 40806 [preauth]
Sep 13 02:32:48.505417 systemd[1]: sshd@33-145.40.90.231:22-47.251.77.219:40806.service: Deactivated successfully.
Sep 13 02:32:48.941475 sshd[4175]: Connection closed by invalid user admin 47.251.77.219 port 40882 [preauth]
Sep 13 02:32:48.944118 systemd[1]: sshd@38-145.40.90.231:22-47.251.77.219:40882.service: Deactivated successfully.
Sep 13 02:32:49.139873 systemd[1]: Started sshd@41-145.40.90.231:22-47.251.77.219:40952.service.
Sep 13 02:32:49.161000 sshd[4191]: Invalid user jumpserver from 47.251.77.219 port 40952
Sep 13 02:32:49.170712 sshd[4191]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:49.170993 sshd[4191]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:49.171016 sshd[4191]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:49.171285 sshd[4191]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:49.426912 sshd[4165]: Connection closed by authenticating user root 47.251.77.219 port 40824 [preauth]
Sep 13 02:32:49.429532 systemd[1]: sshd@36-145.40.90.231:22-47.251.77.219:40824.service: Deactivated successfully.
Sep 13 02:32:49.767480 systemd[1]: Started sshd@42-145.40.90.231:22-47.251.77.219:52062.service.
Sep 13 02:32:49.794843 sshd[4195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:49.824723 sshd[4185]: Failed password for invalid user guest from 47.251.77.219 port 40932 ssh2
Sep 13 02:32:50.033797 sshd[4181]: Failed password for root from 47.251.77.219 port 40922 ssh2
Sep 13 02:32:50.080255 sshd[4181]: Connection closed by authenticating user root 47.251.77.219 port 40922 [preauth]
Sep 13 02:32:50.082899 systemd[1]: sshd@39-145.40.90.231:22-47.251.77.219:40922.service: Deactivated successfully.
Sep 13 02:32:50.087987 systemd[1]: Started sshd@43-145.40.90.231:22-47.251.77.219:52070.service.
Sep 13 02:32:50.108918 sshd[4199]: Invalid user git from 47.251.77.219 port 52070
Sep 13 02:32:50.115517 sshd[4199]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:50.115818 sshd[4199]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:50.115842 sshd[4199]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:50.116068 sshd[4199]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:50.391520 systemd[1]: Started sshd@44-145.40.90.231:22-47.251.77.219:52078.service.
Sep 13 02:32:50.412949 sshd[4202]: Invalid user ranger from 47.251.77.219 port 52078
Sep 13 02:32:50.421002 sshd[4202]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:50.422213 sshd[4202]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:50.422306 sshd[4202]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:50.423312 sshd[4202]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:50.935964 sshd[4191]: Failed password for invalid user jumpserver from 47.251.77.219 port 40952 ssh2
Sep 13 02:32:51.203226 sshd[4185]: Connection closed by invalid user guest 47.251.77.219 port 40932 [preauth]
Sep 13 02:32:51.205761 systemd[1]: sshd@40-145.40.90.231:22-47.251.77.219:40932.service: Deactivated successfully.
Sep 13 02:32:51.345091 systemd[1]: Started sshd@45-145.40.90.231:22-47.251.77.219:52112.service.
Sep 13 02:32:51.368180 sshd[4210]: Invalid user tom from 47.251.77.219 port 52112
Sep 13 02:32:51.378244 sshd[4210]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:51.378585 sshd[4210]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:51.378618 sshd[4210]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:51.378940 sshd[4210]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:51.559758 sshd[4195]: Failed password for root from 47.251.77.219 port 52062 ssh2
Sep 13 02:32:51.685046 sshd[4199]: Failed password for invalid user git from 47.251.77.219 port 52070 ssh2
Sep 13 02:32:51.966211 systemd[1]: Started sshd@46-145.40.90.231:22-47.251.77.219:52134.service.
Sep 13 02:32:51.988850 sshd[4213]: Invalid user ubuntu from 47.251.77.219 port 52134
Sep 13 02:32:51.992331 sshd[4202]: Failed password for invalid user ranger from 47.251.77.219 port 52078 ssh2
Sep 13 02:32:51.996987 sshd[4195]: Connection closed by authenticating user root 47.251.77.219 port 52062 [preauth]
Sep 13 02:32:51.997687 systemd[1]: sshd@42-145.40.90.231:22-47.251.77.219:52062.service: Deactivated successfully.
Sep 13 02:32:52.000058 sshd[4213]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:52.000350 sshd[4213]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:52.000395 sshd[4213]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:52.000690 sshd[4213]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:52.520687 sshd[4202]: Connection closed by invalid user ranger 47.251.77.219 port 52078 [preauth]
Sep 13 02:32:52.523277 systemd[1]: sshd@44-145.40.90.231:22-47.251.77.219:52078.service: Deactivated successfully.
Sep 13 02:32:52.598112 sshd[4191]: Connection closed by invalid user jumpserver 47.251.77.219 port 40952 [preauth]
Sep 13 02:32:52.600712 systemd[1]: sshd@41-145.40.90.231:22-47.251.77.219:40952.service: Deactivated successfully.
Sep 13 02:32:52.704923 sshd[4199]: Connection closed by invalid user git 47.251.77.219 port 52070 [preauth]
Sep 13 02:32:52.707580 systemd[1]: sshd@43-145.40.90.231:22-47.251.77.219:52070.service: Deactivated successfully.
Sep 13 02:32:52.932677 systemd[1]: Started sshd@47-145.40.90.231:22-47.251.77.219:52154.service.
Sep 13 02:32:52.952353 sshd[4220]: Invalid user rancher from 47.251.77.219 port 52154
Sep 13 02:32:52.961488 sshd[4220]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:52.961760 sshd[4220]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:52.961784 sshd[4220]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:52.962035 sshd[4220]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:53.246093 systemd[1]: Started sshd@48-145.40.90.231:22-47.251.77.219:52160.service.
Sep 13 02:32:53.277521 sshd[4225]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:54.086725 sshd[4210]: Failed password for invalid user tom from 47.251.77.219 port 52112 ssh2
Sep 13 02:32:54.708669 sshd[4213]: Failed password for invalid user ubuntu from 47.251.77.219 port 52134 ssh2
Sep 13 02:32:54.857297 systemd[1]: Started sshd@49-145.40.90.231:22-47.251.77.219:52194.service.
Sep 13 02:32:54.886695 sshd[4230]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:32:55.138238 sshd[4220]: Failed password for invalid user rancher from 47.251.77.219 port 52154 ssh2
Sep 13 02:32:55.179657 systemd[1]: Started sshd@50-145.40.90.231:22-47.251.77.219:52196.service.
Sep 13 02:32:55.200874 sshd[4233]: Invalid user uftp from 47.251.77.219 port 52196
Sep 13 02:32:55.209899 sshd[4233]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:55.210165 sshd[4233]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:55.210186 sshd[4233]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:55.210408 sshd[4233]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:55.243844 sshd[4210]: Connection closed by invalid user tom 47.251.77.219 port 52112 [preauth]
Sep 13 02:32:55.246443 systemd[1]: sshd@45-145.40.90.231:22-47.251.77.219:52112.service: Deactivated successfully.
Sep 13 02:32:55.257655 sshd[4225]: Failed password for root from 47.251.77.219 port 52160 ssh2
Sep 13 02:32:55.480758 sshd[4225]: Connection closed by authenticating user root 47.251.77.219 port 52160 [preauth]
Sep 13 02:32:55.483192 systemd[1]: sshd@48-145.40.90.231:22-47.251.77.219:52160.service: Deactivated successfully.
Sep 13 02:32:56.404330 sshd[4213]: Connection closed by invalid user ubuntu 47.251.77.219 port 52134 [preauth]
Sep 13 02:32:56.406902 systemd[1]: sshd@46-145.40.90.231:22-47.251.77.219:52134.service: Deactivated successfully.
Sep 13 02:32:56.885838 sshd[4220]: Connection closed by invalid user rancher 47.251.77.219 port 52154 [preauth]
Sep 13 02:32:56.888470 systemd[1]: sshd@47-145.40.90.231:22-47.251.77.219:52154.service: Deactivated successfully.
Sep 13 02:32:57.338946 sshd[4230]: Failed password for root from 47.251.77.219 port 52194 ssh2
Sep 13 02:32:57.374756 systemd[1]: Started sshd@51-145.40.90.231:22-47.251.77.219:52254.service.
Sep 13 02:32:57.394318 sshd[4242]: Invalid user observer from 47.251.77.219 port 52254
Sep 13 02:32:57.404194 sshd[4242]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:57.404482 sshd[4242]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:57.404508 sshd[4242]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:57.404776 sshd[4242]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:57.466601 sshd[4233]: Failed password for invalid user uftp from 47.251.77.219 port 52196 ssh2
Sep 13 02:32:57.682620 systemd[1]: Started sshd@52-145.40.90.231:22-47.251.77.219:52262.service.
Sep 13 02:32:57.713254 sshd[4245]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=docker
Sep 13 02:32:58.622527 systemd[1]: Started sshd@53-145.40.90.231:22-47.251.77.219:52302.service.
Sep 13 02:32:58.640021 sshd[4248]: Invalid user oracle from 47.251.77.219 port 52302
Sep 13 02:32:58.648953 sshd[4248]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:58.649219 sshd[4248]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:58.649242 sshd[4248]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:58.649476 sshd[4248]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:58.940215 systemd[1]: Started sshd@54-145.40.90.231:22-47.251.77.219:52304.service.
Sep 13 02:32:58.960288 sshd[4251]: Invalid user postgres from 47.251.77.219 port 52304
Sep 13 02:32:58.970228 sshd[4251]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:58.970548 sshd[4251]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:58.970572 sshd[4251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:58.970828 sshd[4251]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:58.979041 sshd[4233]: Connection closed by invalid user uftp 47.251.77.219 port 52196 [preauth]
Sep 13 02:32:58.979784 systemd[1]: sshd@50-145.40.90.231:22-47.251.77.219:52196.service: Deactivated successfully.
Sep 13 02:32:59.252549 systemd[1]: Started sshd@55-145.40.90.231:22-47.251.77.219:52314.service.
Sep 13 02:32:59.273564 sshd[4255]: Invalid user ts from 47.251.77.219 port 52314
Sep 13 02:32:59.285653 sshd[4255]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:59.285939 sshd[4255]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:32:59.285964 sshd[4255]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:32:59.286210 sshd[4255]: pam_faillock(sshd:auth): User unknown
Sep 13 02:32:59.288491 sshd[4230]: Connection closed by authenticating user root 47.251.77.219 port 52194 [preauth]
Sep 13 02:32:59.291019 systemd[1]: sshd@49-145.40.90.231:22-47.251.77.219:52194.service: Deactivated successfully.
Sep 13 02:32:59.600731 sshd[4242]: Failed password for invalid user observer from 47.251.77.219 port 52254 ssh2
Sep 13 02:32:59.908684 sshd[4245]: Failed password for docker from 47.251.77.219 port 52262 ssh2
Sep 13 02:33:00.531902 systemd[1]: Started sshd@56-145.40.90.231:22-47.251.77.219:51608.service.
Sep 13 02:33:00.552906 sshd[4259]: Invalid user gitlab from 47.251.77.219 port 51608 Sep 13 02:33:00.561114 sshd[4259]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:00.562269 sshd[4259]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:00.562408 sshd[4259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:00.563426 sshd[4259]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:00.649639 sshd[4248]: Failed password for invalid user oracle from 47.251.77.219 port 52302 ssh2 Sep 13 02:33:00.838451 systemd[1]: Started sshd@57-145.40.90.231:22-47.251.77.219:51616.service. Sep 13 02:33:00.861868 sshd[4262]: Invalid user guest from 47.251.77.219 port 51616 Sep 13 02:33:00.870387 sshd[4262]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:00.870684 sshd[4262]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:00.870709 sshd[4262]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:00.870963 sshd[4262]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:00.971537 sshd[4251]: Failed password for invalid user postgres from 47.251.77.219 port 52304 ssh2 Sep 13 02:33:01.090741 sshd[4255]: Failed password for invalid user ts from 47.251.77.219 port 52314 ssh2 Sep 13 02:33:01.159034 sshd[4251]: Connection closed by invalid user postgres 47.251.77.219 port 52304 [preauth] Sep 13 02:33:01.161780 systemd[1]: sshd@54-145.40.90.231:22-47.251.77.219:52304.service: Deactivated successfully. Sep 13 02:33:01.436755 sshd[4242]: Connection closed by invalid user observer 47.251.77.219 port 52254 [preauth] Sep 13 02:33:01.439211 systemd[1]: sshd@51-145.40.90.231:22-47.251.77.219:52254.service: Deactivated successfully. Sep 13 02:33:01.536894 sshd[4255]: Connection closed by invalid user ts 47.251.77.219 port 52314 [preauth] Sep 13 02:33:01.539490 systemd[1]: sshd@55-145.40.90.231:22-47.251.77.219:52314.service: Deactivated successfully. Sep 13 02:33:01.572802 systemd[1]: Started sshd@58-145.40.90.231:22-47.251.77.219:51624.service. Sep 13 02:33:01.573253 sshd[4245]: Connection closed by authenticating user docker 47.251.77.219 port 52262 [preauth] Sep 13 02:33:01.573885 systemd[1]: sshd@52-145.40.90.231:22-47.251.77.219:52262.service: Deactivated successfully. Sep 13 02:33:01.596393 sshd[4269]: Invalid user flask from 47.251.77.219 port 51624 Sep 13 02:33:01.606309 sshd[4269]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:01.606643 sshd[4269]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:01.606673 sshd[4269]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:01.607025 sshd[4269]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:01.891528 systemd[1]: Started sshd@59-145.40.90.231:22-47.251.77.219:51638.service. 
Sep 13 02:33:01.916524 sshd[4273]: Invalid user gpuadmin from 47.251.77.219 port 51638 Sep 13 02:33:01.927218 sshd[4273]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:01.927546 sshd[4273]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:01.927570 sshd[4273]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:01.927839 sshd[4273]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:02.172632 sshd[4259]: Failed password for invalid user gitlab from 47.251.77.219 port 51608 ssh2 Sep 13 02:33:02.283640 sshd[4248]: Connection closed by invalid user oracle 47.251.77.219 port 52302 [preauth] Sep 13 02:33:02.286283 systemd[1]: sshd@53-145.40.90.231:22-47.251.77.219:52302.service: Deactivated successfully. Sep 13 02:33:02.479890 sshd[4262]: Failed password for invalid user guest from 47.251.77.219 port 51616 ssh2 Sep 13 02:33:02.535478 systemd[1]: Started sshd@60-145.40.90.231:22-47.251.77.219:51658.service. Sep 13 02:33:02.568400 sshd[4277]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root Sep 13 02:33:02.771107 sshd[4259]: Connection closed by invalid user gitlab 47.251.77.219 port 51608 [preauth] Sep 13 02:33:02.773749 systemd[1]: sshd@56-145.40.90.231:22-47.251.77.219:51608.service: Deactivated successfully. Sep 13 02:33:03.020479 sshd[4269]: Failed password for invalid user flask from 47.251.77.219 port 51624 ssh2 Sep 13 02:33:03.164622 systemd[1]: Started sshd@61-145.40.90.231:22-47.251.77.219:51676.service. Sep 13 02:33:03.185611 sshd[4281]: Invalid user gitlab from 47.251.77.219 port 51676 Sep 13 02:33:03.196830 sshd[4281]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:03.197182 sshd[4281]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:03.197213 sshd[4281]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:03.197619 sshd[4281]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:03.341170 sshd[4273]: Failed password for invalid user gpuadmin from 47.251.77.219 port 51638 ssh2 Sep 13 02:33:03.795112 sshd[4269]: Connection closed by invalid user flask 47.251.77.219 port 51624 [preauth] Sep 13 02:33:03.795767 systemd[1]: sshd@58-145.40.90.231:22-47.251.77.219:51624.service: Deactivated successfully. Sep 13 02:33:03.860586 sshd[4273]: Connection closed by invalid user gpuadmin 47.251.77.219 port 51638 [preauth] Sep 13 02:33:03.863151 systemd[1]: sshd@59-145.40.90.231:22-47.251.77.219:51638.service: Deactivated successfully. Sep 13 02:33:03.882307 sshd[4262]: Connection closed by invalid user guest 47.251.77.219 port 51616 [preauth] Sep 13 02:33:03.884648 systemd[1]: sshd@57-145.40.90.231:22-47.251.77.219:51616.service: Deactivated successfully. Sep 13 02:33:04.111604 systemd[1]: Started sshd@62-145.40.90.231:22-47.251.77.219:51706.service. Sep 13 02:33:04.129005 sshd[4287]: Invalid user jenkins from 47.251.77.219 port 51706 Sep 13 02:33:04.136888 sshd[4287]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:04.137169 sshd[4287]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:04.137192 sshd[4287]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:04.137478 sshd[4287]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:04.728331 systemd[1]: Started sshd@63-145.40.90.231:22-47.251.77.219:51730.service. 
Sep 13 02:33:04.747404 sshd[4290]: Invalid user admin from 47.251.77.219 port 51730 Sep 13 02:33:04.758708 sshd[4290]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:04.758987 sshd[4290]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:04.759010 sshd[4290]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:04.759255 sshd[4290]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:04.784555 sshd[4277]: Failed password for root from 47.251.77.219 port 51658 ssh2 Sep 13 02:33:05.646138 systemd[1]: Started sshd@64-145.40.90.231:22-47.251.77.219:51772.service. Sep 13 02:33:05.679758 sshd[4293]: Invalid user steam from 47.251.77.219 port 51772 Sep 13 02:33:05.686507 sshd[4293]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:05.687606 sshd[4293]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:05.687697 sshd[4293]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:05.688658 sshd[4293]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:05.885703 sshd[4281]: Failed password for invalid user gitlab from 47.251.77.219 port 51676 ssh2 Sep 13 02:33:05.962294 sshd[4287]: Failed password for invalid user jenkins from 47.251.77.219 port 51706 ssh2 Sep 13 02:33:05.967282 systemd[1]: Started sshd@65-145.40.90.231:22-47.251.77.219:51780.service. Sep 13 02:33:05.988265 sshd[4296]: Invalid user test from 47.251.77.219 port 51780 Sep 13 02:33:05.996205 sshd[4296]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:05.996484 sshd[4296]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:05.996509 sshd[4296]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:05.996802 sshd[4296]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:06.282538 systemd[1]: Started sshd@66-145.40.90.231:22-47.251.77.219:51794.service. Sep 13 02:33:06.305659 sshd[4299]: Invalid user test from 47.251.77.219 port 51794 Sep 13 02:33:06.313474 sshd[4299]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:06.313790 sshd[4299]: pam_unix(sshd:auth): check pass; user unknown Sep 13 02:33:06.313812 sshd[4299]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 Sep 13 02:33:06.314051 sshd[4299]: pam_faillock(sshd:auth): User unknown Sep 13 02:33:06.584148 sshd[4290]: Failed password for invalid user admin from 47.251.77.219 port 51730 ssh2 Sep 13 02:33:06.970619 sshd[4277]: Connection closed by authenticating user root 47.251.77.219 port 51658 [preauth] Sep 13 02:33:06.973134 systemd[1]: sshd@60-145.40.90.231:22-47.251.77.219:51658.service: Deactivated successfully. Sep 13 02:33:07.110659 sshd[4290]: Connection closed by invalid user admin 47.251.77.219 port 51730 [preauth] Sep 13 02:33:07.113255 systemd[1]: sshd@63-145.40.90.231:22-47.251.77.219:51730.service: Deactivated successfully. Sep 13 02:33:07.338086 sshd[4287]: Connection closed by invalid user jenkins 47.251.77.219 port 51706 [preauth] Sep 13 02:33:07.340699 systemd[1]: sshd@62-145.40.90.231:22-47.251.77.219:51706.service: Deactivated successfully. Sep 13 02:33:07.622396 sshd[4281]: Connection closed by invalid user gitlab 47.251.77.219 port 51676 [preauth] Sep 13 02:33:07.624963 systemd[1]: sshd@61-145.40.90.231:22-47.251.77.219:51676.service: Deactivated successfully. 
Sep 13 02:33:07.648597 sshd[4293]: Failed password for invalid user steam from 47.251.77.219 port 51772 ssh2
Sep 13 02:33:07.666840 sshd[4293]: Connection closed by invalid user steam 47.251.77.219 port 51772 [preauth]
Sep 13 02:33:07.669170 systemd[1]: sshd@64-145.40.90.231:22-47.251.77.219:51772.service: Deactivated successfully.
Sep 13 02:33:07.873699 systemd[1]: Started sshd@67-145.40.90.231:22-47.251.77.219:51848.service.
Sep 13 02:33:07.903024 sshd[4310]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:07.957257 sshd[4296]: Failed password for invalid user test from 47.251.77.219 port 51780 ssh2
Sep 13 02:33:08.182743 systemd[1]: Started sshd@68-145.40.90.231:22-47.251.77.219:51858.service.
Sep 13 02:33:08.221252 sshd[4313]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:08.496619 systemd[1]: Started sshd@69-145.40.90.231:22-47.251.77.219:51872.service.
Sep 13 02:33:08.516832 sshd[4316]: Invalid user zabbix from 47.251.77.219 port 51872
Sep 13 02:33:08.526269 sshd[4316]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:08.526544 sshd[4316]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:08.526566 sshd[4316]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:08.526793 sshd[4316]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:08.746027 sshd[4299]: Failed password for invalid user test from 47.251.77.219 port 51794 ssh2
Sep 13 02:33:08.806122 systemd[1]: Started sshd@70-145.40.90.231:22-47.251.77.219:51876.service.
Sep 13 02:33:08.829694 sshd[4319]: Invalid user kubernetes from 47.251.77.219 port 51876
Sep 13 02:33:08.837220 sshd[4319]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:08.837447 sshd[4319]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:08.837467 sshd[4319]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:08.837686 sshd[4319]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:08.881224 sshd[4296]: Connection closed by invalid user test 47.251.77.219 port 51780 [preauth]
Sep 13 02:33:08.883890 systemd[1]: sshd@65-145.40.90.231:22-47.251.77.219:51780.service: Deactivated successfully.
Sep 13 02:33:09.129435 systemd[1]: Started sshd@71-145.40.90.231:22-47.251.77.219:51890.service.
Sep 13 02:33:09.153134 sshd[4323]: Invalid user observer from 47.251.77.219 port 51890
Sep 13 02:33:09.163392 sshd[4323]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:09.163635 sshd[4323]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:09.163658 sshd[4323]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:09.163938 sshd[4323]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:09.198045 sshd[4299]: Connection closed by invalid user test 47.251.77.219 port 51794 [preauth]
Sep 13 02:33:09.199734 systemd[1]: sshd@66-145.40.90.231:22-47.251.77.219:51794.service: Deactivated successfully.
Sep 13 02:33:09.441550 systemd[1]: Started sshd@72-145.40.90.231:22-47.251.77.219:51902.service.
Sep 13 02:33:09.465844 sshd[4327]: Invalid user hadoop from 47.251.77.219 port 51902
Sep 13 02:33:09.475377 sshd[4327]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:09.475659 sshd[4327]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:09.475682 sshd[4327]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:09.475937 sshd[4327]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:10.139270 sshd[4310]: Failed password for root from 47.251.77.219 port 51848 ssh2
Sep 13 02:33:10.395815 systemd[1]: Started sshd@73-145.40.90.231:22-47.251.77.219:41468.service.
Sep 13 02:33:10.416070 sshd[4330]: Invalid user ranger from 47.251.77.219 port 41468
Sep 13 02:33:10.426888 sshd[4330]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:10.427197 sshd[4330]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:10.427224 sshd[4330]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:10.427555 sshd[4330]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:10.592565 sshd[4313]: Failed password for root from 47.251.77.219 port 51858 ssh2
Sep 13 02:33:10.705267 systemd[1]: Started sshd@74-145.40.90.231:22-47.251.77.219:41470.service.
Sep 13 02:33:10.749417 sshd[4333]: Invalid user oracle from 47.251.77.219 port 41470
Sep 13 02:33:10.758504 sshd[4333]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:10.758720 sshd[4333]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:10.758737 sshd[4333]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:10.758943 sshd[4333]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:10.898780 sshd[4316]: Failed password for invalid user zabbix from 47.251.77.219 port 51872 ssh2
Sep 13 02:33:11.209770 sshd[4319]: Failed password for invalid user kubernetes from 47.251.77.219 port 51876 ssh2
Sep 13 02:33:11.340402 sshd[4323]: Failed password for invalid user observer from 47.251.77.219 port 51890 ssh2
Sep 13 02:33:11.641656 systemd[1]: Started sshd@75-145.40.90.231:22-47.251.77.219:41496.service.
Sep 13 02:33:11.651644 sshd[4327]: Failed password for invalid user hadoop from 47.251.77.219 port 51902 ssh2
Sep 13 02:33:11.678444 sshd[4336]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:11.802107 sshd[4319]: Connection closed by invalid user kubernetes 47.251.77.219 port 51876 [preauth]
Sep 13 02:33:11.802875 systemd[1]: sshd@70-145.40.90.231:22-47.251.77.219:51876.service: Deactivated successfully.
Sep 13 02:33:12.075783 sshd[4330]: Failed password for invalid user ranger from 47.251.77.219 port 41468 ssh2
Sep 13 02:33:12.296351 systemd[1]: Started sshd@76-145.40.90.231:22-47.251.77.219:41512.service.
Sep 13 02:33:12.305961 sshd[4310]: Connection closed by authenticating user root 47.251.77.219 port 51848 [preauth]
Sep 13 02:33:12.306503 systemd[1]: sshd@67-145.40.90.231:22-47.251.77.219:51848.service: Deactivated successfully.
Sep 13 02:33:12.319375 sshd[4340]: Invalid user default from 47.251.77.219 port 41512
Sep 13 02:33:12.327919 sshd[4340]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:12.328215 sshd[4340]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:12.328241 sshd[4340]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:12.328545 sshd[4340]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:12.346946 sshd[4316]: Connection closed by invalid user zabbix 47.251.77.219 port 51872 [preauth]
Sep 13 02:33:12.349504 systemd[1]: sshd@69-145.40.90.231:22-47.251.77.219:51872.service: Deactivated successfully.
Sep 13 02:33:12.406661 sshd[4333]: Failed password for invalid user oracle from 47.251.77.219 port 41470 ssh2
Sep 13 02:33:12.528322 sshd[4330]: Connection closed by invalid user ranger 47.251.77.219 port 41468 [preauth]
Sep 13 02:33:12.530912 systemd[1]: sshd@73-145.40.90.231:22-47.251.77.219:41468.service: Deactivated successfully.
Sep 13 02:33:12.577147 sshd[4333]: Connection closed by invalid user oracle 47.251.77.219 port 41470 [preauth]
Sep 13 02:33:12.579834 systemd[1]: sshd@74-145.40.90.231:22-47.251.77.219:41470.service: Deactivated successfully.
Sep 13 02:33:12.624322 sshd[4313]: Connection closed by authenticating user root 47.251.77.219 port 51858 [preauth]
Sep 13 02:33:12.631167 systemd[1]: sshd@68-145.40.90.231:22-47.251.77.219:51858.service: Deactivated successfully.
Sep 13 02:33:12.636903 systemd[1]: Started sshd@77-145.40.90.231:22-47.251.77.219:41520.service.
Sep 13 02:33:12.664114 sshd[4348]: Invalid user tomcat from 47.251.77.219 port 41520
Sep 13 02:33:12.676291 sshd[4348]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:12.676600 sshd[4348]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:12.676624 sshd[4348]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:12.676901 sshd[4348]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:13.195799 sshd[4323]: Connection closed by invalid user observer 47.251.77.219 port 51890 [preauth]
Sep 13 02:33:13.198455 systemd[1]: sshd@71-145.40.90.231:22-47.251.77.219:51890.service: Deactivated successfully.
Sep 13 02:33:13.276755 systemd[1]: Started sshd@78-145.40.90.231:22-47.251.77.219:41538.service.
Sep 13 02:33:13.312283 sshd[4352]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:13.349135 sshd[4327]: Connection closed by invalid user hadoop 47.251.77.219 port 51902 [preauth]
Sep 13 02:33:13.351154 systemd[1]: sshd@72-145.40.90.231:22-47.251.77.219:51902.service: Deactivated successfully.
Sep 13 02:33:13.463503 sshd[4336]: Failed password for root from 47.251.77.219 port 41496 ssh2
Sep 13 02:33:13.882025 sshd[4336]: Connection closed by authenticating user root 47.251.77.219 port 41496 [preauth]
Sep 13 02:33:13.884613 systemd[1]: sshd@75-145.40.90.231:22-47.251.77.219:41496.service: Deactivated successfully.
Sep 13 02:33:13.902346 systemd[1]: Started sshd@79-145.40.90.231:22-47.251.77.219:41552.service.
Sep 13 02:33:13.916570 sshd[4340]: Failed password for invalid user default from 47.251.77.219 port 41512 ssh2
Sep 13 02:33:13.920530 sshd[4357]: Invalid user tools from 47.251.77.219 port 41552
Sep 13 02:33:13.929382 sshd[4357]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:13.929684 sshd[4357]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:13.929711 sshd[4357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:13.929987 sshd[4357]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:13.958414 sshd[4340]: Connection closed by invalid user default 47.251.77.219 port 41512 [preauth]
Sep 13 02:33:13.960337 systemd[1]: sshd@76-145.40.90.231:22-47.251.77.219:41512.service: Deactivated successfully.
Sep 13 02:33:14.265778 sshd[4348]: Failed password for invalid user tomcat from 47.251.77.219 port 41520 ssh2
Sep 13 02:33:14.322933 sshd[4348]: Connection closed by invalid user tomcat 47.251.77.219 port 41520 [preauth]
Sep 13 02:33:14.325553 systemd[1]: sshd@77-145.40.90.231:22-47.251.77.219:41520.service: Deactivated successfully.
Sep 13 02:33:14.705796 sshd[4352]: Failed password for root from 47.251.77.219 port 41538 ssh2
Sep 13 02:33:14.863324 systemd[1]: Started sshd@80-145.40.90.231:22-47.251.77.219:41580.service.
Sep 13 02:33:14.893398 sshd[4362]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:15.186824 systemd[1]: Started sshd@81-145.40.90.231:22-47.251.77.219:41588.service.
Sep 13 02:33:15.227366 sshd[4365]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:15.322796 sshd[4357]: Failed password for invalid user tools from 47.251.77.219 port 41552 ssh2
Sep 13 02:33:15.515664 sshd[4352]: Connection closed by authenticating user root 47.251.77.219 port 41538 [preauth]
Sep 13 02:33:15.518274 systemd[1]: sshd@78-145.40.90.231:22-47.251.77.219:41538.service: Deactivated successfully.
Sep 13 02:33:15.783010 sshd[4357]: Connection closed by invalid user tools 47.251.77.219 port 41552 [preauth]
Sep 13 02:33:15.784977 systemd[1]: sshd@79-145.40.90.231:22-47.251.77.219:41552.service: Deactivated successfully.
Sep 13 02:33:15.806058 systemd[1]: Started sshd@82-145.40.90.231:22-47.251.77.219:41600.service.
Sep 13 02:33:15.838128 sshd[4370]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:16.127178 systemd[1]: Started sshd@83-145.40.90.231:22-47.251.77.219:41604.service.
Sep 13 02:33:16.145431 sshd[4374]: Invalid user oracle from 47.251.77.219 port 41604
Sep 13 02:33:16.154185 sshd[4374]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:16.154502 sshd[4374]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:16.154526 sshd[4374]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:16.154796 sshd[4374]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:17.075248 systemd[1]: Started sshd@84-145.40.90.231:22-47.251.77.219:41626.service.
Sep 13 02:33:17.093111 sshd[4377]: Invalid user gitlab-runner from 47.251.77.219 port 41626
Sep 13 02:33:17.102353 sshd[4377]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:17.102651 sshd[4377]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:17.102675 sshd[4377]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:17.102941 sshd[4377]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:17.402304 systemd[1]: Started sshd@85-145.40.90.231:22-47.251.77.219:41642.service.
Sep 13 02:33:17.424419 sshd[4362]: Failed password for root from 47.251.77.219 port 41580 ssh2
Sep 13 02:33:17.428163 sshd[4380]: Invalid user es from 47.251.77.219 port 41642
Sep 13 02:33:17.438924 sshd[4380]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:17.439216 sshd[4380]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:17.439240 sshd[4380]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:17.439555 sshd[4380]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:17.895253 sshd[4365]: Failed password for root from 47.251.77.219 port 41588 ssh2
Sep 13 02:33:17.959589 sshd[4374]: Failed password for invalid user oracle from 47.251.77.219 port 41604 ssh2
Sep 13 02:33:17.972448 sshd[4374]: Connection closed by invalid user oracle 47.251.77.219 port 41604 [preauth]
Sep 13 02:33:17.975076 systemd[1]: sshd@83-145.40.90.231:22-47.251.77.219:41604.service: Deactivated successfully.
Sep 13 02:33:18.036981 systemd[1]: Started sshd@86-145.40.90.231:22-47.251.77.219:41664.service.
Sep 13 02:33:18.059589 sshd[4386]: Invalid user ubnt from 47.251.77.219 port 41664
Sep 13 02:33:18.069025 sshd[4386]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:18.069318 sshd[4386]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:18.069343 sshd[4386]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:18.069645 sshd[4386]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:18.506125 sshd[4370]: Failed password for root from 47.251.77.219 port 41600 ssh2
Sep 13 02:33:18.657177 systemd[1]: Started sshd@87-145.40.90.231:22-47.251.77.219:41674.service.
Sep 13 02:33:18.687033 sshd[4389]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:18.991419 systemd[1]: Started sshd@88-145.40.90.231:22-47.251.77.219:41676.service.
Sep 13 02:33:19.051098 sshd[4392]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:19.296515 sshd[4362]: Connection closed by authenticating user root 47.251.77.219 port 41580 [preauth]
Sep 13 02:33:19.299086 systemd[1]: sshd@80-145.40.90.231:22-47.251.77.219:41580.service: Deactivated successfully.
Sep 13 02:33:19.316660 systemd[1]: Started sshd@89-145.40.90.231:22-47.251.77.219:41690.service.
Sep 13 02:33:19.374883 sshd[4396]: Invalid user developer from 47.251.77.219 port 41690
Sep 13 02:33:19.378590 sshd[4377]: Failed password for invalid user gitlab-runner from 47.251.77.219 port 41626 ssh2
Sep 13 02:33:19.383706 sshd[4396]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:19.384737 sshd[4396]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:19.384817 sshd[4396]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:19.385703 sshd[4396]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:19.630469 sshd[4365]: Connection closed by authenticating user root 47.251.77.219 port 41588 [preauth]
Sep 13 02:33:19.631186 systemd[1]: sshd@81-145.40.90.231:22-47.251.77.219:41588.service: Deactivated successfully.
Sep 13 02:33:19.716021 sshd[4380]: Failed password for invalid user es from 47.251.77.219 port 41642 ssh2
Sep 13 02:33:20.240868 sshd[4370]: Connection closed by authenticating user root 47.251.77.219 port 41600 [preauth]
Sep 13 02:33:20.243488 systemd[1]: sshd@82-145.40.90.231:22-47.251.77.219:41600.service: Deactivated successfully.
Sep 13 02:33:20.481782 sshd[4386]: Failed password for invalid user ubnt from 47.251.77.219 port 41664 ssh2
Sep 13 02:33:20.577929 systemd[1]: Started sshd@90-145.40.90.231:22-47.251.77.219:47040.service.
Sep 13 02:33:20.597780 sshd[4402]: Invalid user mongodb from 47.251.77.219 port 47040
Sep 13 02:33:20.606667 sshd[4402]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:20.606969 sshd[4402]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:20.606995 sshd[4402]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:20.607254 sshd[4402]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:20.740758 sshd[4380]: Connection closed by invalid user es 47.251.77.219 port 41642 [preauth]
Sep 13 02:33:20.743410 systemd[1]: sshd@85-145.40.90.231:22-47.251.77.219:41642.service: Deactivated successfully.
Sep 13 02:33:20.964738 sshd[4377]: Connection closed by invalid user gitlab-runner 47.251.77.219 port 41626 [preauth]
Sep 13 02:33:20.967586 systemd[1]: sshd@84-145.40.90.231:22-47.251.77.219:41626.service: Deactivated successfully.
Sep 13 02:33:21.098579 sshd[4389]: Failed password for root from 47.251.77.219 port 41674 ssh2
Sep 13 02:33:21.267741 sshd[4392]: Failed password for root from 47.251.77.219 port 41676 ssh2
Sep 13 02:33:21.529514 sshd[4386]: Connection closed by invalid user ubnt 47.251.77.219 port 41664 [preauth]
Sep 13 02:33:21.532155 systemd[1]: sshd@86-145.40.90.231:22-47.251.77.219:41664.service: Deactivated successfully.
Sep 13 02:33:21.602403 sshd[4396]: Failed password for invalid user developer from 47.251.77.219 port 41690 ssh2
Sep 13 02:33:21.811378 systemd[1]: Started sshd@91-145.40.90.231:22-47.251.77.219:47064.service.
Sep 13 02:33:21.832370 sshd[4408]: Invalid user sonar from 47.251.77.219 port 47064
Sep 13 02:33:21.842161 sshd[4408]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:21.842989 sshd[4408]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:21.843052 sshd[4408]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:21.843696 sshd[4408]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:22.137049 systemd[1]: Started sshd@92-145.40.90.231:22-47.251.77.219:47068.service.
Sep 13 02:33:22.162582 sshd[4411]: Invalid user elasticsearch from 47.251.77.219 port 47068
Sep 13 02:33:22.171913 sshd[4411]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:22.172218 sshd[4411]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:22.172244 sshd[4411]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:22.172508 sshd[4411]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:22.442415 systemd[1]: Started sshd@93-145.40.90.231:22-47.251.77.219:47080.service.
Sep 13 02:33:22.471872 sshd[4414]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=docker
Sep 13 02:33:22.627894 sshd[4402]: Failed password for invalid user mongodb from 47.251.77.219 port 47040 ssh2
Sep 13 02:33:22.759232 systemd[1]: Started sshd@94-145.40.90.231:22-47.251.77.219:47092.service.
Sep 13 02:33:22.797031 sshd[4417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:23.089810 sshd[4389]: Connection closed by authenticating user root 47.251.77.219 port 41674 [preauth]
Sep 13 02:33:23.092514 systemd[1]: sshd@87-145.40.90.231:22-47.251.77.219:41674.service: Deactivated successfully.
Sep 13 02:33:23.453887 sshd[4392]: Connection closed by authenticating user root 47.251.77.219 port 41676 [preauth]
Sep 13 02:33:23.456378 systemd[1]: sshd@88-145.40.90.231:22-47.251.77.219:41676.service: Deactivated successfully.
Sep 13 02:33:23.605402 sshd[4396]: Connection closed by invalid user developer 47.251.77.219 port 41690 [preauth]
Sep 13 02:33:23.608113 systemd[1]: sshd@89-145.40.90.231:22-47.251.77.219:41690.service: Deactivated successfully.
Sep 13 02:33:23.667744 sshd[4408]: Failed password for invalid user sonar from 47.251.77.219 port 47064 ssh2
Sep 13 02:33:23.800576 sshd[4411]: Failed password for invalid user elasticsearch from 47.251.77.219 port 47068 ssh2
Sep 13 02:33:23.841404 sshd[4411]: Connection closed by invalid user elasticsearch 47.251.77.219 port 47068 [preauth]
Sep 13 02:33:23.842735 systemd[1]: sshd@92-145.40.90.231:22-47.251.77.219:47068.service: Deactivated successfully.
Sep 13 02:33:24.036521 systemd[1]: Started sshd@95-145.40.90.231:22-47.251.77.219:47124.service.
Sep 13 02:33:24.057558 sshd[4424]: Invalid user tomcat from 47.251.77.219 port 47124
Sep 13 02:33:24.066259 sshd[4424]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:24.066570 sshd[4424]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:24.066597 sshd[4424]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:24.066943 sshd[4424]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:24.100900 sshd[4414]: Failed password for docker from 47.251.77.219 port 47080 ssh2
Sep 13 02:33:24.119352 sshd[4408]: Connection closed by invalid user sonar 47.251.77.219 port 47064 [preauth]
Sep 13 02:33:24.121181 sshd[4402]: Connection closed by invalid user mongodb 47.251.77.219 port 47040 [preauth]
Sep 13 02:33:24.122170 systemd[1]: sshd@91-145.40.90.231:22-47.251.77.219:47064.service: Deactivated successfully.
Sep 13 02:33:24.124775 systemd[1]: sshd@90-145.40.90.231:22-47.251.77.219:47040.service: Deactivated successfully.
Sep 13 02:33:24.356127 systemd[1]: Started sshd@96-145.40.90.231:22-47.251.77.219:47136.service.
Sep 13 02:33:24.378626 sshd[4429]: Invalid user elsearch from 47.251.77.219 port 47136
Sep 13 02:33:24.388669 sshd[4429]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:24.388940 sshd[4429]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:24.388962 sshd[4429]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:24.389194 sshd[4429]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:24.404117 sshd[4414]: Connection closed by authenticating user docker 47.251.77.219 port 47080 [preauth]
Sep 13 02:33:24.404869 systemd[1]: sshd@93-145.40.90.231:22-47.251.77.219:47080.service: Deactivated successfully.
Sep 13 02:33:24.426246 sshd[4417]: Failed password for root from 47.251.77.219 port 47092 ssh2
Sep 13 02:33:24.686025 systemd[1]: Started sshd@97-145.40.90.231:22-47.251.77.219:47142.service.
Sep 13 02:33:24.735516 sshd[4435]: Invalid user git from 47.251.77.219 port 47142
Sep 13 02:33:24.742856 sshd[4435]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:24.743060 sshd[4435]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:24.743077 sshd[4435]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:24.743255 sshd[4435]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:25.000381 sshd[4417]: Connection closed by authenticating user root 47.251.77.219 port 47092 [preauth]
Sep 13 02:33:25.003121 systemd[1]: sshd@94-145.40.90.231:22-47.251.77.219:47092.service: Deactivated successfully.
Sep 13 02:33:25.635920 sshd[4424]: Failed password for invalid user tomcat from 47.251.77.219 port 47124 ssh2
Sep 13 02:33:25.646651 systemd[1]: Started sshd@98-145.40.90.231:22-47.251.77.219:47164.service.
Sep 13 02:33:25.667471 sshd[4439]: Invalid user ftpuser from 47.251.77.219 port 47164
Sep 13 02:33:25.675554 sshd[4439]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:25.676729 sshd[4439]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:25.676820 sshd[4439]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:25.677779 sshd[4439]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:25.713394 sshd[4424]: Connection closed by invalid user tomcat 47.251.77.219 port 47124 [preauth]
Sep 13 02:33:25.715917 systemd[1]: sshd@95-145.40.90.231:22-47.251.77.219:47124.service: Deactivated successfully.
Sep 13 02:33:25.957738 sshd[4429]: Failed password for invalid user elsearch from 47.251.77.219 port 47136 ssh2
Sep 13 02:33:26.312418 sshd[4435]: Failed password for invalid user git from 47.251.77.219 port 47142 ssh2
Sep 13 02:33:26.491465 sshd[4429]: Connection closed by invalid user elsearch 47.251.77.219 port 47136 [preauth]
Sep 13 02:33:26.494133 systemd[1]: sshd@96-145.40.90.231:22-47.251.77.219:47136.service: Deactivated successfully.
Sep 13 02:33:27.050606 sshd[4439]: Failed password for invalid user ftpuser from 47.251.77.219 port 47164 ssh2
Sep 13 02:33:27.338399 sshd[4435]: Connection closed by invalid user git 47.251.77.219 port 47142 [preauth]
Sep 13 02:33:27.340986 systemd[1]: sshd@97-145.40.90.231:22-47.251.77.219:47142.service: Deactivated successfully.
Sep 13 02:33:27.872257 sshd[4439]: Connection closed by invalid user ftpuser 47.251.77.219 port 47164 [preauth]
Sep 13 02:33:27.873997 systemd[1]: sshd@98-145.40.90.231:22-47.251.77.219:47164.service: Deactivated successfully.
Sep 13 02:33:28.197633 systemd[1]: Started sshd@99-145.40.90.231:22-47.251.77.219:47232.service.
Sep 13 02:33:28.228780 sshd[4446]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:29.134849 systemd[1]: Started sshd@100-145.40.90.231:22-47.251.77.219:47268.service.
Sep 13 02:33:29.155518 sshd[4449]: Invalid user deploy from 47.251.77.219 port 47268
Sep 13 02:33:29.165605 sshd[4449]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:29.165907 sshd[4449]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:29.165933 sshd[4449]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:29.166182 sshd[4449]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:29.472124 systemd[1]: Started sshd@101-145.40.90.231:22-47.251.77.219:47278.service.
Sep 13 02:33:29.501293 sshd[4452]: Invalid user dev from 47.251.77.219 port 47278
Sep 13 02:33:29.510399 sshd[4452]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:29.510711 sshd[4452]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:29.510739 sshd[4452]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:29.511038 sshd[4452]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:29.783775 systemd[1]: Started sshd@102-145.40.90.231:22-47.251.77.219:56516.service.
Sep 13 02:33:29.805320 sshd[4455]: Invalid user oscar from 47.251.77.219 port 56516
Sep 13 02:33:29.816535 sshd[4455]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:29.817783 sshd[4455]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:29.817881 sshd[4455]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:29.818843 sshd[4455]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:30.013833 sshd[4446]: Failed password for root from 47.251.77.219 port 47232 ssh2
Sep 13 02:33:30.102437 systemd[1]: Started sshd@103-145.40.90.231:22-47.251.77.219:56530.service.
Sep 13 02:33:30.133968 sshd[4458]: Invalid user dolphinscheduler from 47.251.77.219 port 56530
Sep 13 02:33:30.144898 sshd[4458]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:30.146068 sshd[4458]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:30.146160 sshd[4458]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:30.147135 sshd[4458]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:30.431163 sshd[4446]: Connection closed by authenticating user root 47.251.77.219 port 47232 [preauth]
Sep 13 02:33:30.433800 systemd[1]: sshd@99-145.40.90.231:22-47.251.77.219:47232.service: Deactivated successfully.
Sep 13 02:33:30.722101 systemd[1]: Started sshd@104-145.40.90.231:22-47.251.77.219:56544.service.
Sep 13 02:33:30.741569 sshd[4462]: Invalid user dev from 47.251.77.219 port 56544
Sep 13 02:33:30.750885 sshd[4462]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:30.751178 sshd[4462]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:30.751203 sshd[4462]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:30.751508 sshd[4462]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:31.349069 systemd[1]: Started sshd@105-145.40.90.231:22-47.251.77.219:56568.service.
Sep 13 02:33:31.368220 sshd[4465]: Invalid user lighthouse from 47.251.77.219 port 56568
Sep 13 02:33:31.377303 sshd[4465]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:31.377602 sshd[4465]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:31.377629 sshd[4465]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:31.377952 sshd[4465]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:31.421572 sshd[4449]: Failed password for invalid user deploy from 47.251.77.219 port 47268 ssh2
Sep 13 02:33:31.685234 systemd[1]: Started sshd@106-145.40.90.231:22-47.251.77.219:56576.service.
Sep 13 02:33:31.717038 sshd[4468]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:31.767646 sshd[4452]: Failed password for invalid user dev from 47.251.77.219 port 47278 ssh2
Sep 13 02:33:32.075570 sshd[4455]: Failed password for invalid user oscar from 47.251.77.219 port 56516 ssh2
Sep 13 02:33:32.539580 sshd[4458]: Failed password for invalid user dolphinscheduler from 47.251.77.219 port 56530 ssh2
Sep 13 02:33:32.636032 systemd[1]: Started sshd@107-145.40.90.231:22-47.251.77.219:56596.service.
Sep 13 02:33:32.670746 sshd[4471]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:32.756820 sshd[4458]: Connection closed by invalid user dolphinscheduler 47.251.77.219 port 56530 [preauth]
Sep 13 02:33:32.759466 systemd[1]: sshd@103-145.40.90.231:22-47.251.77.219:56530.service: Deactivated successfully.
Sep 13 02:33:32.946867 systemd[1]: Started sshd@108-145.40.90.231:22-47.251.77.219:56604.service.
Sep 13 02:33:32.972264 sshd[4475]: Invalid user user from 47.251.77.219 port 56604
Sep 13 02:33:32.981086 sshd[4475]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:32.981460 sshd[4475]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:32.981485 sshd[4475]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:32.981781 sshd[4475]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:33.080607 sshd[4449]: Connection closed by invalid user deploy 47.251.77.219 port 47268 [preauth]
Sep 13 02:33:33.083215 systemd[1]: sshd@100-145.40.90.231:22-47.251.77.219:47268.service: Deactivated successfully.
Sep 13 02:33:33.143620 sshd[4462]: Failed password for invalid user dev from 47.251.77.219 port 56544 ssh2
Sep 13 02:33:33.261232 systemd[1]: Started sshd@109-145.40.90.231:22-47.251.77.219:56614.service.
Sep 13 02:33:33.307998 sshd[4479]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:33.327538 sshd[4452]: Connection closed by invalid user dev 47.251.77.219 port 47278 [preauth]
Sep 13 02:33:33.330193 systemd[1]: sshd@101-145.40.90.231:22-47.251.77.219:47278.service: Deactivated successfully.
Sep 13 02:33:33.569528 systemd[1]: Started sshd@110-145.40.90.231:22-47.251.77.219:56626.service.
Sep 13 02:33:33.573564 sshd[4465]: Failed password for invalid user lighthouse from 47.251.77.219 port 56568 ssh2
Sep 13 02:33:33.590617 sshd[4483]: Invalid user svnuser from 47.251.77.219 port 56626
Sep 13 02:33:33.600030 sshd[4483]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:33.600285 sshd[4483]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:33.600306 sshd[4483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:33.600556 sshd[4483]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:33.882844 systemd[1]: Started sshd@111-145.40.90.231:22-47.251.77.219:56642.service.
Sep 13 02:33:33.902139 sshd[4486]: Invalid user ftpuser from 47.251.77.219 port 56642
Sep 13 02:33:33.912547 sshd[4468]: Failed password for root from 47.251.77.219 port 56576 ssh2
Sep 13 02:33:33.913648 sshd[4486]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:33.914734 sshd[4486]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:33.914829 sshd[4486]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:33.915860 sshd[4486]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:33.919412 sshd[4468]: Connection closed by authenticating user root 47.251.77.219 port 56576 [preauth]
Sep 13 02:33:33.922004 systemd[1]: sshd@106-145.40.90.231:22-47.251.77.219:56576.service: Deactivated successfully.
Sep 13 02:33:34.055481 sshd[4455]: Connection closed by invalid user oscar 47.251.77.219 port 56516 [preauth]
Sep 13 02:33:34.058090 systemd[1]: sshd@102-145.40.90.231:22-47.251.77.219:56516.service: Deactivated successfully.
Sep 13 02:33:34.196583 systemd[1]: Started sshd@112-145.40.90.231:22-47.251.77.219:56644.service.
Sep 13 02:33:34.220161 sshd[4491]: Invalid user ubuntu from 47.251.77.219 port 56644
Sep 13 02:33:34.230732 sshd[4491]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:34.231038 sshd[4491]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:34.231063 sshd[4491]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:34.231301 sshd[4491]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:34.340346 sshd[4471]: Failed password for root from 47.251.77.219 port 56596 ssh2
Sep 13 02:33:34.572091 sshd[4462]: Connection closed by invalid user dev 47.251.77.219 port 56544 [preauth]
Sep 13 02:33:34.574733 systemd[1]: sshd@104-145.40.90.231:22-47.251.77.219:56544.service: Deactivated successfully.
Sep 13 02:33:34.650554 sshd[4475]: Failed password for invalid user user from 47.251.77.219 port 56604 ssh2
Sep 13 02:33:34.661064 sshd[4475]: Connection closed by invalid user user 47.251.77.219 port 56604 [preauth]
Sep 13 02:33:34.661775 systemd[1]: sshd@108-145.40.90.231:22-47.251.77.219:56604.service: Deactivated successfully.
Sep 13 02:33:34.821153 systemd[1]: Started sshd@113-145.40.90.231:22-47.251.77.219:56660.service.
Sep 13 02:33:34.845434 sshd[4496]: Invalid user esadmin from 47.251.77.219 port 56660
Sep 13 02:33:34.858158 sshd[4496]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:34.859264 sshd[4496]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:34.859382 sshd[4496]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:34.860445 sshd[4496]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:34.873733 sshd[4471]: Connection closed by authenticating user root 47.251.77.219 port 56596 [preauth]
Sep 13 02:33:34.876138 systemd[1]: sshd@107-145.40.90.231:22-47.251.77.219:56596.service: Deactivated successfully.
Sep 13 02:33:35.135199 systemd[1]: Started sshd@114-145.40.90.231:22-47.251.77.219:56670.service.
Sep 13 02:33:35.167000 sshd[4500]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:35.303552 sshd[4465]: Connection closed by invalid user lighthouse 47.251.77.219 port 56568 [preauth]
Sep 13 02:33:35.306275 systemd[1]: sshd@105-145.40.90.231:22-47.251.77.219:56568.service: Deactivated successfully.
Sep 13 02:33:35.449307 sshd[4479]: Failed password for root from 47.251.77.219 port 56614 ssh2
Sep 13 02:33:35.510871 sshd[4479]: Connection closed by authenticating user root 47.251.77.219 port 56614 [preauth]
Sep 13 02:33:35.513523 systemd[1]: sshd@109-145.40.90.231:22-47.251.77.219:56614.service: Deactivated successfully.
Sep 13 02:33:35.740599 sshd[4483]: Failed password for invalid user svnuser from 47.251.77.219 port 56626 ssh2
Sep 13 02:33:35.764348 systemd[1]: Started sshd@115-145.40.90.231:22-47.251.77.219:56688.service.
Sep 13 02:33:35.790012 sshd[4505]: Invalid user deploy from 47.251.77.219 port 56688
Sep 13 02:33:35.797929 sshd[4505]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:35.799167 sshd[4505]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:35.799264 sshd[4505]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:35.800265 sshd[4505]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:35.839997 sshd[4491]: Failed password for invalid user ubuntu from 47.251.77.219 port 56644 ssh2
Sep 13 02:33:36.056742 sshd[4486]: Failed password for invalid user ftpuser from 47.251.77.219 port 56642 ssh2
Sep 13 02:33:36.077630 systemd[1]: Started sshd@116-145.40.90.231:22-47.251.77.219:56692.service.
Sep 13 02:33:36.110617 sshd[4486]: Connection closed by invalid user ftpuser 47.251.77.219 port 56642 [preauth]
Sep 13 02:33:36.113084 systemd[1]: sshd@111-145.40.90.231:22-47.251.77.219:56642.service: Deactivated successfully.
Sep 13 02:33:36.129745 sshd[4508]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:36.396664 systemd[1]: Started sshd@117-145.40.90.231:22-47.251.77.219:56708.service.
Sep 13 02:33:36.428606 sshd[4512]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:36.433607 sshd[4491]: Connection closed by invalid user ubuntu 47.251.77.219 port 56644 [preauth]
Sep 13 02:33:36.436065 systemd[1]: sshd@112-145.40.90.231:22-47.251.77.219:56644.service: Deactivated successfully.
Sep 13 02:33:36.469641 sshd[4496]: Failed password for invalid user esadmin from 47.251.77.219 port 56660 ssh2
Sep 13 02:33:36.579079 sshd[4500]: Failed password for root from 47.251.77.219 port 56670 ssh2
Sep 13 02:33:36.711979 systemd[1]: Started sshd@118-145.40.90.231:22-47.251.77.219:56718.service.
Sep 13 02:33:36.736333 sshd[4516]: Invalid user oracle from 47.251.77.219 port 56718
Sep 13 02:33:36.745661 sshd[4516]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:36.745969 sshd[4516]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:36.745999 sshd[4516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:36.746318 sshd[4516]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:37.021039 systemd[1]: Started sshd@119-145.40.90.231:22-47.251.77.219:56728.service.
Sep 13 02:33:37.039792 sshd[4522]: Invalid user rabbitmq from 47.251.77.219 port 56728
Sep 13 02:33:37.049207 sshd[4522]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:37.049576 sshd[4522]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:37.049601 sshd[4522]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:37.049870 sshd[4522]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:37.140604 sshd[4496]: Connection closed by invalid user esadmin 47.251.77.219 port 56660 [preauth]
Sep 13 02:33:37.143260 systemd[1]: sshd@113-145.40.90.231:22-47.251.77.219:56660.service: Deactivated successfully.
Sep 13 02:33:37.213776 sshd[4505]: Failed password for invalid user deploy from 47.251.77.219 port 56688 ssh2
Sep 13 02:33:37.336934 systemd[1]: Started sshd@120-145.40.90.231:22-47.251.77.219:56732.service.
Sep 13 02:33:37.365628 sshd[4526]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:37.369938 sshd[4500]: Connection closed by authenticating user root 47.251.77.219 port 56670 [preauth]
Sep 13 02:33:37.370611 systemd[1]: sshd@114-145.40.90.231:22-47.251.77.219:56670.service: Deactivated successfully.
Sep 13 02:33:37.539439 sshd[4483]: Connection closed by invalid user svnuser 47.251.77.219 port 56626 [preauth]
Sep 13 02:33:37.541996 systemd[1]: sshd@110-145.40.90.231:22-47.251.77.219:56626.service: Deactivated successfully.
Sep 13 02:33:37.646983 systemd[1]: Started sshd@121-145.40.90.231:22-47.251.77.219:56738.service.
Sep 13 02:33:37.682052 sshd[4531]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:37.756024 sshd[4505]: Connection closed by invalid user deploy 47.251.77.219 port 56688 [preauth]
Sep 13 02:33:37.758661 systemd[1]: sshd@115-145.40.90.231:22-47.251.77.219:56688.service: Deactivated successfully.
Sep 13 02:33:38.014694 sshd[4508]: Failed password for root from 47.251.77.219 port 56692 ssh2
Sep 13 02:33:38.312753 sshd[4512]: Failed password for root from 47.251.77.219 port 56708 ssh2
Sep 13 02:33:38.331772 sshd[4508]: Connection closed by authenticating user root 47.251.77.219 port 56692 [preauth]
Sep 13 02:33:38.334494 systemd[1]: sshd@116-145.40.90.231:22-47.251.77.219:56692.service: Deactivated successfully.
Sep 13 02:33:38.586505 systemd[1]: Started sshd@122-145.40.90.231:22-47.251.77.219:56744.service.
Sep 13 02:33:38.605530 sshd[4543]: Invalid user wang from 47.251.77.219 port 56744
Sep 13 02:33:38.616257 sshd[4543]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:38.616572 sshd[4543]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:38.616598 sshd[4543]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:38.616883 sshd[4543]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:38.630512 sshd[4516]: Failed password for invalid user oracle from 47.251.77.219 port 56718 ssh2
Sep 13 02:33:38.631420 sshd[4512]: Connection closed by authenticating user root 47.251.77.219 port 56708 [preauth]
Sep 13 02:33:38.632419 systemd[1]: sshd@117-145.40.90.231:22-47.251.77.219:56708.service: Deactivated successfully.
Sep 13 02:33:38.904295 systemd[1]: Started sshd@123-145.40.90.231:22-47.251.77.219:56758.service.
Sep 13 02:33:38.926687 sshd[4547]: Invalid user hadoop from 47.251.77.219 port 56758
Sep 13 02:33:38.937226 sshd[4547]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:38.938347 sshd[4547]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:38.938464 sshd[4547]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:38.939444 sshd[4547]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:39.737839 sshd[4522]: Failed password for invalid user rabbitmq from 47.251.77.219 port 56728 ssh2
Sep 13 02:33:39.842245 systemd[1]: Started sshd@124-145.40.90.231:22-47.251.77.219:46892.service.
Sep 13 02:33:39.868141 sshd[4550]: Invalid user ftp from 47.251.77.219 port 46892
Sep 13 02:33:39.876067 sshd[4550]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:39.876364 sshd[4550]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:39.876388 sshd[4550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:39.876646 sshd[4550]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:40.053058 sshd[4526]: Failed password for root from 47.251.77.219 port 56732 ssh2
Sep 13 02:33:40.370410 sshd[4531]: Failed password for root from 47.251.77.219 port 56738 ssh2
Sep 13 02:33:40.380897 sshd[4516]: Connection closed by invalid user oracle 47.251.77.219 port 56718 [preauth]
Sep 13 02:33:40.383564 systemd[1]: sshd@118-145.40.90.231:22-47.251.77.219:56718.service: Deactivated successfully.
Sep 13 02:33:40.901281 sshd[4522]: Connection closed by invalid user rabbitmq 47.251.77.219 port 56728 [preauth]
Sep 13 02:33:40.903927 systemd[1]: sshd@119-145.40.90.231:22-47.251.77.219:56728.service: Deactivated successfully.
Sep 13 02:33:41.109291 sshd[4543]: Failed password for invalid user wang from 47.251.77.219 port 56744 ssh2
Sep 13 02:33:41.431605 sshd[4547]: Failed password for invalid user hadoop from 47.251.77.219 port 56758 ssh2
Sep 13 02:33:41.443348 systemd[1]: Started sshd@125-145.40.90.231:22-47.251.77.219:46938.service.
Sep 13 02:33:41.464547 sshd[4555]: Invalid user yarn from 47.251.77.219 port 46938
Sep 13 02:33:41.477124 sshd[4555]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:41.477470 sshd[4555]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:41.477496 sshd[4555]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:41.477795 sshd[4555]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:41.760928 systemd[1]: Started sshd@126-145.40.90.231:22-47.251.77.219:46944.service.
Sep 13 02:33:41.767418 sshd[4526]: Connection closed by authenticating user root 47.251.77.219 port 56732 [preauth]
Sep 13 02:33:41.767990 systemd[1]: sshd@120-145.40.90.231:22-47.251.77.219:56732.service: Deactivated successfully.
Sep 13 02:33:41.783300 sshd[4558]: Invalid user test2 from 47.251.77.219 port 46944
Sep 13 02:33:41.792109 sshd[4558]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:41.792394 sshd[4558]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:41.792417 sshd[4558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:41.792632 sshd[4558]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:42.065276 systemd[1]: Started sshd@127-145.40.90.231:22-47.251.77.219:46960.service.
Sep 13 02:33:42.085014 sshd[4531]: Connection closed by authenticating user root 47.251.77.219 port 56738 [preauth]
Sep 13 02:33:42.085580 systemd[1]: sshd@121-145.40.90.231:22-47.251.77.219:56738.service: Deactivated successfully.
Sep 13 02:33:42.088771 sshd[4562]: Invalid user oracle from 47.251.77.219 port 46960
Sep 13 02:33:42.097871 sshd[4562]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:42.098118 sshd[4562]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:42.098137 sshd[4562]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:42.098327 sshd[4562]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:42.173110 sshd[4550]: Failed password for invalid user ftp from 47.251.77.219 port 46892 ssh2
Sep 13 02:33:42.606525 sshd[4550]: Connection closed by invalid user ftp 47.251.77.219 port 46892 [preauth]
Sep 13 02:33:42.609112 systemd[1]: sshd@124-145.40.90.231:22-47.251.77.219:46892.service: Deactivated successfully.
Sep 13 02:33:42.690124 systemd[1]: Started sshd@128-145.40.90.231:22-47.251.77.219:46988.service.
Sep 13 02:33:42.709766 sshd[4567]: Invalid user wang from 47.251.77.219 port 46988
Sep 13 02:33:42.719946 sshd[4567]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:42.720198 sshd[4567]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:42.720219 sshd[4567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:42.720474 sshd[4567]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:42.809433 sshd[4547]: Connection closed by invalid user hadoop 47.251.77.219 port 56758 [preauth]
Sep 13 02:33:42.810232 systemd[1]: sshd@123-145.40.90.231:22-47.251.77.219:56758.service: Deactivated successfully.
Sep 13 02:33:43.168011 sshd[4543]: Connection closed by invalid user wang 47.251.77.219 port 56744 [preauth]
Sep 13 02:33:43.170716 systemd[1]: sshd@122-145.40.90.231:22-47.251.77.219:56744.service: Deactivated successfully.
Sep 13 02:33:43.713613 sshd[4555]: Failed password for invalid user yarn from 47.251.77.219 port 46938 ssh2
Sep 13 02:33:43.732046 sshd[4555]: Connection closed by invalid user yarn 47.251.77.219 port 46938 [preauth]
Sep 13 02:33:43.734659 systemd[1]: sshd@125-145.40.90.231:22-47.251.77.219:46938.service: Deactivated successfully.
Sep 13 02:33:43.977468 systemd[1]: Started sshd@129-145.40.90.231:22-47.251.77.219:47022.service.
Sep 13 02:33:43.998571 sshd[4573]: Invalid user app from 47.251.77.219 port 47022
Sep 13 02:33:44.008643 sshd[4573]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:44.008950 sshd[4573]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:44.008975 sshd[4573]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:44.009266 sshd[4573]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:44.029074 sshd[4558]: Failed password for invalid user test2 from 47.251.77.219 port 46944 ssh2
Sep 13 02:33:44.138978 sshd[4562]: Failed password for invalid user oracle from 47.251.77.219 port 46960 ssh2
Sep 13 02:33:44.597418 systemd[1]: Started sshd@130-145.40.90.231:22-47.251.77.219:47030.service.
Sep 13 02:33:44.624529 sshd[4576]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219 user=root
Sep 13 02:33:44.760611 sshd[4567]: Failed password for invalid user wang from 47.251.77.219 port 46988 ssh2
Sep 13 02:33:44.996662 sshd[4567]: Connection closed by invalid user wang 47.251.77.219 port 46988 [preauth]
Sep 13 02:33:44.999269 systemd[1]: sshd@128-145.40.90.231:22-47.251.77.219:46988.service: Deactivated successfully.
Sep 13 02:33:45.216312 systemd[1]: Started sshd@131-145.40.90.231:22-47.251.77.219:47050.service.
Sep 13 02:33:45.235117 sshd[4580]: Invalid user es from 47.251.77.219 port 47050
Sep 13 02:33:45.243307 sshd[4580]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:45.243594 sshd[4580]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:45.243618 sshd[4580]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:45.243891 sshd[4580]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:45.361502 sshd[4558]: Connection closed by invalid user test2 47.251.77.219 port 46944 [preauth]
Sep 13 02:33:45.364031 systemd[1]: sshd@126-145.40.90.231:22-47.251.77.219:46944.service: Deactivated successfully.
Sep 13 02:33:45.532043 systemd[1]: Started sshd@132-145.40.90.231:22-47.251.77.219:47066.service.
Sep 13 02:33:45.551865 sshd[4584]: Invalid user sugi from 47.251.77.219 port 47066
Sep 13 02:33:45.563263 sshd[4584]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:45.563625 sshd[4584]: pam_unix(sshd:auth): check pass; user unknown
Sep 13 02:33:45.563657 sshd[4584]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=47.251.77.219
Sep 13 02:33:45.564012 sshd[4584]: pam_faillock(sshd:auth): User unknown
Sep 13 02:33:45.658394 sshd[4573]: Failed password for invalid user app from 47.251.77.219 port 47022 ssh2
Sep 13 02:33:45.735707 sshd[4562]: Connection closed by invalid user oracle 47.251.77.219 port 46960 [preauth]
Sep 13 02:33:45.738294 systemd[1]: sshd@127-145.40.90.231:22-47.251.77.219:46960.service: Deactivated successfully.
Sep 13 02:33:45.973117 sshd[4573]: Connection closed by invalid user app 47.251.77.219 port 47022 [preauth]
Sep 13 02:33:45.975684 systemd[1]: sshd@129-145.40.90.231:22-47.251.77.219:47022.service: Deactivated successfully.
Sep 13 02:33:46.273298 sshd[4576]: Failed password for root from 47.251.77.219 port 47030 ssh2
Sep 13 02:33:46.827067 sshd[4576]: Connection closed by authenticating user root 47.251.77.219 port 47030 [preauth]
Sep 13 02:33:46.828091 systemd[1]: sshd@130-145.40.90.231:22-47.251.77.219:47030.service: Deactivated successfully.
Sep 13 02:33:47.364620 sshd[4580]: Failed password for invalid user es from 47.251.77.219 port 47050 ssh2
Sep 13 02:33:47.684609 sshd[4584]: Failed password for invalid user sugi from 47.251.77.219 port 47066 ssh2
Sep 13 02:33:47.925323 sshd[4584]: Connection closed by invalid user sugi 47.251.77.219 port 47066 [preauth]
Sep 13 02:33:47.926055 systemd[1]: sshd@132-145.40.90.231:22-47.251.77.219:47066.service: Deactivated successfully.
Sep 13 02:33:48.544827 sshd[4580]: Connection closed by invalid user es 47.251.77.219 port 47050 [preauth]
Sep 13 02:33:48.547543 systemd[1]: sshd@131-145.40.90.231:22-47.251.77.219:47050.service: Deactivated successfully.
Sep 13 02:38:35.701016 systemd[1]: Started sshd@133-145.40.90.231:22-139.178.89.65:53610.service.
Sep 13 02:38:35.759990 sshd[4625]: Accepted publickey for core from 139.178.89.65 port 53610 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:35.762473 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:35.770993 systemd-logind[1559]: New session 10 of user core.
Sep 13 02:38:35.772868 systemd[1]: Started session-10.scope.
Sep 13 02:38:35.936348 sshd[4625]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:35.937964 systemd[1]: sshd@133-145.40.90.231:22-139.178.89.65:53610.service: Deactivated successfully.
Sep 13 02:38:35.938419 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 02:38:35.938850 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
Sep 13 02:38:35.939271 systemd-logind[1559]: Removed session 10.
Sep 13 02:38:40.946334 systemd[1]: Started sshd@134-145.40.90.231:22-139.178.89.65:34508.service.
Sep 13 02:38:40.973887 sshd[4657]: Accepted publickey for core from 139.178.89.65 port 34508 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:40.974851 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:40.978216 systemd-logind[1559]: New session 11 of user core.
Sep 13 02:38:40.979025 systemd[1]: Started session-11.scope.
Sep 13 02:38:41.107753 sshd[4657]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:41.109123 systemd[1]: sshd@134-145.40.90.231:22-139.178.89.65:34508.service: Deactivated successfully.
Sep 13 02:38:41.109607 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 02:38:41.109989 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
Sep 13 02:38:41.110409 systemd-logind[1559]: Removed session 11.
Sep 13 02:38:46.121769 systemd[1]: Started sshd@135-145.40.90.231:22-139.178.89.65:34518.service.
Sep 13 02:38:46.153213 sshd[4689]: Accepted publickey for core from 139.178.89.65 port 34518 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:46.156642 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:46.167870 systemd-logind[1559]: New session 12 of user core.
Sep 13 02:38:46.171832 systemd[1]: Started session-12.scope.
Sep 13 02:38:46.275183 sshd[4689]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:46.276736 systemd[1]: sshd@135-145.40.90.231:22-139.178.89.65:34518.service: Deactivated successfully.
Sep 13 02:38:46.277221 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 02:38:46.277602 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
Sep 13 02:38:46.278089 systemd-logind[1559]: Removed session 12.
Sep 13 02:38:51.286702 systemd[1]: Started sshd@136-145.40.90.231:22-139.178.89.65:54082.service.
Sep 13 02:38:51.317409 sshd[4715]: Accepted publickey for core from 139.178.89.65 port 54082 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:51.318279 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:51.321235 systemd-logind[1559]: New session 13 of user core.
Sep 13 02:38:51.321996 systemd[1]: Started session-13.scope.
Sep 13 02:38:51.408598 sshd[4715]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:51.410564 systemd[1]: sshd@136-145.40.90.231:22-139.178.89.65:54082.service: Deactivated successfully.
Sep 13 02:38:51.410919 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 02:38:51.411231 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
Sep 13 02:38:51.411852 systemd[1]: Started sshd@137-145.40.90.231:22-139.178.89.65:54094.service.
Sep 13 02:38:51.412229 systemd-logind[1559]: Removed session 13.
Sep 13 02:38:51.439421 sshd[4740]: Accepted publickey for core from 139.178.89.65 port 54094 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:51.440440 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:51.443844 systemd-logind[1559]: New session 14 of user core.
Sep 13 02:38:51.444647 systemd[1]: Started session-14.scope.
Sep 13 02:38:51.547909 sshd[4740]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:51.549918 systemd[1]: sshd@137-145.40.90.231:22-139.178.89.65:54094.service: Deactivated successfully.
Sep 13 02:38:51.550335 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 02:38:51.550724 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
Sep 13 02:38:51.551407 systemd[1]: Started sshd@138-145.40.90.231:22-139.178.89.65:54110.service.
Sep 13 02:38:51.551911 systemd-logind[1559]: Removed session 14.
Sep 13 02:38:51.579660 sshd[4765]: Accepted publickey for core from 139.178.89.65 port 54110 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:51.583236 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:51.594187 systemd-logind[1559]: New session 15 of user core.
Sep 13 02:38:51.596766 systemd[1]: Started session-15.scope.
Sep 13 02:38:51.756452 sshd[4765]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:51.758115 systemd[1]: sshd@138-145.40.90.231:22-139.178.89.65:54110.service: Deactivated successfully.
Sep 13 02:38:51.758571 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 02:38:51.759022 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
Sep 13 02:38:51.759683 systemd-logind[1559]: Removed session 15.
Sep 13 02:38:56.767196 systemd[1]: Started sshd@139-145.40.90.231:22-139.178.89.65:54122.service.
Sep 13 02:38:56.797894 sshd[4794]: Accepted publickey for core from 139.178.89.65 port 54122 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:38:56.798837 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:38:56.802200 systemd-logind[1559]: New session 16 of user core.
Sep 13 02:38:56.802906 systemd[1]: Started session-16.scope.
Sep 13 02:38:56.891350 sshd[4794]: pam_unix(sshd:session): session closed for user core
Sep 13 02:38:56.892790 systemd[1]: sshd@139-145.40.90.231:22-139.178.89.65:54122.service: Deactivated successfully.
Sep 13 02:38:56.893203 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 02:38:56.893644 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
Sep 13 02:38:56.894141 systemd-logind[1559]: Removed session 16.
Sep 13 02:39:01.901474 systemd[1]: Started sshd@140-145.40.90.231:22-139.178.89.65:49864.service.
Sep 13 02:39:01.928408 sshd[4820]: Accepted publickey for core from 139.178.89.65 port 49864 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:01.929185 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:01.932234 systemd-logind[1559]: New session 17 of user core.
Sep 13 02:39:01.932905 systemd[1]: Started session-17.scope.
Sep 13 02:39:02.020767 sshd[4820]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:02.023110 systemd[1]: sshd@140-145.40.90.231:22-139.178.89.65:49864.service: Deactivated successfully.
Sep 13 02:39:02.023508 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 02:39:02.023896 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
Sep 13 02:39:02.024565 systemd[1]: Started sshd@141-145.40.90.231:22-139.178.89.65:49870.service.
Sep 13 02:39:02.025107 systemd-logind[1559]: Removed session 17.
Sep 13 02:39:02.053244 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 49870 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:02.056746 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:02.067667 systemd-logind[1559]: New session 18 of user core.
Sep 13 02:39:02.070311 systemd[1]: Started session-18.scope.
Sep 13 02:39:02.223585 sshd[4845]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:02.225496 systemd[1]: sshd@141-145.40.90.231:22-139.178.89.65:49870.service: Deactivated successfully.
Sep 13 02:39:02.225817 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 02:39:02.226196 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Sep 13 02:39:02.226816 systemd[1]: Started sshd@142-145.40.90.231:22-139.178.89.65:49878.service.
Sep 13 02:39:02.227173 systemd-logind[1559]: Removed session 18.
Sep 13 02:39:02.254039 sshd[4867]: Accepted publickey for core from 139.178.89.65 port 49878 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:02.254770 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:02.257192 systemd-logind[1559]: New session 19 of user core.
Sep 13 02:39:02.257703 systemd[1]: Started session-19.scope.
Sep 13 02:39:03.007276 sshd[4867]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:03.009884 systemd[1]: sshd@142-145.40.90.231:22-139.178.89.65:49878.service: Deactivated successfully.
Sep 13 02:39:03.010457 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 02:39:03.010957 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Sep 13 02:39:03.012003 systemd[1]: Started sshd@143-145.40.90.231:22-139.178.89.65:49884.service.
Sep 13 02:39:03.012678 systemd-logind[1559]: Removed session 19.
Sep 13 02:39:03.045091 sshd[4898]: Accepted publickey for core from 139.178.89.65 port 49884 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:03.046204 sshd[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:03.049852 systemd-logind[1559]: New session 20 of user core.
Sep 13 02:39:03.050617 systemd[1]: Started session-20.scope.
Sep 13 02:39:03.231768 sshd[4898]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:03.234317 systemd[1]: sshd@143-145.40.90.231:22-139.178.89.65:49884.service: Deactivated successfully.
Sep 13 02:39:03.234733 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 02:39:03.235080 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Sep 13 02:39:03.235778 systemd[1]: Started sshd@144-145.40.90.231:22-139.178.89.65:49888.service.
Sep 13 02:39:03.236176 systemd-logind[1559]: Removed session 20.
Sep 13 02:39:03.264540 sshd[4923]: Accepted publickey for core from 139.178.89.65 port 49888 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:03.267785 sshd[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:03.278692 systemd-logind[1559]: New session 21 of user core.
Sep 13 02:39:03.281411 systemd[1]: Started session-21.scope.
Sep 13 02:39:03.425180 sshd[4923]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:03.426699 systemd[1]: sshd@144-145.40.90.231:22-139.178.89.65:49888.service: Deactivated successfully.
Sep 13 02:39:03.427126 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 02:39:03.427514 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Sep 13 02:39:03.428026 systemd-logind[1559]: Removed session 21.
Sep 13 02:39:08.434859 systemd[1]: Started sshd@145-145.40.90.231:22-139.178.89.65:49894.service.
Sep 13 02:39:08.462632 sshd[4950]: Accepted publickey for core from 139.178.89.65 port 49894 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:08.466433 sshd[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:08.472262 systemd-logind[1559]: New session 22 of user core.
Sep 13 02:39:08.472811 systemd[1]: Started session-22.scope.
Sep 13 02:39:08.553979 sshd[4950]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:08.555515 systemd[1]: sshd@145-145.40.90.231:22-139.178.89.65:49894.service: Deactivated successfully.
Sep 13 02:39:08.555920 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 02:39:08.556263 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Sep 13 02:39:08.556809 systemd-logind[1559]: Removed session 22.
Sep 13 02:39:13.563514 systemd[1]: Started sshd@146-145.40.90.231:22-139.178.89.65:38380.service.
Sep 13 02:39:13.590564 sshd[4974]: Accepted publickey for core from 139.178.89.65 port 38380 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:13.591508 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:13.594616 systemd-logind[1559]: New session 23 of user core.
Sep 13 02:39:13.595331 systemd[1]: Started session-23.scope.
Sep 13 02:39:13.675753 sshd[4974]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:13.677152 systemd[1]: sshd@146-145.40.90.231:22-139.178.89.65:38380.service: Deactivated successfully.
Sep 13 02:39:13.677574 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 02:39:13.677915 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit.
Sep 13 02:39:13.678322 systemd-logind[1559]: Removed session 23.
Sep 13 02:39:18.679576 systemd[1]: Started sshd@147-145.40.90.231:22-139.178.89.65:38382.service.
Sep 13 02:39:18.708518 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 38382 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:18.709338 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:18.712107 systemd-logind[1559]: New session 24 of user core.
Sep 13 02:39:18.712750 systemd[1]: Started session-24.scope.
Sep 13 02:39:18.806345 sshd[4998]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:18.808423 systemd[1]: sshd@147-145.40.90.231:22-139.178.89.65:38382.service: Deactivated successfully.
Sep 13 02:39:18.808824 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 02:39:18.809199 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Sep 13 02:39:18.809917 systemd[1]: Started sshd@148-145.40.90.231:22-139.178.89.65:38396.service.
Sep 13 02:39:18.810342 systemd-logind[1559]: Removed session 24.
Sep 13 02:39:18.838977 sshd[5022]: Accepted publickey for core from 139.178.89.65 port 38396 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:18.840080 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:18.843313 systemd-logind[1559]: New session 25 of user core.
Sep 13 02:39:18.844405 systemd[1]: Started session-25.scope.
Sep 13 02:39:20.186751 env[1567]: time="2025-09-13T02:39:20.186684138Z" level=info msg="StopContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" with timeout 30 (s)"
Sep 13 02:39:20.187549 env[1567]: time="2025-09-13T02:39:20.187166377Z" level=info msg="Stop container \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" with signal terminated"
Sep 13 02:39:20.202829 systemd[1]: cri-containerd-fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70.scope: Deactivated successfully.
Sep 13 02:39:20.222670 env[1567]: time="2025-09-13T02:39:20.222553691Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 02:39:20.229353 env[1567]: time="2025-09-13T02:39:20.229290159Z" level=info msg="shim disconnected" id=fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70
Sep 13 02:39:20.229579 env[1567]: time="2025-09-13T02:39:20.229372016Z" level=warning msg="cleaning up after shim disconnected" id=fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70 namespace=k8s.io
Sep 13 02:39:20.229579 env[1567]: time="2025-09-13T02:39:20.229398150Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:20.229495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70-rootfs.mount: Deactivated successfully.
Sep 13 02:39:20.229962 env[1567]: time="2025-09-13T02:39:20.229928970Z" level=info msg="StopContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" with timeout 2 (s)"
Sep 13 02:39:20.230199 env[1567]: time="2025-09-13T02:39:20.230173579Z" level=info msg="Stop container \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" with signal terminated"
Sep 13 02:39:20.236007 systemd-networkd[1319]: lxc_health: Link DOWN
Sep 13 02:39:20.236016 systemd-networkd[1319]: lxc_health: Lost carrier
Sep 13 02:39:20.236901 env[1567]: time="2025-09-13T02:39:20.236856976Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5081 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:20.237880 env[1567]: time="2025-09-13T02:39:20.237851382Z" level=info msg="StopContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" returns successfully"
Sep 13 02:39:20.238437 env[1567]: time="2025-09-13T02:39:20.238408464Z" level=info msg="StopPodSandbox for \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\""
Sep 13 02:39:20.238510 env[1567]: time="2025-09-13T02:39:20.238486614Z" level=info msg="Container to stop \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.241016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b-shm.mount: Deactivated successfully.
Sep 13 02:39:20.244802 systemd[1]: cri-containerd-865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b.scope: Deactivated successfully.
Sep 13 02:39:20.262742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b-rootfs.mount: Deactivated successfully.
Sep 13 02:39:20.262984 env[1567]: time="2025-09-13T02:39:20.262799195Z" level=info msg="shim disconnected" id=865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b
Sep 13 02:39:20.262984 env[1567]: time="2025-09-13T02:39:20.262858572Z" level=warning msg="cleaning up after shim disconnected" id=865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b namespace=k8s.io
Sep 13 02:39:20.262984 env[1567]: time="2025-09-13T02:39:20.262879456Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:20.269758 env[1567]: time="2025-09-13T02:39:20.269703049Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5118 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:20.270075 env[1567]: time="2025-09-13T02:39:20.270027661Z" level=info msg="TearDown network for sandbox \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\" successfully"
Sep 13 02:39:20.270075 env[1567]: time="2025-09-13T02:39:20.270052323Z" level=info msg="StopPodSandbox for \"865240e1ec2943af1ad4aba69ab318722cbc402b603a837bda9ebbdfd90d796b\" returns successfully"
Sep 13 02:39:20.282979 kubelet[2462]: I0913 02:39:20.282940 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ebac4fd-ec62-4334-b637-3c8198928859-cilium-config-path\") pod \"1ebac4fd-ec62-4334-b637-3c8198928859\" (UID: \"1ebac4fd-ec62-4334-b637-3c8198928859\") "
Sep 13 02:39:20.283507 kubelet[2462]: I0913 02:39:20.283005 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx8q7\" (UniqueName: \"kubernetes.io/projected/1ebac4fd-ec62-4334-b637-3c8198928859-kube-api-access-lx8q7\") pod \"1ebac4fd-ec62-4334-b637-3c8198928859\" (UID: \"1ebac4fd-ec62-4334-b637-3c8198928859\") "
Sep 13 02:39:20.285338 kubelet[2462]: I0913 02:39:20.285280 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ebac4fd-ec62-4334-b637-3c8198928859-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1ebac4fd-ec62-4334-b637-3c8198928859" (UID: "1ebac4fd-ec62-4334-b637-3c8198928859"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 02:39:20.286045 kubelet[2462]: I0913 02:39:20.285988 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebac4fd-ec62-4334-b637-3c8198928859-kube-api-access-lx8q7" (OuterVolumeSpecName: "kube-api-access-lx8q7") pod "1ebac4fd-ec62-4334-b637-3c8198928859" (UID: "1ebac4fd-ec62-4334-b637-3c8198928859"). InnerVolumeSpecName "kube-api-access-lx8q7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 02:39:20.288009 systemd[1]: var-lib-kubelet-pods-1ebac4fd\x2dec62\x2d4334\x2db637\x2d3c8198928859-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlx8q7.mount: Deactivated successfully.
Sep 13 02:39:20.302781 systemd[1]: cri-containerd-263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50.scope: Deactivated successfully.
Sep 13 02:39:20.303067 systemd[1]: cri-containerd-263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50.scope: Consumed 6.622s CPU time.
Sep 13 02:39:20.321882 env[1567]: time="2025-09-13T02:39:20.321828308Z" level=info msg="shim disconnected" id=263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50
Sep 13 02:39:20.322083 env[1567]: time="2025-09-13T02:39:20.321885738Z" level=warning msg="cleaning up after shim disconnected" id=263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50 namespace=k8s.io
Sep 13 02:39:20.322083 env[1567]: time="2025-09-13T02:39:20.321903380Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:20.330089 env[1567]: time="2025-09-13T02:39:20.330046332Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5147 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:20.331424 env[1567]: time="2025-09-13T02:39:20.331326062Z" level=info msg="StopContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" returns successfully"
Sep 13 02:39:20.331902 env[1567]: time="2025-09-13T02:39:20.331836591Z" level=info msg="StopPodSandbox for \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\""
Sep 13 02:39:20.331999 env[1567]: time="2025-09-13T02:39:20.331908280Z" level=info msg="Container to stop \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.331999 env[1567]: time="2025-09-13T02:39:20.331936435Z" level=info msg="Container to stop \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.331999 env[1567]: time="2025-09-13T02:39:20.331953785Z" level=info msg="Container to stop \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.331999 env[1567]: time="2025-09-13T02:39:20.331968628Z" level=info msg="Container to stop \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.331999 env[1567]: time="2025-09-13T02:39:20.331983007Z" level=info msg="Container to stop \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 02:39:20.338988 systemd[1]: cri-containerd-307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9.scope: Deactivated successfully.
Sep 13 02:39:20.383553 kubelet[2462]: I0913 02:39:20.383463 2462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lx8q7\" (UniqueName: \"kubernetes.io/projected/1ebac4fd-ec62-4334-b637-3c8198928859-kube-api-access-lx8q7\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.383553 kubelet[2462]: I0913 02:39:20.383518 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1ebac4fd-ec62-4334-b637-3c8198928859-cilium-config-path\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.384147 env[1567]: time="2025-09-13T02:39:20.384057442Z" level=info msg="shim disconnected" id=307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9
Sep 13 02:39:20.384347 env[1567]: time="2025-09-13T02:39:20.384155960Z" level=warning msg="cleaning up after shim disconnected" id=307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9 namespace=k8s.io
Sep 13 02:39:20.384347 env[1567]: time="2025-09-13T02:39:20.384190457Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:20.399692 env[1567]: time="2025-09-13T02:39:20.399608040Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5178 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:20.400462 env[1567]: time="2025-09-13T02:39:20.400393959Z" level=info msg="TearDown network for sandbox \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" successfully"
Sep 13 02:39:20.400701 env[1567]: time="2025-09-13T02:39:20.400455840Z" level=info msg="StopPodSandbox for \"307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9\" returns successfully"
Sep 13 02:39:20.484397 kubelet[2462]: I0913 02:39:20.484249 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-lib-modules\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.484397 kubelet[2462]: I0913 02:39:20.484295 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.484805 kubelet[2462]: I0913 02:39:20.484428 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cni-path\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.484805 kubelet[2462]: I0913 02:39:20.484496 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cni-path" (OuterVolumeSpecName: "cni-path") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.484805 kubelet[2462]: I0913 02:39:20.484659 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-etc-cni-netd\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.484805 kubelet[2462]: I0913 02:39:20.484738 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.484821 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-clustermesh-secrets\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.484947 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-xtables-lock\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.485042 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-config-path\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.485055 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.485185 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-bpf-maps\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485262 kubelet[2462]: I0913 02:39:20.485221 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.485924 kubelet[2462]: I0913 02:39:20.485283 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-kernel\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485924 kubelet[2462]: I0913 02:39:20.485429 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-cgroup\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485924 kubelet[2462]: I0913 02:39:20.485413 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.485924 kubelet[2462]: I0913 02:39:20.485539 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hubble-tls\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.485924 kubelet[2462]: I0913 02:39:20.485496 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485649 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hostproc\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485741 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-run\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485779 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hostproc" (OuterVolumeSpecName: "hostproc") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485850 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485915 2462 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-lib-modules\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.486493 kubelet[2462]: I0913 02:39:20.485990 2462 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cni-path\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.487133 kubelet[2462]: I0913 02:39:20.486045 2462 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-etc-cni-netd\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.487133 kubelet[2462]: I0913 02:39:20.486097 2462 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-xtables-lock\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.487133 kubelet[2462]: I0913 02:39:20.486147 2462 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-bpf-maps\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.487133 kubelet[2462]: I0913 02:39:20.486202 2462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.487133 kubelet[2462]: I0913 02:39:20.486260 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-cgroup\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.491072 kubelet[2462]: I0913 02:39:20.490959 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 02:39:20.491813 kubelet[2462]: I0913 02:39:20.491695 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 02:39:20.492711 kubelet[2462]: I0913 02:39:20.492611 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 02:39:20.587499 kubelet[2462]: I0913 02:39:20.587380 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56g6d\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-kube-api-access-56g6d\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.587499 kubelet[2462]: I0913 02:39:20.587477 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-net\") pod \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\" (UID: \"7bd1272c-6240-4c99-ac1f-a7e07a3d6d92\") "
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587576 2462 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hubble-tls\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587610 2462 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-hostproc\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587638 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-run\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587665 2462 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-clustermesh-secrets\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587694 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-cilium-config-path\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.587938 kubelet[2462]: I0913 02:39:20.587694 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:20.594289 kubelet[2462]: I0913 02:39:20.594177 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-kube-api-access-56g6d" (OuterVolumeSpecName: "kube-api-access-56g6d") pod "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" (UID: "7bd1272c-6240-4c99-ac1f-a7e07a3d6d92"). InnerVolumeSpecName "kube-api-access-56g6d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 02:39:20.688061 kubelet[2462]: I0913 02:39:20.687944 2462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-56g6d\" (UniqueName: \"kubernetes.io/projected/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-kube-api-access-56g6d\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.688061 kubelet[2462]: I0913 02:39:20.688018 2462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92-host-proc-sys-net\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:20.911206 kubelet[2462]: I0913 02:39:20.911031 2462 scope.go:117] "RemoveContainer" containerID="263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50"
Sep 13 02:39:20.914158 env[1567]: time="2025-09-13T02:39:20.914071257Z" level=info msg="RemoveContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\""
Sep 13 02:39:20.919523 env[1567]: time="2025-09-13T02:39:20.919446373Z" level=info msg="RemoveContainer for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" returns successfully"
Sep 13 02:39:20.920074 kubelet[2462]: I0913 02:39:20.919992 2462 scope.go:117] "RemoveContainer" containerID="2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571"
Sep 13 02:39:20.922668 env[1567]: time="2025-09-13T02:39:20.922586569Z" level=info msg="RemoveContainer for \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\""
Sep 13 02:39:20.923218 systemd[1]: Removed slice kubepods-burstable-pod7bd1272c_6240_4c99_ac1f_a7e07a3d6d92.slice.
Sep 13 02:39:20.923573 systemd[1]: kubepods-burstable-pod7bd1272c_6240_4c99_ac1f_a7e07a3d6d92.slice: Consumed 6.711s CPU time.
Sep 13 02:39:20.926702 systemd[1]: Removed slice kubepods-besteffort-pod1ebac4fd_ec62_4334_b637_3c8198928859.slice.
Sep 13 02:39:20.927550 env[1567]: time="2025-09-13T02:39:20.927474182Z" level=info msg="RemoveContainer for \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\" returns successfully"
Sep 13 02:39:20.927851 kubelet[2462]: I0913 02:39:20.927806 2462 scope.go:117] "RemoveContainer" containerID="208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168"
Sep 13 02:39:20.930276 env[1567]: time="2025-09-13T02:39:20.930199635Z" level=info msg="RemoveContainer for \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\""
Sep 13 02:39:20.935889 env[1567]: time="2025-09-13T02:39:20.935806208Z" level=info msg="RemoveContainer for \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\" returns successfully"
Sep 13 02:39:20.936260 kubelet[2462]: I0913 02:39:20.936213 2462 scope.go:117] "RemoveContainer" containerID="199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2"
Sep 13 02:39:20.938892 env[1567]: time="2025-09-13T02:39:20.938796102Z" level=info msg="RemoveContainer for \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\""
Sep 13 02:39:20.943111 env[1567]: time="2025-09-13T02:39:20.942992223Z" level=info msg="RemoveContainer for \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\" returns successfully"
Sep 13 02:39:20.943453 kubelet[2462]: I0913 02:39:20.943392 2462 scope.go:117] "RemoveContainer" containerID="4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319"
Sep 13 02:39:20.946082 env[1567]: time="2025-09-13T02:39:20.945971536Z" level=info msg="RemoveContainer for \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\""
Sep 13 02:39:20.950553 env[1567]: time="2025-09-13T02:39:20.950454319Z" level=info msg="RemoveContainer for \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\" returns successfully"
Sep 13 02:39:20.950873 kubelet[2462]: I0913 02:39:20.950819 2462 scope.go:117] "RemoveContainer" containerID="263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50"
Sep 13 02:39:20.951516 env[1567]: time="2025-09-13T02:39:20.951301865Z" level=error msg="ContainerStatus for \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\": not found"
Sep 13 02:39:20.951792 kubelet[2462]: E0913 02:39:20.951724 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\": not found" containerID="263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50"
Sep 13 02:39:20.951976 kubelet[2462]: I0913 02:39:20.951806 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50"} err="failed to get container status \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\": rpc error: code = NotFound desc = an error occurred when try to find container \"263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50\": not found"
Sep 13 02:39:20.951976 kubelet[2462]: I0913 02:39:20.951906 2462 scope.go:117] "RemoveContainer" containerID="2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571"
Sep 13 02:39:20.952597 env[1567]: time="2025-09-13T02:39:20.952408357Z" level=error msg="ContainerStatus for \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\": not found"
Sep 13 02:39:20.952970 kubelet[2462]: E0913 02:39:20.952888 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\": not found" containerID="2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571"
Sep 13 02:39:20.953138 kubelet[2462]: I0913 02:39:20.952965 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571"} err="failed to get container status \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ef04dac455cb708c56616563dd437310229f8f03ee59a4b99b71d9d31639571\": not found"
Sep 13 02:39:20.953138 kubelet[2462]: I0913 02:39:20.953042 2462 scope.go:117] "RemoveContainer" containerID="208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168"
Sep 13 02:39:20.953788 env[1567]: time="2025-09-13T02:39:20.953582176Z" level=error msg="ContainerStatus for \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\": not found"
Sep 13 02:39:20.954135 kubelet[2462]: E0913 02:39:20.954074 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\": not found" containerID="208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168"
Sep 13 02:39:20.954393 kubelet[2462]: I0913 02:39:20.954163 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168"} err="failed to get container status \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\": rpc error: code = NotFound desc = an error occurred when try to find container \"208aae558e4fe9b8544c66c27dc2aad7bd263881b71f5dfbb47ab0d86d9ae168\": not found"
Sep 13 02:39:20.954393 kubelet[2462]: I0913 02:39:20.954225 2462 scope.go:117] "RemoveContainer" containerID="199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2"
Sep 13 02:39:20.954897 env[1567]: time="2025-09-13T02:39:20.954754926Z" level=error msg="ContainerStatus for \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\": not found"
Sep 13 02:39:20.955152 kubelet[2462]: E0913 02:39:20.955105 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\": not found" containerID="199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2"
Sep 13 02:39:20.955301 kubelet[2462]: I0913 02:39:20.955166 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2"} err="failed to get container status \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"199bcc57e65c056b93be5b5206020bb10d4ba4ba5328bc2033c0d92cefc323c2\": not found"
Sep 13 02:39:20.955301 kubelet[2462]: I0913 02:39:20.955208 2462 scope.go:117] "RemoveContainer" containerID="4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319"
Sep 13 02:39:20.955859 env[1567]: time="2025-09-13T02:39:20.955703898Z" level=error msg="ContainerStatus for \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\": not found"
Sep 13 02:39:20.956121 kubelet[2462]: E0913 02:39:20.956072 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\": not found" containerID="4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319"
Sep 13 02:39:20.956241 kubelet[2462]: I0913 02:39:20.956149 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319"} err="failed to get container status \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cfc6533e0c04a330c9e2f821c0a95ecef858c4c2aa659c1f872644ef44c9319\": not found"
Sep 13 02:39:20.956241 kubelet[2462]: I0913 02:39:20.956196 2462 scope.go:117] "RemoveContainer" containerID="fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70"
Sep 13 02:39:20.958705 env[1567]: time="2025-09-13T02:39:20.958620650Z" level=info msg="RemoveContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\""
Sep 13 02:39:20.962924 env[1567]: time="2025-09-13T02:39:20.962846959Z" level=info msg="RemoveContainer for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" returns successfully"
Sep 13 02:39:20.963481 kubelet[2462]: I0913 02:39:20.963406 2462 scope.go:117] "RemoveContainer" containerID="fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70"
Sep 13 02:39:20.964073 env[1567]: time="2025-09-13T02:39:20.963925003Z" level=error msg="ContainerStatus for \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\": not found"
Sep 13 02:39:20.964379 kubelet[2462]: E0913 02:39:20.964315 2462 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\": not found" containerID="fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70"
Sep 13 02:39:20.964550 kubelet[2462]: I0913 02:39:20.964403 2462 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70"} err="failed to get container status \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa28a2e1798479e140cee7028cf323c8c43dfb2e86052c2765e34bfa5e876d70\": not found"
Sep 13 02:39:21.203785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-263d8e3f01c07dfe0ff813f14402cb7ac5f873a4e53011d0d1f96063a6dc9e50-rootfs.mount: Deactivated successfully.
Sep 13 02:39:21.203859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9-rootfs.mount: Deactivated successfully.
Sep 13 02:39:21.203904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-307fa13cbc349ea7cf503eae9d6e2575329f2f78cb75c1a82ddaa97cde0a4fb9-shm.mount: Deactivated successfully.
Sep 13 02:39:21.203956 systemd[1]: var-lib-kubelet-pods-7bd1272c\x2d6240\x2d4c99\x2dac1f\x2da7e07a3d6d92-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d56g6d.mount: Deactivated successfully.
Sep 13 02:39:21.204004 systemd[1]: var-lib-kubelet-pods-7bd1272c\x2d6240\x2d4c99\x2dac1f\x2da7e07a3d6d92-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 02:39:21.204055 systemd[1]: var-lib-kubelet-pods-7bd1272c\x2d6240\x2d4c99\x2dac1f\x2da7e07a3d6d92-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 02:39:21.692800 kubelet[2462]: I0913 02:39:21.692763 2462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebac4fd-ec62-4334-b637-3c8198928859" path="/var/lib/kubelet/pods/1ebac4fd-ec62-4334-b637-3c8198928859/volumes"
Sep 13 02:39:21.693298 kubelet[2462]: I0913 02:39:21.693280 2462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd1272c-6240-4c99-ac1f-a7e07a3d6d92" path="/var/lib/kubelet/pods/7bd1272c-6240-4c99-ac1f-a7e07a3d6d92/volumes"
Sep 13 02:39:22.162342 sshd[5022]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:22.164068 systemd[1]: sshd@148-145.40.90.231:22-139.178.89.65:38396.service: Deactivated successfully.
Sep 13 02:39:22.164459 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 02:39:22.164868 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit.
Sep 13 02:39:22.165556 systemd[1]: Started sshd@149-145.40.90.231:22-139.178.89.65:40032.service.
Sep 13 02:39:22.166016 systemd-logind[1559]: Removed session 25.
Sep 13 02:39:22.193691 sshd[5195]: Accepted publickey for core from 139.178.89.65 port 40032 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:22.197160 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:22.207441 systemd-logind[1559]: New session 26 of user core.
Sep 13 02:39:22.210885 systemd[1]: Started session-26.scope.
Sep 13 02:39:22.586781 sshd[5195]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:22.596463 systemd[1]: sshd@149-145.40.90.231:22-139.178.89.65:40032.service: Deactivated successfully.
Sep 13 02:39:22.598323 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 02:39:22.600454 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit.
Sep 13 02:39:22.604185 systemd[1]: Started sshd@150-145.40.90.231:22-139.178.89.65:40046.service.
Sep 13 02:39:22.608068 systemd-logind[1559]: Removed session 26.
Sep 13 02:39:22.622823 systemd[1]: Created slice kubepods-burstable-podc1fa7622_664b_4b73_b92b_5389f467f5a7.slice.
Sep 13 02:39:22.645797 sshd[5218]: Accepted publickey for core from 139.178.89.65 port 40046 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:22.646880 sshd[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:22.649998 systemd-logind[1559]: New session 27 of user core.
Sep 13 02:39:22.650653 systemd[1]: Started session-27.scope.
Sep 13 02:39:22.703832 kubelet[2462]: I0913 02:39:22.703704 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cni-path\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.703832 kubelet[2462]: I0913 02:39:22.703804 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-lib-modules\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.703854 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-xtables-lock\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.703898 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-run\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.703946 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-cgroup\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.703992 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxnq\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-kube-api-access-xsxnq\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.704043 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-clustermesh-secrets\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.704798 kubelet[2462]: I0913 02:39:22.704088 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-net\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704135 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-ipsec-secrets\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704180 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-kernel\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704227 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-bpf-maps\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704270 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-hostproc\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704316 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-hubble-tls\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.705459 kubelet[2462]: I0913 02:39:22.704377 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-config-path\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.706407 kubelet[2462]: I0913 02:39:22.704510 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-etc-cni-netd\") pod \"cilium-hlmd7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") " pod="kube-system/cilium-hlmd7"
Sep 13 02:39:22.790552 systemd[1]: Started sshd@151-145.40.90.231:22-139.178.89.65:40062.service.
Sep 13 02:39:22.791344 sshd[5218]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:22.793122 systemd[1]: sshd@150-145.40.90.231:22-139.178.89.65:40046.service: Deactivated successfully.
Sep 13 02:39:22.793568 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 02:39:22.793902 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit.
Sep 13 02:39:22.794302 systemd-logind[1559]: Removed session 27.
Sep 13 02:39:22.796877 kubelet[2462]: E0913 02:39:22.796849 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-xsxnq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-hlmd7" podUID="c1fa7622-664b-4b73-b92b-5389f467f5a7"
Sep 13 02:39:22.814168 kubelet[2462]: E0913 02:39:22.814147 2462 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 02:39:22.817526 sshd[5242]: Accepted publickey for core from 139.178.89.65 port 40062 ssh2: RSA SHA256:NXAhTYqk+AK0kb7vgLrOn5RR7PIJmqdshx8rZ3PsnQM
Sep 13 02:39:22.818260 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 02:39:22.820586 systemd-logind[1559]: New session 28 of user core.
Sep 13 02:39:22.821199 systemd[1]: Started session-28.scope.
Sep 13 02:39:23.008175 kubelet[2462]: I0913 02:39:23.008066 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-hostproc\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.008175 kubelet[2462]: I0913 02:39:23.008165 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-hubble-tls\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.008621 kubelet[2462]: I0913 02:39:23.008219 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cni-path\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.008621 kubelet[2462]: I0913 02:39:23.008203 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.008621 kubelet[2462]: I0913 02:39:23.008270 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-kernel\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.008621 kubelet[2462]: I0913 02:39:23.008328 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-lib-modules\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.008621 kubelet[2462]: I0913 02:39:23.008389 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-run\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009253 kubelet[2462]: I0913 02:39:23.008322 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.009253 kubelet[2462]: I0913 02:39:23.008441 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-ipsec-secrets\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009253 kubelet[2462]: I0913 02:39:23.008331 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.009253 kubelet[2462]: I0913 02:39:23.008463 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.009253 kubelet[2462]: I0913 02:39:23.008491 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-clustermesh-secrets\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008446 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008535 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-bpf-maps\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008592 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008638 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-etc-cni-netd\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008696 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-xtables-lock\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.009935 kubelet[2462]: I0913 02:39:23.008746 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-cgroup\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.010647 kubelet[2462]: I0913 02:39:23.008743 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.010647 kubelet[2462]: I0913 02:39:23.008790 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.010647 kubelet[2462]: I0913 02:39:23.008743 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.010647 kubelet[2462]: I0913 02:39:23.008792 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-net\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.010647 kubelet[2462]: I0913 02:39:23.008837 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.008899 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-config-path\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.008954 2462 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxnq\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-kube-api-access-xsxnq\") pod \"c1fa7622-664b-4b73-b92b-5389f467f5a7\" (UID: \"c1fa7622-664b-4b73-b92b-5389f467f5a7\") "
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.009048 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-cgroup\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.009096 2462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-net\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.009138 2462 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-hostproc\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.009167 2462 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cni-path\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.011159 kubelet[2462]: I0913 02:39:23.009194 2462 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.012045 kubelet[2462]: I0913 02:39:23.009223 2462 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-lib-modules\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.012045 kubelet[2462]: I0913 02:39:23.009252 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-run\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.012045 kubelet[2462]: I0913 02:39:23.009277 2462 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-bpf-maps\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.012045 kubelet[2462]: I0913 02:39:23.009303 2462 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-etc-cni-netd\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.012045 kubelet[2462]: I0913 02:39:23.009327 2462 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1fa7622-664b-4b73-b92b-5389f467f5a7-xtables-lock\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.013877 kubelet[2462]: I0913 02:39:23.013780 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 02:39:23.015321 kubelet[2462]: I0913 02:39:23.015213 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 02:39:23.015594 kubelet[2462]: I0913 02:39:23.015448 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 02:39:23.015972 kubelet[2462]: I0913 02:39:23.015863 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 02:39:23.016191 kubelet[2462]: I0913 02:39:23.016114 2462 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-kube-api-access-xsxnq" (OuterVolumeSpecName: "kube-api-access-xsxnq") pod "c1fa7622-664b-4b73-b92b-5389f467f5a7" (UID: "c1fa7622-664b-4b73-b92b-5389f467f5a7"). InnerVolumeSpecName "kube-api-access-xsxnq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 02:39:23.020247 systemd[1]: var-lib-kubelet-pods-c1fa7622\x2d664b\x2d4b73\x2db92b\x2d5389f467f5a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsxnq.mount: Deactivated successfully.
Sep 13 02:39:23.020563 systemd[1]: var-lib-kubelet-pods-c1fa7622\x2d664b\x2d4b73\x2db92b\x2d5389f467f5a7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 02:39:23.020758 systemd[1]: var-lib-kubelet-pods-c1fa7622\x2d664b\x2d4b73\x2db92b\x2d5389f467f5a7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 02:39:23.020939 systemd[1]: var-lib-kubelet-pods-c1fa7622\x2d664b\x2d4b73\x2db92b\x2d5389f467f5a7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 02:39:23.110080 kubelet[2462]: I0913 02:39:23.109960 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-config-path\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.110080 kubelet[2462]: I0913 02:39:23.110034 2462 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xsxnq\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-kube-api-access-xsxnq\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.110080 kubelet[2462]: I0913 02:39:23.110071 2462 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1fa7622-664b-4b73-b92b-5389f467f5a7-hubble-tls\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.110080 kubelet[2462]: I0913 02:39:23.110103 2462 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.110770 kubelet[2462]: I0913 02:39:23.110130 2462 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1fa7622-664b-4b73-b92b-5389f467f5a7-clustermesh-secrets\") on node \"ci-3510.3.8-n-6378d470a1\" DevicePath \"\""
Sep 13 02:39:23.690873 kubelet[2462]: E0913 02:39:23.690748 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wx4lh" podUID="b18ae966-57da-44ce-b7fb-02b27246857d"
Sep 13 02:39:23.700806 systemd[1]: Removed slice kubepods-burstable-podc1fa7622_664b_4b73_b92b_5389f467f5a7.slice.
Sep 13 02:39:24.007067 systemd[1]: Created slice kubepods-burstable-pod61dbf707_75c4_475a_a9ae_4e5e964cabc4.slice.
Sep 13 02:39:24.017797 kubelet[2462]: I0913 02:39:24.017728 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-hostproc\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.017797 kubelet[2462]: I0913 02:39:24.017778 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-cilium-cgroup\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.017811 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-cni-path\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.017841 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-host-proc-sys-kernel\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.017922 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-cilium-run\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.017987 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61dbf707-75c4-475a-a9ae-4e5e964cabc4-clustermesh-secrets\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.018041 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-etc-cni-netd\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018315 kubelet[2462]: I0913 02:39:24.018070 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-lib-modules\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018095 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61dbf707-75c4-475a-a9ae-4e5e964cabc4-hubble-tls\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018120 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzzs\" (UniqueName: \"kubernetes.io/projected/61dbf707-75c4-475a-a9ae-4e5e964cabc4-kube-api-access-4rzzs\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018147 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61dbf707-75c4-475a-a9ae-4e5e964cabc4-cilium-config-path\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018172 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-host-proc-sys-net\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018194 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-bpf-maps\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018665 kubelet[2462]: I0913 02:39:24.018220 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61dbf707-75c4-475a-a9ae-4e5e964cabc4-xtables-lock\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.018984 kubelet[2462]: I0913 02:39:24.018261 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/61dbf707-75c4-475a-a9ae-4e5e964cabc4-cilium-ipsec-secrets\") pod \"cilium-2pw4x\" (UID: \"61dbf707-75c4-475a-a9ae-4e5e964cabc4\") " pod="kube-system/cilium-2pw4x"
Sep 13 02:39:24.313448 env[1567]: time="2025-09-13T02:39:24.313201386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2pw4x,Uid:61dbf707-75c4-475a-a9ae-4e5e964cabc4,Namespace:kube-system,Attempt:0,}"
Sep 13 02:39:24.336020 env[1567]: time="2025-09-13T02:39:24.335850105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 02:39:24.336020 env[1567]: time="2025-09-13T02:39:24.335953383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 02:39:24.336465 env[1567]: time="2025-09-13T02:39:24.335994383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 02:39:24.336688 env[1567]: time="2025-09-13T02:39:24.336543926Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44 pid=5285 runtime=io.containerd.runc.v2
Sep 13 02:39:24.369057 systemd[1]: Started cri-containerd-b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44.scope.
Sep 13 02:39:24.388579 env[1567]: time="2025-09-13T02:39:24.388532119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2pw4x,Uid:61dbf707-75c4-475a-a9ae-4e5e964cabc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\""
Sep 13 02:39:24.392420 env[1567]: time="2025-09-13T02:39:24.392351414Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 02:39:24.398786 env[1567]: time="2025-09-13T02:39:24.398721837Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd\""
Sep 13 02:39:24.399165 env[1567]: time="2025-09-13T02:39:24.399127510Z" level=info msg="StartContainer for \"450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd\""
Sep 13 02:39:24.414266 systemd[1]: Started cri-containerd-450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd.scope.
Sep 13 02:39:24.438405 env[1567]: time="2025-09-13T02:39:24.438316206Z" level=info msg="StartContainer for \"450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd\" returns successfully"
Sep 13 02:39:24.449133 systemd[1]: cri-containerd-450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd.scope: Deactivated successfully.
Sep 13 02:39:24.480959 env[1567]: time="2025-09-13T02:39:24.480876713Z" level=info msg="shim disconnected" id=450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd
Sep 13 02:39:24.480959 env[1567]: time="2025-09-13T02:39:24.480929434Z" level=warning msg="cleaning up after shim disconnected" id=450fcd43f10c5f00dfdf30af584c91b130fbbb5100a7787dd4f047d9d0a6bccd namespace=k8s.io
Sep 13 02:39:24.480959 env[1567]: time="2025-09-13T02:39:24.480942097Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:24.488919 env[1567]: time="2025-09-13T02:39:24.488873006Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5372 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:24.937946 env[1567]: time="2025-09-13T02:39:24.937835797Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 02:39:24.955700 env[1567]: time="2025-09-13T02:39:24.955594199Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507\""
Sep 13 02:39:24.956729 env[1567]: time="2025-09-13T02:39:24.956613094Z" level=info msg="StartContainer for \"cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507\""
Sep 13 02:39:24.981880 systemd[1]: Started cri-containerd-cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507.scope.
Sep 13 02:39:25.006652 env[1567]: time="2025-09-13T02:39:25.006612389Z" level=info msg="StartContainer for \"cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507\" returns successfully"
Sep 13 02:39:25.014982 systemd[1]: cri-containerd-cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507.scope: Deactivated successfully.
Sep 13 02:39:25.034238 env[1567]: time="2025-09-13T02:39:25.034154440Z" level=info msg="shim disconnected" id=cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507
Sep 13 02:39:25.034238 env[1567]: time="2025-09-13T02:39:25.034214020Z" level=warning msg="cleaning up after shim disconnected" id=cc753e403b9969896fb492db299cd218f222a1e3c7a0dbfc820fdebe283f6507 namespace=k8s.io
Sep 13 02:39:25.034238 env[1567]: time="2025-09-13T02:39:25.034231573Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:25.041666 env[1567]: time="2025-09-13T02:39:25.041631825Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5433 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:25.690520 kubelet[2462]: E0913 02:39:25.690353 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wx4lh" podUID="b18ae966-57da-44ce-b7fb-02b27246857d"
Sep 13 02:39:25.695693 kubelet[2462]: I0913 02:39:25.695625 2462 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1fa7622-664b-4b73-b92b-5389f467f5a7" path="/var/lib/kubelet/pods/c1fa7622-664b-4b73-b92b-5389f467f5a7/volumes"
Sep 13 02:39:25.944900 env[1567]: time="2025-09-13T02:39:25.944677929Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 02:39:25.962705 env[1567]: time="2025-09-13T02:39:25.962603462Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4\""
Sep 13 02:39:25.963161 env[1567]: time="2025-09-13T02:39:25.963147977Z" level=info msg="StartContainer for \"7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4\""
Sep 13 02:39:25.973170 systemd[1]: Started cri-containerd-7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4.scope.
Sep 13 02:39:25.985567 env[1567]: time="2025-09-13T02:39:25.985544105Z" level=info msg="StartContainer for \"7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4\" returns successfully"
Sep 13 02:39:25.986989 systemd[1]: cri-containerd-7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4.scope: Deactivated successfully.
Sep 13 02:39:26.019284 env[1567]: time="2025-09-13T02:39:26.019182787Z" level=info msg="shim disconnected" id=7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4
Sep 13 02:39:26.019284 env[1567]: time="2025-09-13T02:39:26.019279823Z" level=warning msg="cleaning up after shim disconnected" id=7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4 namespace=k8s.io
Sep 13 02:39:26.019780 env[1567]: time="2025-09-13T02:39:26.019308003Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:26.037063 env[1567]: time="2025-09-13T02:39:26.036980792Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5487 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:26.134883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cdaf1ab1bfec500d321cf4cffdcc471d314b63ff53260228b32ff0ae6f7bcd4-rootfs.mount: Deactivated successfully.
Sep 13 02:39:26.400855 kubelet[2462]: I0913 02:39:26.400735 2462 setters.go:618] "Node became not ready" node="ci-3510.3.8-n-6378d470a1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T02:39:26Z","lastTransitionTime":"2025-09-13T02:39:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 02:39:26.951807 env[1567]: time="2025-09-13T02:39:26.951722719Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 02:39:26.967130 env[1567]: time="2025-09-13T02:39:26.967083626Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a\""
Sep 13 02:39:26.967484 env[1567]: time="2025-09-13T02:39:26.967400678Z" level=info msg="StartContainer for \"ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a\""
Sep 13 02:39:26.976284 systemd[1]: Started cri-containerd-ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a.scope.
Sep 13 02:39:26.991581 env[1567]: time="2025-09-13T02:39:26.991520921Z" level=info msg="StartContainer for \"ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a\" returns successfully"
Sep 13 02:39:26.993693 systemd[1]: cri-containerd-ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a.scope: Deactivated successfully.
Sep 13 02:39:27.016017 env[1567]: time="2025-09-13T02:39:27.015961065Z" level=info msg="shim disconnected" id=ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a
Sep 13 02:39:27.016017 env[1567]: time="2025-09-13T02:39:27.015987345Z" level=warning msg="cleaning up after shim disconnected" id=ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a namespace=k8s.io
Sep 13 02:39:27.016017 env[1567]: time="2025-09-13T02:39:27.015992984Z" level=info msg="cleaning up dead shim"
Sep 13 02:39:27.019454 env[1567]: time="2025-09-13T02:39:27.019437410Z" level=warning msg="cleanup warnings time=\"2025-09-13T02:39:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5541 runtime=io.containerd.runc.v2\n"
Sep 13 02:39:27.131546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee001924a812a87b9ecbfc69491005f08954ca1663e9d2842372a18ee48a493a-rootfs.mount: Deactivated successfully.
Sep 13 02:39:27.691174 kubelet[2462]: E0913 02:39:27.691081 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wx4lh" podUID="b18ae966-57da-44ce-b7fb-02b27246857d"
Sep 13 02:39:27.816037 kubelet[2462]: E0913 02:39:27.815955 2462 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 02:39:27.960836 env[1567]: time="2025-09-13T02:39:27.960610665Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 02:39:27.989034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508355062.mount: Deactivated successfully.
Sep 13 02:39:27.995469 env[1567]: time="2025-09-13T02:39:27.995352792Z" level=info msg="CreateContainer within sandbox \"b49f6e611e77e1e8920337a1a8d1c964dbf44b9d6f58b9bdf82755569e9bcd44\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb0e266b95b791cdcb766fbe64a54dbf811981ae41cd6bbb9438009103402095\""
Sep 13 02:39:27.996439 env[1567]: time="2025-09-13T02:39:27.996344488Z" level=info msg="StartContainer for \"cb0e266b95b791cdcb766fbe64a54dbf811981ae41cd6bbb9438009103402095\""
Sep 13 02:39:28.038481 systemd[1]: Started cri-containerd-cb0e266b95b791cdcb766fbe64a54dbf811981ae41cd6bbb9438009103402095.scope.
Sep 13 02:39:28.070881 env[1567]: time="2025-09-13T02:39:28.070828209Z" level=info msg="StartContainer for \"cb0e266b95b791cdcb766fbe64a54dbf811981ae41cd6bbb9438009103402095\" returns successfully"
Sep 13 02:39:28.290367 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 02:39:28.998625 kubelet[2462]: I0913 02:39:28.998470 2462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2pw4x" podStartSLOduration=5.9984363290000005 podStartE2EDuration="5.998436329s" podCreationTimestamp="2025-09-13 02:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 02:39:28.997516687 +0000 UTC m=+431.422079138" watchObservedRunningTime="2025-09-13 02:39:28.998436329 +0000 UTC m=+431.422998765"
Sep 13 02:39:29.690102 kubelet[2462]: E0913 02:39:29.690074 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wx4lh" podUID="b18ae966-57da-44ce-b7fb-02b27246857d"
Sep 13 02:39:31.333472 systemd-networkd[1319]: lxc_health: Link UP
Sep 13 02:39:31.361378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 02:39:31.361464 systemd-networkd[1319]: lxc_health: Gained carrier
Sep 13 02:39:31.689999 kubelet[2462]: E0913 02:39:31.689925 2462 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-wx4lh" podUID="b18ae966-57da-44ce-b7fb-02b27246857d"
Sep 13 02:39:33.270460 systemd-networkd[1319]: lxc_health: Gained IPv6LL
Sep 13 02:39:37.504956 sshd[5242]: pam_unix(sshd:session): session closed for user core
Sep 13 02:39:37.506330 systemd[1]: sshd@151-145.40.90.231:22-139.178.89.65:40062.service: Deactivated successfully.
Sep 13 02:39:37.506735 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 02:39:37.507122 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit.
Sep 13 02:39:37.507684 systemd-logind[1559]: Removed session 28.