Dec 13 03:55:59.564308 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Dec 13 03:55:59.564321 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 03:55:59.564328 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:55:59.564332 kernel: BIOS-provided physical RAM map:
Dec 13 03:55:59.564335 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 03:55:59.564339 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 03:55:59.564343 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 03:55:59.564348 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 03:55:59.564352 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 03:55:59.564355 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006e2d8fff] usable
Dec 13 03:55:59.564359 kernel: BIOS-e820: [mem 0x000000006e2d9000-0x000000006e2d9fff] ACPI NVS
Dec 13 03:55:59.564363 kernel: BIOS-e820: [mem 0x000000006e2da000-0x000000006e2dafff] reserved
Dec 13 03:55:59.564366 kernel: BIOS-e820: [mem 0x000000006e2db000-0x0000000077fc4fff] usable
Dec 13 03:55:59.564370 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Dec 13 03:55:59.564376 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Dec 13 03:55:59.564380 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Dec 13 03:55:59.564384 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Dec 13 03:55:59.564388 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Dec 13 03:55:59.564392 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Dec 13 03:55:59.564396 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 03:55:59.564400 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 03:55:59.564404 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 03:55:59.564408 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 03:55:59.564413 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 03:55:59.564417 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Dec 13 03:55:59.564421 kernel: NX (Execute Disable) protection: active
Dec 13 03:55:59.564446 kernel: SMBIOS 3.2.1 present.
Dec 13 03:55:59.564451 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Dec 13 03:55:59.564455 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 03:55:59.564459 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 03:55:59.564463 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:55:59.564468 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:55:59.564489 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Dec 13 03:55:59.564494 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 03:55:59.564498 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Dec 13 03:55:59.564503 kernel: Using GB pages for direct mapping
Dec 13 03:55:59.564507 kernel: ACPI: Early table checksum verification disabled
Dec 13 03:55:59.564511 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 03:55:59.564515 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 03:55:59.564520 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Dec 13 03:55:59.564526 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 03:55:59.564531 kernel: ACPI: FACS 0x0000000079662F80 000040
Dec 13 03:55:59.564535 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Dec 13 03:55:59.564540 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Dec 13 03:55:59.564545 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 03:55:59.564549 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 03:55:59.564554 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 03:55:59.564559 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 03:55:59.564564 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 03:55:59.564568 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 03:55:59.564572 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:55:59.564577 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 03:55:59.564582 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 03:55:59.564586 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:55:59.564591 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:55:59.564595 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 03:55:59.564600 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 03:55:59.564605 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:55:59.564609 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 03:55:59.564614 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 03:55:59.564618 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Dec 13 03:55:59.564623 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 03:55:59.564628 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 03:55:59.564632 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 03:55:59.564637 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 'n 01072009 AMI 00010013)
Dec 13 03:55:59.564642 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 03:55:59.564646 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 03:55:59.564651 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 03:55:59.564656 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 03:55:59.564660 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 03:55:59.564665 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Dec 13 03:55:59.564669 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Dec 13 03:55:59.564674 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Dec 13 03:55:59.564679 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Dec 13 03:55:59.564684 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Dec 13 03:55:59.564688 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Dec 13 03:55:59.564693 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Dec 13 03:55:59.564697 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Dec 13 03:55:59.564702 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Dec 13 03:55:59.564706 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Dec 13 03:55:59.564711 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Dec 13 03:55:59.564715 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Dec 13 03:55:59.564720 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Dec 13 03:55:59.564725 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Dec 13 03:55:59.564729 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Dec 13 03:55:59.564734 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Dec 13 03:55:59.564739 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Dec 13 03:55:59.564743 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Dec 13 03:55:59.564748 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Dec 13 03:55:59.564752 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Dec 13 03:55:59.564757 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Dec 13 03:55:59.564762 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Dec 13 03:55:59.564767 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Dec 13 03:55:59.564771 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Dec 13 03:55:59.564776 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Dec 13 03:55:59.564780 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Dec 13 03:55:59.564785 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Dec 13 03:55:59.564789 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Dec 13 03:55:59.564794 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Dec 13 03:55:59.564798 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Dec 13 03:55:59.564804 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Dec 13 03:55:59.564808 kernel: No NUMA configuration found
Dec 13 03:55:59.564813 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Dec 13 03:55:59.564817 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Dec 13 03:55:59.564822 kernel: Zone ranges:
Dec 13 03:55:59.564826 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:55:59.564831 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 03:55:59.564835 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Dec 13 03:55:59.564840 kernel: Movable zone start for each node
Dec 13 03:55:59.564845 kernel: Early memory node ranges
Dec 13 03:55:59.564850 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 03:55:59.564854 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 03:55:59.564859 kernel: node 0: [mem 0x0000000040400000-0x000000006e2d8fff]
Dec 13 03:55:59.564863 kernel: node 0: [mem 0x000000006e2db000-0x0000000077fc4fff]
Dec 13 03:55:59.564868 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff]
Dec 13 03:55:59.564872 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Dec 13 03:55:59.564877 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Dec 13 03:55:59.564881 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Dec 13 03:55:59.564890 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:55:59.564894 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 03:55:59.564899 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 03:55:59.564905 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 03:55:59.564910 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Dec 13 03:55:59.564915 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Dec 13 03:55:59.564920 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Dec 13 03:55:59.564925 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Dec 13 03:55:59.564930 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 03:55:59.564935 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 03:55:59.564940 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 03:55:59.564945 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 03:55:59.564950 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 03:55:59.564955 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 03:55:59.564960 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 03:55:59.564965 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 03:55:59.564969 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 03:55:59.564975 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 03:55:59.564980 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 03:55:59.564985 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 03:55:59.564989 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 03:55:59.564994 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 03:55:59.564999 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 03:55:59.565004 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 03:55:59.565009 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 03:55:59.565013 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 03:55:59.565019 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:55:59.565024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:55:59.565029 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:55:59.565034 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 03:55:59.565039 kernel: TSC deadline timer available
Dec 13 03:55:59.565043 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 03:55:59.565048 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Dec 13 03:55:59.565053 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 03:55:59.565058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:55:59.565064 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 03:55:59.565069 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 03:55:59.565074 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 03:55:59.565078 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 03:55:59.565083 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327
Dec 13 03:55:59.565088 kernel: Policy zone: Normal
Dec 13 03:55:59.565093 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 03:55:59.565098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 03:55:59.565104 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 03:55:59.565109 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 03:55:59.565114 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:55:59.565119 kernel: Memory: 32681612K/33411988K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 03:55:59.565124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 03:55:59.565129 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 03:55:59.565133 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 03:55:59.565138 kernel: rcu: Hierarchical RCU implementation.
Dec 13 03:55:59.565144 kernel: rcu: RCU event tracing is enabled.
Dec 13 03:55:59.565150 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 03:55:59.565154 kernel: Rude variant of Tasks RCU enabled.
Dec 13 03:55:59.565159 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 03:55:59.565164 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:55:59.565169 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 03:55:59.565174 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 03:55:59.565179 kernel: random: crng init done
Dec 13 03:55:59.565184 kernel: Console: colour dummy device 80x25
Dec 13 03:55:59.565188 kernel: printk: console [tty0] enabled
Dec 13 03:55:59.565194 kernel: printk: console [ttyS1] enabled
Dec 13 03:55:59.565199 kernel: ACPI: Core revision 20210730
Dec 13 03:55:59.565204 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Dec 13 03:55:59.565209 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:55:59.565213 kernel: DMAR: Host address width 39
Dec 13 03:55:59.565218 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Dec 13 03:55:59.565223 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Dec 13 03:55:59.565228 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 03:55:59.565233 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 03:55:59.565239 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Dec 13 03:55:59.565243 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Dec 13 03:55:59.565248 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Dec 13 03:55:59.565253 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 03:55:59.565258 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 03:55:59.565263 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 03:55:59.565268 kernel: x2apic enabled
Dec 13 03:55:59.565272 kernel: Switched APIC routing to cluster x2apic.
Dec 13 03:55:59.565277 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 03:55:59.565283 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 03:55:59.565288 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 03:55:59.565293 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 03:55:59.565297 kernel: process: using mwait in idle threads
Dec 13 03:55:59.565302 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 03:55:59.565307 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 03:55:59.565312 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:55:59.565317 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 03:55:59.565322 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 03:55:59.565328 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 03:55:59.565332 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 03:55:59.565337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 03:55:59.565342 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 03:55:59.565347 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 03:55:59.565352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 03:55:59.565357 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 03:55:59.565362 kernel: TAA: Mitigation: TSX disabled
Dec 13 03:55:59.565366 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 03:55:59.565372 kernel: SRBDS: Mitigation: Microcode
Dec 13 03:55:59.565377 kernel: GDS: Vulnerable: No microcode
Dec 13 03:55:59.565382 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 03:55:59.565387 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 03:55:59.565391 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 03:55:59.565396 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 03:55:59.565401 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 03:55:59.565406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 03:55:59.565411 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 03:55:59.565416 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 03:55:59.565421 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 03:55:59.565428 kernel: Freeing SMP alternatives memory: 32K
Dec 13 03:55:59.565432 kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:55:59.565437 kernel: LSM: Security Framework initializing
Dec 13 03:55:59.565463 kernel: SELinux: Initializing.
Dec 13 03:55:59.565468 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:55:59.565473 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:55:59.565492 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 03:55:59.565498 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 03:55:59.565503 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 03:55:59.565507 kernel: ... version: 4
Dec 13 03:55:59.565512 kernel: ... bit width: 48
Dec 13 03:55:59.565517 kernel: ... generic registers: 4
Dec 13 03:55:59.565522 kernel: ... value mask: 0000ffffffffffff
Dec 13 03:55:59.565527 kernel: ... max period: 00007fffffffffff
Dec 13 03:55:59.565531 kernel: ... fixed-purpose events: 3
Dec 13 03:55:59.565536 kernel: ... event mask: 000000070000000f
Dec 13 03:55:59.565542 kernel: signal: max sigframe size: 2032
Dec 13 03:55:59.565547 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:55:59.565551 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 03:55:59.565556 kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:55:59.565561 kernel: x86: Booting SMP configuration:
Dec 13 03:55:59.565566 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 03:55:59.565571 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 03:55:59.565576 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 03:55:59.565581 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 03:55:59.565586 kernel: smpboot: Max logical packages: 1
Dec 13 03:55:59.565591 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 03:55:59.565596 kernel: devtmpfs: initialized
Dec 13 03:55:59.565601 kernel: x86/mm: Memory block size: 128MB
Dec 13 03:55:59.565606 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6e2d9000-0x6e2d9fff] (4096 bytes)
Dec 13 03:55:59.565611 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Dec 13 03:55:59.565616 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:55:59.565621 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 03:55:59.565626 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:55:59.565631 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:55:59.565636 kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:55:59.565641 kernel: audit: type=2000 audit(1734062154.123:1): state=initialized audit_enabled=0 res=1
Dec 13 03:55:59.565646 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:55:59.565650 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:55:59.565655 kernel: cpuidle: using governor menu
Dec 13 03:55:59.565660 kernel: ACPI: bus type PCI registered
Dec 13 03:55:59.565665 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:55:59.565670 kernel: dca service started, version 1.12.1
Dec 13 03:55:59.565675 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 03:55:59.565680 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 03:55:59.565685 kernel: PCI: Using configuration type 1 for base access
Dec 13 03:55:59.565690 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 03:55:59.565694 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:55:59.565699 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 03:55:59.565704 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:55:59.565709 kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:55:59.565715 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:55:59.565719 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:55:59.565724 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:55:59.565729 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 03:55:59.565734 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 03:55:59.565739 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 03:55:59.565743 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 03:55:59.565748 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565753 kernel: ACPI: SSDT 0xFFFF8B90C021AB00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 03:55:59.565759 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 03:55:59.565764 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565768 kernel: ACPI: SSDT 0xFFFF8B90C1CE9400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 03:55:59.565773 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565778 kernel: ACPI: SSDT 0xFFFF8B90C1C5F800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 03:55:59.565783 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565788 kernel: ACPI: SSDT 0xFFFF8B90C1D4F000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 03:55:59.565792 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565797 kernel: ACPI: SSDT 0xFFFF8B90C014E000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 03:55:59.565802 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 03:55:59.565807 kernel: ACPI: SSDT 0xFFFF8B90C1CE9800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 03:55:59.565812 kernel: ACPI: Interpreter enabled
Dec 13 03:55:59.565817 kernel: ACPI: PM: (supports S0 S5)
Dec 13 03:55:59.565822 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:55:59.565827 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 03:55:59.565832 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 03:55:59.565836 kernel: HEST: Table parsing has been initialized.
Dec 13 03:55:59.565841 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 03:55:59.565846 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:55:59.565852 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 03:55:59.565857 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 03:55:59.565861 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 03:55:59.565866 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 03:55:59.565871 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 03:55:59.565876 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 03:55:59.565880 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Dec 13 03:55:59.565885 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 03:55:59.565890 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 03:55:59.565896 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 03:55:59.565900 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 03:55:59.565905 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 03:55:59.565910 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 03:55:59.565915 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 03:55:59.565979 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 03:55:59.566024 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 03:55:59.566065 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 03:55:59.566073 kernel: PCI host bridge to bus 0000:00
Dec 13 03:55:59.566115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 03:55:59.566153 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 03:55:59.566190 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:55:59.566227 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Dec 13 03:55:59.566263 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 03:55:59.566299 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 03:55:59.566350 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 03:55:59.566399 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 03:55:59.566463 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.566524 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Dec 13 03:55:59.566568 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.566614 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Dec 13 03:55:59.566659 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Dec 13 03:55:59.566700 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Dec 13 03:55:59.566742 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Dec 13 03:55:59.566788 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 03:55:59.566829 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Dec 13 03:55:59.566875 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 03:55:59.566918 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Dec 13 03:55:59.566963 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 03:55:59.567004 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Dec 13 03:55:59.567046 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 03:55:59.567091 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 03:55:59.567133 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Dec 13 03:55:59.567175 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Dec 13 03:55:59.567219 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 03:55:59.567260 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:55:59.567304 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 03:55:59.567346 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:55:59.567397 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 03:55:59.567477 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Dec 13 03:55:59.567519 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 03:55:59.567563 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 03:55:59.567605 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Dec 13 03:55:59.567646 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 03:55:59.567690 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 03:55:59.567731 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Dec 13 03:55:59.567774 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 03:55:59.567818 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 03:55:59.567860 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Dec 13 03:55:59.567900 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Dec 13 03:55:59.567940 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Dec 13 03:55:59.567982 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Dec 13 03:55:59.568022 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Dec 13 03:55:59.568065 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Dec 13 03:55:59.568106 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 03:55:59.568153 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 03:55:59.568195 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.568242 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 03:55:59.568285 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.568330 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 03:55:59.568372 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.568417 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 03:55:59.568495 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.568541 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Dec 13 03:55:59.568584 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.568628 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 03:55:59.568669 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 03:55:59.568716 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 03:55:59.568761 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 03:55:59.568805 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Dec 13 03:55:59.568845 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 03:55:59.568892 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 03:55:59.568934 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 03:55:59.568975 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 03:55:59.569022 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 03:55:59.569065 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 03:55:59.569110 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Dec 13 03:55:59.569153 kernel: pci 0000:02:00.0: PME# supported from D3cold
Dec 13 03:55:59.569196 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:55:59.569238 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:55:59.569286 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 03:55:59.569331 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 03:55:59.569374 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Dec 13 03:55:59.569418 kernel: pci 0000:02:00.1: PME# supported from D3cold
Dec 13 03:55:59.569506 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 03:55:59.569548 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 03:55:59.569590 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 13 03:55:59.569631 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Dec 13 03:55:59.569671 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:55:59.569713 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Dec 13 03:55:59.569758 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 03:55:59.569804 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:55:59.569847 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Dec 13 03:55:59.569890 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 03:55:59.569932 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Dec 13 03:55:59.569975 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.570017 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Dec 13 03:55:59.570059 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 03:55:59.570102 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Dec 13 03:55:59.570150 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Dec 13 03:55:59.570194 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Dec 13 03:55:59.570236 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Dec 13 03:55:59.570279 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 03:55:59.570378 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Dec 13 03:55:59.570421 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Dec 13 03:55:59.570469 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Dec 13 03:55:59.570530 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 03:55:59.570571 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Dec 13 03:55:59.570613 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Dec 13 03:55:59.570659 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 03:55:59.570702 kernel: pci 0000:07:00.0: enabling Extended Tags
Dec 13 03:55:59.570745 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 03:55:59.570787 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 03:55:59.570831 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Dec 13 03:55:59.570872 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Dec 13 03:55:59.570915 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.570960 kernel: pci_bus 0000:08: extended config space not accessible
Dec 13 03:55:59.571009 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 03:55:59.571054 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff]
Dec 13 03:55:59.571100 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff]
Dec 13 03:55:59.571146 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Dec 13 03:55:59.571193 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:55:59.571237 kernel: pci 0000:08:00.0: supports D1 D2
Dec 13 03:55:59.571282 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 03:55:59.571326 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Dec 13 03:55:59.571369 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 03:55:59.571412 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.571421 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 03:55:59.571429 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 03:55:59.571434 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 03:55:59.571464 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 03:55:59.571469 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 03:55:59.571474 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 03:55:59.571480 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 03:55:59.571485 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 03:55:59.571510 kernel: iommu: Default domain type: Translated
Dec 13 03:55:59.571516 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 03:55:59.571561 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Dec 13 03:55:59.571607 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:55:59.571652 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Dec 13 03:55:59.571660 kernel: vgaarb: loaded
Dec 13 03:55:59.571665 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:55:59.571670 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 03:55:59.571676 kernel: PTP clock support registered
Dec 13 03:55:59.571681 kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:55:59.571687 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:55:59.571692 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 03:55:59.571697 kernel: e820: reserve RAM buffer [mem 0x6e2d9000-0x6fffffff]
Dec 13 03:55:59.571702 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff]
Dec 13 03:55:59.571707 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff]
Dec 13 03:55:59.571713 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff]
Dec 13 03:55:59.571718 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff]
Dec 13 03:55:59.571723 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 03:55:59.571728 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Dec 13 03:55:59.571734 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 03:55:59.571739 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:55:59.571744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:55:59.571749 kernel: pnp: PnP ACPI init
Dec 13 03:55:59.571791 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 03:55:59.571835 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 03:55:59.571877 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 03:55:59.571919 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Dec 13 03:55:59.571958 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Dec 13 03:55:59.571998 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Dec 13 03:55:59.572039 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 03:55:59.572077 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 03:55:59.572115 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 03:55:59.572152 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 03:55:59.572191 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 03:55:59.572228 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 03:55:59.572265 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 03:55:59.572303 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 03:55:59.572342 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Dec 13 03:55:59.572380 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 03:55:59.572419 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 03:55:59.572481 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 03:55:59.572536 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 03:55:59.572573 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 03:55:59.572611 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 03:55:59.572650 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Dec 13 03:55:59.572658 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 03:55:59.572663 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:55:59.572670 kernel: NET: Registered PF_INET protocol family
Dec 13 03:55:59.572675 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:55:59.572681 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 03:55:59.572686 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:55:59.572691 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 03:55:59.572696 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 03:55:59.572701 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 03:55:59.572707 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 03:55:59.572713 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 03:55:59.572718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:55:59.572723 kernel: NET: Registered PF_XDP protocol family
Dec 13 03:55:59.572765 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit]
Dec 13 03:55:59.572807 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit]
Dec 13 03:55:59.572849 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit]
Dec 13 03:55:59.572890 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 03:55:59.572936 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 03:55:59.572980 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 03:55:59.573024 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 03:55:59.573068 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 03:55:59.573110 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Dec 13 03:55:59.573153 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Dec 13 03:55:59.573197 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:55:59.573240 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Dec 13 03:55:59.573283 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Dec 13 03:55:59.573325 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 03:55:59.573369 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Dec 13 03:55:59.573411 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Dec 13 03:55:59.573479 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 03:55:59.573521 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Dec 13 03:55:59.573566 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Dec 13 03:55:59.573611 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Dec 13 03:55:59.573656 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 03:55:59.573699 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.573742 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Dec 13 03:55:59.573784 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Dec 13 03:55:59.573826 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.573865 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 03:55:59.573903 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 03:55:59.573942 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 03:55:59.573979 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:55:59.574016 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window]
Dec 13 03:55:59.574053 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 03:55:59.574099 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff]
Dec 13 03:55:59.574140 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 03:55:59.574183 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Dec 13 03:55:59.574224 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff]
Dec 13 03:55:59.574266 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 03:55:59.574306 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff]
Dec 13 03:55:59.574349 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Dec 13 03:55:59.574387 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.574432 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Dec 13 03:55:59.574474 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff]
Dec 13 03:55:59.574483 kernel: PCI: CLS 64 bytes, default 64
Dec 13 03:55:59.574488 kernel: DMAR: No ATSR found
Dec 13 03:55:59.574494 kernel: DMAR: No SATC found
Dec 13 03:55:59.574499 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Dec 13 03:55:59.574504 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Dec 13 03:55:59.574510 kernel: DMAR: IOMMU feature nwfs inconsistent
Dec 13 03:55:59.574515 kernel: DMAR: IOMMU feature pasid inconsistent
Dec 13 03:55:59.574520 kernel: DMAR: IOMMU feature eafs inconsistent
Dec 13 03:55:59.574526 kernel: DMAR: IOMMU feature prs inconsistent
Dec 13 03:55:59.574532 kernel: DMAR: IOMMU feature nest inconsistent
Dec 13 03:55:59.574537 kernel: DMAR: IOMMU feature mts inconsistent
Dec 13 03:55:59.574542 kernel: DMAR: IOMMU feature sc_support inconsistent
Dec 13 03:55:59.574547 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Dec 13 03:55:59.574553 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 03:55:59.574558 kernel: DMAR: dmar1: Using Queued invalidation
Dec 13 03:55:59.574602 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 03:55:59.574646 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 03:55:59.574692 kernel: pci 0000:00:01.1: Adding to iommu group 1
Dec 13 03:55:59.574734 kernel: pci 0000:00:02.0: Adding to iommu group 2
Dec 13 03:55:59.574776 kernel: pci 0000:00:08.0: Adding to iommu group 3
Dec 13 03:55:59.574819 kernel: pci 0000:00:12.0: Adding to iommu group 4
Dec 13 03:55:59.574861 kernel: pci 0000:00:14.0: Adding to iommu group 5
Dec 13 03:55:59.574903 kernel: pci 0000:00:14.2: Adding to iommu group 5
Dec 13 03:55:59.574944 kernel: pci 0000:00:15.0: Adding to iommu group 6
Dec 13 03:55:59.574986 kernel: pci 0000:00:15.1: Adding to iommu group 6
Dec 13 03:55:59.575028 kernel: pci 0000:00:16.0: Adding to iommu group 7
Dec 13 03:55:59.575072 kernel: pci 0000:00:16.1: Adding to iommu group 7
Dec 13 03:55:59.575114 kernel: pci 0000:00:16.4: Adding to iommu group 7
Dec 13 03:55:59.575155 kernel: pci 0000:00:17.0: Adding to iommu group 8
Dec 13 03:55:59.575198 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Dec 13 03:55:59.575240 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Dec 13 03:55:59.575282 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Dec 13 03:55:59.575324 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Dec 13 03:55:59.575366 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Dec 13 03:55:59.575409 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Dec 13 03:55:59.575453 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Dec 13 03:55:59.575496 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Dec 13 03:55:59.575537 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Dec 13 03:55:59.575582 kernel: pci 0000:02:00.0: Adding to iommu group 1
Dec 13 03:55:59.575625 kernel: pci 0000:02:00.1: Adding to iommu group 1
Dec 13 03:55:59.575670 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 03:55:59.575713 kernel: pci 0000:05:00.0: Adding to iommu group 17
Dec 13 03:55:59.575758 kernel: pci 0000:07:00.0: Adding to iommu group 18
Dec 13 03:55:59.575804 kernel: pci 0000:08:00.0: Adding to iommu group 18
Dec 13 03:55:59.575811 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 03:55:59.575817 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 03:55:59.575823 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB)
Dec 13 03:55:59.575828 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Dec 13 03:55:59.575833 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 03:55:59.575839 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 03:55:59.575844 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 03:55:59.575851 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Dec 13 03:55:59.575897 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 03:55:59.575906 kernel: Initialise system trusted keyrings
Dec 13 03:55:59.575911 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 03:55:59.575916 kernel: Key type asymmetric registered
Dec 13 03:55:59.575922 kernel: Asymmetric key parser 'x509' registered
Dec 13 03:55:59.575927 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 03:55:59.575932 kernel: io scheduler mq-deadline registered
Dec 13 03:55:59.575939 kernel: io scheduler kyber registered
Dec 13 03:55:59.575944 kernel: io scheduler bfq registered
Dec 13 03:55:59.575987 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Dec 13 03:55:59.576030 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Dec 13 03:55:59.576073 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Dec 13 03:55:59.576115 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Dec 13 03:55:59.576158 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Dec 13 03:55:59.576201 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Dec 13 03:55:59.576245 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Dec 13 03:55:59.576292 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 03:55:59.576300 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 03:55:59.576305 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 03:55:59.576311 kernel: pstore: Registered erst as persistent store backend
Dec 13 03:55:59.576316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 03:55:59.576321 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:55:59.576327 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:55:59.576333 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 03:55:59.576376 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 03:55:59.576384 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 03:55:59.576421 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 03:55:59.576463 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 03:55:59.576503 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T03:55:58 UTC (1734062158) Dec 13 03:55:59.576541 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 03:55:59.576550 kernel: fail to initialize ptp_kvm Dec 13 03:55:59.576556 kernel: intel_pstate: Intel P-state driver initializing Dec 13 03:55:59.576561 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 03:55:59.576567 kernel: intel_pstate: HWP enabled Dec 13 03:55:59.576572 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 03:55:59.576577 kernel: vesafb: scrolling: redraw Dec 13 03:55:59.576583 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 03:55:59.576588 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000837a9ca7, using 768k, total 768k Dec 13 03:55:59.576593 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 03:55:59.576600 kernel: fb0: VESA VGA frame buffer device Dec 13 03:55:59.576605 kernel: NET: Registered PF_INET6 protocol family Dec 13 03:55:59.576610 kernel: Segment Routing with IPv6 Dec 13 03:55:59.576615 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 03:55:59.576621 kernel: NET: Registered PF_PACKET protocol family Dec 13 03:55:59.576626 kernel: Key type dns_resolver registered Dec 13 03:55:59.576631 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 03:55:59.576637 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 03:55:59.576642 kernel: IPI shorthand broadcast: enabled Dec 13 03:55:59.576648 kernel: sched_clock: Marking stable (1852421075, 1360181587)->(4660019596, -1447416934) Dec 13 03:55:59.576673 kernel: registered taskstats version 1 Dec 13 03:55:59.576679 kernel: Loading compiled-in X.509 certificates Dec 13 03:55:59.576684 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 03:55:59.576689 kernel: Key type .fscrypt registered Dec 13 03:55:59.576694 kernel: Key type fscrypt-provisioning registered Dec 13 03:55:59.576699 kernel: pstore: Using crash dump compression: deflate Dec 13 03:55:59.576704 kernel: ima: Allocated hash algorithm: sha1 Dec 13 03:55:59.576710 kernel: ima: No architecture policies found Dec 13 03:55:59.576716 kernel: clk: Disabling unused clocks Dec 13 03:55:59.576721 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 03:55:59.576726 kernel: Write protecting the kernel read-only data: 28672k Dec 13 03:55:59.576731 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 03:55:59.576736 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 03:55:59.576742 kernel: Run /init as init process Dec 13 03:55:59.576747 kernel: with arguments: Dec 13 03:55:59.576752 kernel: /init Dec 13 03:55:59.576757 kernel: with environment: Dec 13 03:55:59.576763 kernel: HOME=/ Dec 13 03:55:59.576768 kernel: TERM=linux Dec 13 03:55:59.576773 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 03:55:59.576780 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:55:59.576786 systemd[1]: Detected 
architecture x86-64. Dec 13 03:55:59.576792 systemd[1]: Running in initrd. Dec 13 03:55:59.576797 systemd[1]: No hostname configured, using default hostname. Dec 13 03:55:59.576802 systemd[1]: Hostname set to <localhost>. Dec 13 03:55:59.576808 systemd[1]: Initializing machine ID from random generator. Dec 13 03:55:59.576814 systemd[1]: Queued start job for default target initrd.target. Dec 13 03:55:59.576819 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:55:59.576824 systemd[1]: Reached target cryptsetup.target. Dec 13 03:55:59.576830 systemd[1]: Reached target paths.target. Dec 13 03:55:59.576835 systemd[1]: Reached target slices.target. Dec 13 03:55:59.576840 systemd[1]: Reached target swap.target. Dec 13 03:55:59.576845 systemd[1]: Reached target timers.target. Dec 13 03:55:59.576852 systemd[1]: Listening on iscsid.socket. Dec 13 03:55:59.576857 systemd[1]: Listening on iscsiuio.socket. Dec 13 03:55:59.576863 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 03:55:59.576868 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 03:55:59.576873 systemd[1]: Listening on systemd-journald.socket. Dec 13 03:55:59.576879 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:55:59.576884 kernel: tsc: Refined TSC clocksource calibration: 3408.020 MHz Dec 13 03:55:59.576890 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe73e8a3, max_idle_ns: 440795370711 ns Dec 13 03:55:59.576895 kernel: clocksource: Switched to clocksource tsc Dec 13 03:55:59.576901 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:55:59.576906 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:55:59.576912 systemd[1]: Reached target sockets.target. Dec 13 03:55:59.576917 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:55:59.576922 systemd[1]: Finished network-cleanup.service. Dec 13 03:55:59.576928 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 03:55:59.576933 systemd[1]: Starting systemd-journald.service... Dec 13 03:55:59.576939 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:55:59.576948 systemd-journald[268]: Journal started Dec 13 03:55:59.576973 systemd-journald[268]: Runtime Journal (/run/log/journal/de71aad0bd3448fc8f454b16cd4f232e) is 8.0M, max 639.3M, 631.3M free. Dec 13 03:55:59.578771 systemd-modules-load[269]: Inserted module 'overlay' Dec 13 03:55:59.607823 kernel: audit: type=1334 audit(1734062159.583:2): prog-id=6 op=LOAD Dec 13 03:55:59.607834 systemd[1]: Starting systemd-resolved.service... Dec 13 03:55:59.583000 audit: BPF prog-id=6 op=LOAD Dec 13 03:55:59.651442 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 03:55:59.651460 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 03:55:59.685466 kernel: Bridge firewalling registered Dec 13 03:55:59.685482 systemd[1]: Started systemd-journald.service. Dec 13 03:55:59.699715 systemd-modules-load[269]: Inserted module 'br_netfilter' Dec 13 03:55:59.747222 kernel: audit: type=1130 audit(1734062159.706:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 03:55:59.702245 systemd-resolved[271]: Positive Trust Anchors: Dec 13 03:55:59.822625 kernel: SCSI subsystem initialized Dec 13 03:55:59.822638 kernel: audit: type=1130 audit(1734062159.759:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.822647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 03:55:59.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.702250 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:55:59.922773 kernel: device-mapper: uevent: version 1.0.3 Dec 13 03:55:59.922798 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 03:55:59.922830 kernel: audit: type=1130 audit(1734062159.879:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.702269 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:55:59.994648 kernel: audit: type=1130 audit(1734062159.921:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.703834 systemd-resolved[271]: Defaulting to hostname 'linux'. Dec 13 03:56:00.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:55:59.727660 systemd[1]: Started systemd-resolved.service. Dec 13 03:56:00.101992 kernel: audit: type=1130 audit(1734062160.003:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.102007 kernel: audit: type=1130 audit(1734062160.056:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 03:55:59.760606 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:55:59.880575 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 03:55:59.923075 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 03:55:59.966827 systemd-modules-load[269]: Inserted module 'dm_multipath' Dec 13 03:56:00.024353 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:56:00.056749 systemd[1]: Reached target nss-lookup.target. Dec 13 03:56:00.111095 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 03:56:00.130999 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:56:00.131290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 03:56:00.134166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 03:56:00.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.134894 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:56:00.183651 kernel: audit: type=1130 audit(1734062160.132:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.195773 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 03:56:00.259515 kernel: audit: type=1130 audit(1734062160.195:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.252101 systemd[1]: Starting dracut-cmdline.service... Dec 13 03:56:00.274466 dracut-cmdline[294]: dracut-dracut-053 Dec 13 03:56:00.274466 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 03:56:00.274466 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 03:56:00.342496 kernel: Loading iSCSI transport class v2.0-870. Dec 13 03:56:00.342512 kernel: iscsi: registered transport (tcp) Dec 13 03:56:00.399127 kernel: iscsi: registered transport (qla4xxx) Dec 13 03:56:00.399145 kernel: QLogic iSCSI HBA Driver Dec 13 03:56:00.415207 systemd[1]: Finished dracut-cmdline.service. Dec 13 03:56:00.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:00.424146 systemd[1]: Starting dracut-pre-udev.service... 
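dracut echoes the kernel command line it is about to act on. The split into bare flags and key=value options can be reproduced with a short sketch (simplified: it keeps only the last value for repeated keys such as console=):

```python
# Split a kernel command line into bare flags and key=value options, as dracut sees it.
import shlex
from pathlib import Path

tokens = shlex.split(Path("/proc/cmdline").read_text())
flags = [t for t in tokens if "=" not in t]
options = dict(t.split("=", 1) for t in tokens if "=" in t)
print(flags)
print(options.get("root"), options.get("mount.usr"))
```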
Dec 13 03:56:00.479493 kernel: raid6: avx2x4 gen() 46520 MB/s Dec 13 03:56:00.514459 kernel: raid6: avx2x4 xor() 21583 MB/s Dec 13 03:56:00.549500 kernel: raid6: avx2x2 gen() 53671 MB/s Dec 13 03:56:00.584495 kernel: raid6: avx2x2 xor() 32053 MB/s Dec 13 03:56:00.619461 kernel: raid6: avx2x1 gen() 45153 MB/s Dec 13 03:56:00.653459 kernel: raid6: avx2x1 xor() 27854 MB/s Dec 13 03:56:00.687459 kernel: raid6: sse2x4 gen() 21344 MB/s Dec 13 03:56:00.721498 kernel: raid6: sse2x4 xor() 11961 MB/s Dec 13 03:56:00.755494 kernel: raid6: sse2x2 gen() 21631 MB/s Dec 13 03:56:00.789494 kernel: raid6: sse2x2 xor() 13381 MB/s Dec 13 03:56:00.823497 kernel: raid6: sse2x1 gen() 18269 MB/s Dec 13 03:56:00.875006 kernel: raid6: sse2x1 xor() 8911 MB/s Dec 13 03:56:00.875021 kernel: raid6: using algorithm avx2x2 gen() 53671 MB/s Dec 13 03:56:00.875029 kernel: raid6: .... xor() 32053 MB/s, rmw enabled Dec 13 03:56:00.893032 kernel: raid6: using avx2x2 recovery algorithm Dec 13 03:56:00.938428 kernel: xor: automatically using best checksumming function avx Dec 13 03:56:01.017468 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 03:56:01.022852 systemd[1]: Finished dracut-pre-udev.service. Dec 13 03:56:01.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:01.021000 audit: BPF prog-id=7 op=LOAD Dec 13 03:56:01.021000 audit: BPF prog-id=8 op=LOAD Dec 13 03:56:01.023514 systemd[1]: Starting systemd-udevd.service... Dec 13 03:56:01.031389 systemd-udevd[473]: Using default interface naming scheme 'v252'. Dec 13 03:56:01.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:01.044821 systemd[1]: Started systemd-udevd.service. Dec 13 03:56:01.084554 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Dec 13 03:56:01.061123 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 03:56:01.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:01.087673 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 03:56:01.101477 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:56:01.152017 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:56:01.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:01.179561 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 03:56:01.216488 kernel: ACPI: bus type USB registered Dec 13 03:56:01.216522 kernel: usbcore: registered new interface driver usbfs Dec 13 03:56:01.216531 kernel: usbcore: registered new interface driver hub Dec 13 03:56:01.234136 kernel: usbcore: registered new device driver usb Dec 13 03:56:01.252431 kernel: libata version 3.00 loaded. Dec 13 03:56:01.252463 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 03:56:01.285035 kernel: AES CTR mode by8 optimization enabled Dec 13 03:56:01.285429 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 03:56:01.319027 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
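The raid6 lines are a boot-time benchmark: the kernel times every available gen()/xor() implementation and then adopts the fastest, which is why it settles on avx2x2 here. The selection is just a max over the measured rates (numbers copied from this log):

```python
# Reproduce the kernel's raid6 algorithm choice from the benchmark figures above.
gen_mbps = {
    "avx2x4": 46520, "avx2x2": 53671, "avx2x1": 45153,
    "sse2x4": 21344, "sse2x2": 21631, "sse2x1": 18269,
}
best = max(gen_mbps, key=gen_mbps.get)
print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 53671 MB/s
```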
Dec 13 03:56:01.358679 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Dec 13 03:56:02.293173 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:56:02.293241 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 03:56:02.293298 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:56:02.293350 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Dec 13 03:56:02.293398 kernel: pps pps0: new PPS source ptp0 Dec 13 03:56:02.293459 kernel: igb 0000:04:00.0: added PHC on eth0 Dec 13 03:56:02.293513 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:56:02.293562 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6a Dec 13 03:56:02.293616 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Dec 13 03:56:02.293668 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 03:56:02.293719 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 03:56:02.293767 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 03:56:02.293814 kernel: pps pps1: new PPS source ptp1 Dec 13 03:56:02.293867 kernel: igb 0000:05:00.0: added PHC on eth1 Dec 13 03:56:02.293919 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 03:56:02.293969 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:08:6b Dec 13 03:56:02.294018 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Dec 13 03:56:02.294069 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 03:56:02.294118 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 03:56:02.294165 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Dec 13 03:56:02.294214 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 03:56:02.294261 kernel: scsi host0: ahci Dec 13 03:56:02.294313 kernel: scsi host1: ahci Dec 13 03:56:02.294362 kernel: scsi host2: ahci Dec 13 03:56:02.294413 kernel: scsi host3: ahci Dec 13 03:56:02.294466 kernel: scsi host4: ahci Dec 13 03:56:02.294517 kernel: scsi host5: ahci Dec 13 03:56:02.294565 kernel: scsi host6: ahci Dec 13 03:56:02.294613 kernel: scsi host7: ahci Dec 13 03:56:02.294661 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 134 Dec 13 03:56:02.294671 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 134 Dec 13 03:56:02.294677 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 134 Dec 13 03:56:02.294684 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 134 Dec 13 03:56:02.294690 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 134 Dec 13 03:56:02.294697 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 134 Dec 13 03:56:02.294703 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 134 Dec 13 03:56:02.294709 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 134 Dec 13 03:56:02.294716 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:56:02.294767 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 03:56:02.294815 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.294822 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 03:56:02.294869 kernel: ata2: SATA link 
up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:56:02.294877 kernel: hub 1-0:1.0: USB hub found Dec 13 03:56:02.294939 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.294947 kernel: hub 1-0:1.0: 16 ports detected Dec 13 03:56:02.294998 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.295006 kernel: hub 2-0:1.0: USB hub found Dec 13 03:56:02.295061 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.295069 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Dec 13 03:56:02.295118 kernel: hub 2-0:1.0: 10 ports detected Dec 13 03:56:02.295170 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:56:02.295178 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.295184 kernel: ata8: SATA link down (SStatus 0 SControl 300) Dec 13 03:56:02.295191 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:56:02.295239 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 03:56:02.295248 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 03:56:02.295255 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 03:56:02.297372 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:56:02.297382 kernel: ata2.00: Features: NCQ-prio Dec 13 03:56:02.297389 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 03:56:02.297395 kernel: ata1.00: Features: NCQ-prio Dec 13 03:56:02.297402 kernel: ata2.00: configured for UDMA/133 Dec 13 03:56:02.297408 kernel: ata1.00: configured for UDMA/133 Dec 13 03:56:02.297417 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:56:02.719320 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 03:56:02.719420 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:02.719439 kernel: hub 1-14:1.0: USB hub found Dec 13 03:56:02.719513 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:56:02.719521 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:56:02.719591 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 03:56:02.719652 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Dec 13 03:56:02.719707 kernel: sd 1:0:0:0: [sda] Write Protect is off Dec 13 03:56:02.719761 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 03:56:02.719813 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:56:02.719872 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:02.719879 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 03:56:02.719886 kernel: GPT:9289727 != 937703087 Dec 13 03:56:02.719894 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 03:56:02.719901 kernel: GPT:9289727 != 937703087 Dec 13 03:56:02.719907 kernel: GPT: Use GNU Parted to correct GPT errors. 
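The GPT warnings are simple arithmetic: on a 937703088-sector disk the backup GPT header belongs at the last LBA, 937703087, but the header on this image points at LBA 9289727, i.e. the image was written for a smaller disk and has not been resized yet. A sketch of the check:

```python
# The kernel's alternate-GPT-header sanity check, with this log's numbers.
disk_sectors = 937703088             # "sd 1:0:0:0: [sda] 937703088 512-byte logical blocks"
expected_alt_lba = disk_sectors - 1  # GPT keeps the backup header at the last LBA
claimed_alt_lba = 9289727            # what the on-disk header actually says

if claimed_alt_lba != expected_alt_lba:
    print(f"GPT:{claimed_alt_lba} != {expected_alt_lba}")
    print("GPT:Alternate GPT header not at the end of the disk.")
```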
Dec 13 03:56:02.719913 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:56:02.719920 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:02.719926 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Dec 13 03:56:02.719980 kernel: hub 1-14:1.0: 4 ports detected Dec 13 03:56:02.720035 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:56:02.720091 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Dec 13 03:56:02.720151 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Dec 13 03:56:03.070517 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 03:56:03.070629 kernel: sd 0:0:0:0: [sdb] Write Protect is off Dec 13 03:56:03.070720 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 03:56:03.070802 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 03:56:03.070880 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 03:56:03.070933 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 03:56:03.071033 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:56:03.071041 kernel: port_module: 9 callbacks suppressed Dec 13 03:56:03.071048 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Dec 13 03:56:03.071103 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 03:56:03.071110 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 03:56:03.071160 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Dec 13 03:56:03.071212 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 03:56:03.071221 kernel: usbcore: registered new interface driver usbhid Dec 13 03:56:03.071228 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (526) Dec 13 03:56:03.071234 kernel: usbhid: USB HID core driver Dec 13 03:56:03.071241 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 03:56:03.071247 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 03:56:03.071310 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:03.071317 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 03:56:03.071325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:56:03.071331 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 03:56:03.071390 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 03:56:03.071473 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:03.071501 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:56:03.071508 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:02.750610 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 03:56:03.103514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:56:02.809586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 03:56:03.127548 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Dec 13 03:56:02.838519 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
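Unit names like dev-disk-by\x2dpartlabel-USR\x2dA.device come from systemd's path escaping: path separators become '-' and literal dashes become \x2d. A simplified sketch of that encoding (the real systemd-escape also \xNN-escapes other special bytes):

```python
# Simplified version of systemd's path-to-unit-name escaping (cf. `systemd-escape -p`).
def escape_path(path: str) -> str:
    parts = path.strip("/").split("/")
    return "-".join(p.replace("-", "\\x2d") for p in parts)

print(escape_path("/dev/disk/by-partlabel/USR-A") + ".device")
# -> dev-disk-by\x2dpartlabel-USR\x2dA.device
```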
Dec 13 03:56:03.143355 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Dec 13 03:56:03.143456 disk-uuid[674]: Primary Header is updated. Dec 13 03:56:03.143456 disk-uuid[674]: Secondary Entries is updated. Dec 13 03:56:03.143456 disk-uuid[674]: Secondary Header is updated. Dec 13 03:56:02.851392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 03:56:02.864987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 03:56:02.874128 systemd[1]: Starting disk-uuid.service... Dec 13 03:56:04.088860 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 03:56:04.109285 disk-uuid[675]: The operation has completed successfully. Dec 13 03:56:04.117513 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 03:56:04.149306 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 03:56:04.248968 kernel: audit: type=1130 audit(1734062164.156:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.248983 kernel: audit: type=1131 audit(1734062164.156:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.149351 systemd[1]: Finished disk-uuid.service. Dec 13 03:56:04.280464 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 03:56:04.157125 systemd[1]: Starting verity-setup.service... Dec 13 03:56:04.312396 systemd[1]: Found device dev-mapper-usr.device. Dec 13 03:56:04.322443 systemd[1]: Mounting sysusr-usr.mount... Dec 13 03:56:04.328695 systemd[1]: Finished verity-setup.service. Dec 13 03:56:04.403528 kernel: audit: type=1130 audit(1734062164.347:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.451794 systemd[1]: Mounted sysusr-usr.mount. Dec 13 03:56:04.465658 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 03:56:04.458736 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 03:56:04.459143 systemd[1]: Starting ignition-setup.service... Dec 13 03:56:04.562476 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:56:04.562490 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:56:04.562500 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:56:04.562507 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:56:04.466061 systemd[1]: Starting parse-ip-for-networkd.service... 
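The audit records carry raw epoch timestamps (e.g. audit(1734062164.156:19)) while the journal prefixes wall-clock times; the two notations agree, as a one-line conversion shows:

```python
# Convert an audit epoch timestamp to the wall-clock form used by the journal prefix.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1734062164.156, tz=timezone.utc))
# -> 2024-12-13 03:56:04.156000+00:00, matching "Dec 13 03:56:04.156"
```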
Dec 13 03:56:04.621473 kernel: audit: type=1130 audit(1734062164.571:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.548684 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 03:56:04.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.571996 systemd[1]: Finished ignition-setup.service. Dec 13 03:56:04.712782 kernel: audit: type=1130 audit(1734062164.630:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.712796 kernel: audit: type=1334 audit(1734062164.688:24): prog-id=9 op=LOAD Dec 13 03:56:04.688000 audit: BPF prog-id=9 op=LOAD Dec 13 03:56:04.631141 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 03:56:04.690452 systemd[1]: Starting systemd-networkd.service... Dec 13 03:56:04.779441 kernel: audit: type=1130 audit(1734062164.726:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.726295 systemd-networkd[879]: lo: Link UP Dec 13 03:56:04.753239 ignition[867]: Ignition 2.14.0 Dec 13 03:56:04.726298 systemd-networkd[879]: lo: Gained carrier Dec 13 03:56:04.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.753257 ignition[867]: Stage: fetch-offline Dec 13 03:56:04.949259 kernel: audit: type=1130 audit(1734062164.814:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.949275 kernel: audit: type=1130 audit(1734062164.874:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.949309 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:56:04.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.726601 systemd-networkd[879]: Enumeration completed Dec 13 03:56:04.981556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Dec 13 03:56:04.753335 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:04.726669 systemd[1]: Started systemd-networkd.service. 
Dec 13 03:56:04.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.753380 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:04.727342 systemd-networkd[879]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:56:04.763503 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:05.029561 iscsid[898]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:56:05.029561 iscsid[898]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 03:56:05.029561 iscsid[898]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 03:56:05.029561 iscsid[898]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 03:56:05.029561 iscsid[898]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 03:56:05.029561 iscsid[898]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 03:56:05.029561 iscsid[898]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 03:56:05.181737 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Dec 13 03:56:05.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.727603 systemd[1]: Reached target network.target. Dec 13 03:56:05.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:04.763569 ignition[867]: parsed url from cmdline: "" Dec 13 03:56:04.768173 unknown[867]: fetched base config from "system" Dec 13 03:56:04.763571 ignition[867]: no config URL provided Dec 13 03:56:04.768188 unknown[867]: fetched user config from "system" Dec 13 03:56:04.763574 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 03:56:04.788132 systemd[1]: Starting iscsiuio.service... Dec 13 03:56:04.763596 ignition[867]: parsing config with SHA512: 840cfa192293fe2e6d9e0316c5fb313c936c5c771d95a198559e1590a2ea3b1561cf336a044f4aac70c6fb5de66980bb143ee6e103689550ba6598cb8c9c1f13 Dec 13 03:56:04.801657 systemd[1]: Started iscsiuio.service. Dec 13 03:56:04.768713 ignition[867]: fetch-offline: fetch-offline passed Dec 13 03:56:04.814783 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 03:56:04.768728 ignition[867]: POST message to Packet Timeline Dec 13 03:56:04.874672 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 03:56:04.768747 ignition[867]: POST Status error: resource requires networking Dec 13 03:56:04.875139 systemd[1]: Starting ignition-kargs.service...
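The iscsid warnings are benign on this machine (no software iSCSI session is ever opened), and the file they ask for is a one-liner. A sketch that writes a syntactically valid name in the format the daemon describes (the domain and identifier below are placeholders, not values from this system):

```python
# Write /etc/iscsi/initiatorname.iscsi in the format iscsid asks for:
#   InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
iqn = "iqn.2024-12.com.example:ci-node"  # hypothetical reversed domain + identifier
with open("/etc/iscsi/initiatorname.iscsi", "w") as f:  # needs root
    f.write(f"InitiatorName={iqn}\n")
```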
Dec 13 03:56:04.768856 ignition[867]: Ignition finished successfully Dec 13 03:56:04.950244 systemd-networkd[879]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:56:04.953804 ignition[888]: Ignition 2.14.0 Dec 13 03:56:04.964017 systemd[1]: Starting iscsid.service... Dec 13 03:56:04.953807 ignition[888]: Stage: kargs Dec 13 03:56:04.988589 systemd[1]: Started iscsid.service. Dec 13 03:56:04.953862 ignition[888]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:04.999015 systemd[1]: Starting dracut-initqueue.service... Dec 13 03:56:04.953872 ignition[888]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:05.017631 systemd[1]: Finished dracut-initqueue.service. Dec 13 03:56:04.955191 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:05.044677 systemd[1]: Reached target remote-fs-pre.target. Dec 13 03:56:04.956571 ignition[888]: kargs: kargs passed Dec 13 03:56:05.055704 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:56:04.956575 ignition[888]: POST message to Packet Timeline Dec 13 03:56:05.097670 systemd[1]: Reached target remote-fs.target. Dec 13 03:56:04.956585 ignition[888]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:56:05.118721 systemd[1]: Starting dracut-pre-mount.service... Dec 13 03:56:04.960729 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55087->[::1]:53: read: connection refused Dec 13 03:56:05.154643 systemd[1]: Finished dracut-pre-mount.service. Dec 13 03:56:05.161119 ignition[888]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 03:56:05.169081 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 03:56:05.161525 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33366->[::1]:53: read: connection refused Dec 13 03:56:05.197706 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 03:56:05.226042 systemd-networkd[879]: enp2s0f1np1: Link UP Dec 13 03:56:05.226209 systemd-networkd[879]: enp2s0f1np1: Gained carrier Dec 13 03:56:05.242859 systemd-networkd[879]: enp2s0f0np0: Link UP Dec 13 03:56:05.243148 systemd-networkd[879]: eno2: Link UP Dec 13 03:56:05.243409 systemd-networkd[879]: eno1: Link UP Dec 13 03:56:05.562246 ignition[888]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 03:56:05.563752 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60516->[::1]:53: read: connection refused Dec 13 03:56:06.014175 systemd-networkd[879]: enp2s0f0np0: Gained carrier Dec 13 03:56:06.022674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Dec 13 03:56:06.051758 systemd-networkd[879]: enp2s0f0np0: DHCPv4 address 145.40.90.151/31, gateway 145.40.90.150 acquired from 145.40.83.140 Dec 13 03:56:06.246896 systemd-networkd[879]: enp2s0f1np1: Gained IPv6LL Dec 13 03:56:06.364222 ignition[888]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 03:56:06.365651 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60290->[::1]:53: read: connection refused Dec 13 03:56:07.334915 systemd-networkd[879]: enp2s0f0np0: Gained IPv6LL Dec 13 03:56:07.966599 ignition[888]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 03:56:07.967818 ignition[888]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52872->[::1]:53: read: connection refused Dec 13 03:56:11.171370 ignition[888]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 03:56:11.699119 ignition[888]: GET result: OK Dec 13 03:56:12.025691 ignition[888]: Ignition finished successfully Dec 13 03:56:12.027231 systemd[1]: Finished ignition-kargs.service. Dec 13 03:56:12.118227 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 03:56:12.118244 kernel: audit: type=1130 audit(1734062172.040:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:12.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:12.049329 ignition[919]: Ignition 2.14.0 Dec 13 03:56:12.042662 systemd[1]: Starting ignition-disks.service... Dec 13 03:56:12.049333 ignition[919]: Stage: disks Dec 13 03:56:12.049389 ignition[919]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:12.049398 ignition[919]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:12.050869 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:12.052492 ignition[919]: disks: disks passed Dec 13 03:56:12.052495 ignition[919]: POST message to Packet Timeline Dec 13 03:56:12.052506 ignition[919]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:56:13.232849 ignition[919]: GET result: OK Dec 13 03:56:13.650076 ignition[919]: Ignition finished successfully Dec 13 03:56:13.652505 systemd[1]: Finished ignition-disks.service. 
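Ignition's six GETs against metadata.packet.net fail with DNS errors until the NIC gains carrier and a DHCPv4 lease, and the gaps between attempts grow roughly geometrically (attempt #6, after the lease, finally returns OK). The shape is an ordinary retry loop with exponential backoff; a sketch of the pattern (delays and cap are illustrative, not Ignition's actual Go implementation):

```python
# Retry an HTTP GET with exponential backoff, like the "attempt #N" lines above.
import time
import urllib.request

def fetch_with_backoff(url: str, attempts: int = 6, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # URLError subclasses OSError; covers the DNS failures logged here
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay = min(delay * 2, 30)  # back off, capped

# fetch_with_backoff("https://metadata.packet.net/metadata")
```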
Dec 13 03:56:13.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:13.666980 systemd[1]: Reached target initrd-root-device.target. Dec 13 03:56:13.744709 kernel: audit: type=1130 audit(1734062173.665:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:13.730651 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:56:13.730687 systemd[1]: Reached target local-fs.target. Dec 13 03:56:13.753656 systemd[1]: Reached target sysinit.target. Dec 13 03:56:13.767657 systemd[1]: Reached target basic.target. Dec 13 03:56:13.781378 systemd[1]: Starting systemd-fsck-root.service... Dec 13 03:56:13.802446 systemd-fsck[934]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 03:56:13.815856 systemd[1]: Finished systemd-fsck-root.service. Dec 13 03:56:13.907268 kernel: audit: type=1130 audit(1734062173.823:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:13.907286 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 03:56:13.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:13.829946 systemd[1]: Mounting sysroot.mount... Dec 13 03:56:13.915088 systemd[1]: Mounted sysroot.mount. Dec 13 03:56:13.928712 systemd[1]: Reached target initrd-root-fs.target. Dec 13 03:56:13.945187 systemd[1]: Mounting sysroot-usr.mount... Dec 13 03:56:13.960531 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 03:56:13.975195 systemd[1]: Starting flatcar-static-network.service... Dec 13 03:56:13.990653 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 03:56:13.990772 systemd[1]: Reached target ignition-diskful.target. Dec 13 03:56:14.009810 systemd[1]: Mounted sysroot-usr.mount. Dec 13 03:56:14.032886 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:56:14.176394 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945) Dec 13 03:56:14.176415 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:56:14.176432 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:56:14.176441 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:56:14.176448 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:56:14.176521 coreos-metadata[941]: Dec 13 03:56:14.114 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:56:14.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.237451 kernel: audit: type=1130 audit(1734062174.184:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:14.237463 coreos-metadata[942]: Dec 13 03:56:14.114 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:56:14.045339 systemd[1]: Starting initrd-setup-root.service... Dec 13 03:56:14.112668 systemd[1]: Finished initrd-setup-root.service. Dec 13 03:56:14.279595 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 03:56:14.185738 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 03:56:14.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.333591 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory Dec 13 03:56:14.368640 kernel: audit: type=1130 audit(1734062174.305:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.247051 systemd[1]: Starting ignition-mount.service... Dec 13 03:56:14.375649 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 03:56:14.385654 ignition[1015]: INFO : Ignition 2.14.0 Dec 13 03:56:14.385654 ignition[1015]: INFO : Stage: mount Dec 13 03:56:14.385654 ignition[1015]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:14.385654 ignition[1015]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:14.385654 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:14.385654 ignition[1015]: INFO : mount: mount passed Dec 13 03:56:14.385654 ignition[1015]: INFO : POST message to Packet Timeline Dec 13 03:56:14.385654 ignition[1015]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:56:14.273057 systemd[1]: Starting sysroot-boot.service... Dec 13 03:56:14.474718 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 03:56:14.287034 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 03:56:14.287093 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 03:56:14.292533 systemd[1]: Finished sysroot-boot.service. Dec 13 03:56:14.534864 coreos-metadata[941]: Dec 13 03:56:14.534 INFO Fetch successful Dec 13 03:56:14.563777 coreos-metadata[941]: Dec 13 03:56:14.563 INFO wrote hostname ci-3510.3.6-a-840ab18f38 to /sysroot/etc/hostname Dec 13 03:56:14.564171 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 03:56:14.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.634623 kernel: audit: type=1130 audit(1734062174.576:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.891548 coreos-metadata[942]: Dec 13 03:56:14.891 INFO Fetch successful Dec 13 03:56:14.918578 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 03:56:14.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:14.955580 ignition[1015]: INFO : GET result: OK Dec 13 03:56:15.041655 kernel: audit: type=1130 audit(1734062174.917:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:15.041669 kernel: audit: type=1131 audit(1734062174.917:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:14.918641 systemd[1]: Finished flatcar-static-network.service. Dec 13 03:56:15.242976 ignition[1015]: INFO : Ignition finished successfully Dec 13 03:56:15.244149 systemd[1]: Finished ignition-mount.service. Dec 13 03:56:15.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:15.263200 systemd[1]: Starting ignition-files.service... Dec 13 03:56:15.334620 kernel: audit: type=1130 audit(1734062175.260:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:15.329344 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 03:56:15.391287 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1033) Dec 13 03:56:15.391302 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 03:56:15.391310 kernel: BTRFS info (device sda6): using free space tree Dec 13 03:56:15.415011 kernel: BTRFS info (device sda6): has skinny extents Dec 13 03:56:15.464427 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 03:56:15.465650 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
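The OEM partition on /dev/sda6 is mounted once more here for the files stage. Which mounts of a device are active at any point can be read back from /proc/self/mountinfo; a minimal sketch:

```python
# Print active mount points and filesystem types for a given source device.
def mounts_of(device: str):
    with open("/proc/self/mountinfo") as f:
        for line in f:
            fields = line.split()
            sep = fields.index("-")          # separator token before fstype/source
            fstype, source = fields[sep + 1], fields[sep + 2]
            if source == device:
                yield fields[4], fstype      # field 4 is the mount point

for mount_point, fstype in mounts_of("/dev/sda6"):
    print(mount_point, fstype)
```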
Dec 13 03:56:15.481550 ignition[1052]: INFO : Ignition 2.14.0 Dec 13 03:56:15.481550 ignition[1052]: INFO : Stage: files Dec 13 03:56:15.481550 ignition[1052]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:15.481550 ignition[1052]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:15.481550 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:15.481550 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping Dec 13 03:56:15.485124 unknown[1052]: wrote ssh authorized keys file for user: core Dec 13 03:56:15.555633 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 03:56:15.555633 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 03:56:15.555633 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 03:56:15.555633 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 03:56:15.555633 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 03:56:15.555633 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:56:15.555633 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 03:56:15.555633 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 03:56:15.659636 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 03:56:15.659636 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:56:15.659636 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 03:56:16.165645 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 03:56:16.226842 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 03:56:16.226842 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 03:56:16.274665 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1056) Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 03:56:16.274690 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743874215" Dec 13 03:56:16.274690 ignition[1052]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743874215": device or resource busy Dec 13 03:56:16.532762 ignition[1052]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem743874215", trying btrfs: device or resource busy Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743874215" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem743874215" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem743874215" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem743874215" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 03:56:16.532762 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 03:56:16.692588 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Dec 13 03:56:16.952464 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 
03:56:16.952464 ignition[1052]: INFO : files: op(10): [started] processing unit "packet-phone-home.service" Dec 13 03:56:16.952464 ignition[1052]: INFO : files: op(10): [finished] processing unit "packet-phone-home.service" Dec 13 03:56:16.952464 ignition[1052]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:56:16.952464 ignition[1052]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 03:56:16.952464 ignition[1052]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Dec 13 03:56:16.952464 ignition[1052]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 03:56:17.049725 ignition[1052]: INFO : files: files passed Dec 13 03:56:17.049725 ignition[1052]: INFO : POST message to Packet Timeline Dec 13 03:56:17.049725 ignition[1052]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:56:17.458743 ignition[1052]: INFO : GET result: OK Dec 13 03:56:17.802142 ignition[1052]: INFO : Ignition finished successfully Dec 13 03:56:17.804854 systemd[1]: Finished ignition-files.service. Dec 13 03:56:17.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.825603 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 03:56:17.896702 kernel: audit: type=1130 audit(1734062177.818:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.886694 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
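Each Ignition stage above logs the SHA512 of the config it parsed (the base.ign digest beginning 0131bd50... recurs from fetch-offline through files), which makes it possible to match a config file to a boot after the fact. The digest is a plain hash of the file bytes:

```python
# Recompute the SHA512 digest Ignition logs when it parses a config file.
import hashlib
from pathlib import Path

data = Path("/usr/lib/ignition/base.d/base.ign").read_bytes()
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```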
Dec 13 03:56:17.920622 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 03:56:17.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.887128 systemd[1]: Starting ignition-quench.service... Dec 13 03:56:18.111666 kernel: audit: type=1130 audit(1734062177.929:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.111682 kernel: audit: type=1130 audit(1734062177.998:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.111690 kernel: audit: type=1131 audit(1734062177.998:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.903946 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 03:56:17.930907 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 03:56:17.930969 systemd[1]: Finished ignition-quench.service. Dec 13 03:56:18.266189 kernel: audit: type=1130 audit(1734062178.152:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.266203 kernel: audit: type=1131 audit(1734062178.152:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:17.998709 systemd[1]: Reached target ignition-complete.target. Dec 13 03:56:18.121109 systemd[1]: Starting initrd-parse-etc.service... Dec 13 03:56:18.141288 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 03:56:18.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.141333 systemd[1]: Finished initrd-parse-etc.service. Dec 13 03:56:18.387656 kernel: audit: type=1130 audit(1734062178.313:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:18.172830 systemd[1]: Reached target initrd-fs.target. Dec 13 03:56:18.274642 systemd[1]: Reached target initrd.target. Dec 13 03:56:18.274773 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 03:56:18.275121 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 03:56:18.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.295783 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 03:56:18.521670 kernel: audit: type=1131 audit(1734062178.444:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.314268 systemd[1]: Starting initrd-cleanup.service... Dec 13 03:56:18.382495 systemd[1]: Stopped target nss-lookup.target. Dec 13 03:56:18.396694 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 03:56:18.413664 systemd[1]: Stopped target timers.target. Dec 13 03:56:18.420722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 03:56:18.420796 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 03:56:18.445880 systemd[1]: Stopped target initrd.target. Dec 13 03:56:18.514736 systemd[1]: Stopped target basic.target. Dec 13 03:56:18.528687 systemd[1]: Stopped target ignition-complete.target. Dec 13 03:56:18.549709 systemd[1]: Stopped target ignition-diskful.target. Dec 13 03:56:18.567833 systemd[1]: Stopped target initrd-root-device.target. Dec 13 03:56:18.582762 systemd[1]: Stopped target remote-fs.target. Dec 13 03:56:18.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.600902 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 03:56:18.783676 kernel: audit: type=1131 audit(1734062178.696:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.616046 systemd[1]: Stopped target sysinit.target. Dec 13 03:56:18.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.632036 systemd[1]: Stopped target local-fs.target. Dec 13 03:56:18.868670 kernel: audit: type=1131 audit(1734062178.792:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.647020 systemd[1]: Stopped target local-fs-pre.target. Dec 13 03:56:18.665008 systemd[1]: Stopped target swap.target. Dec 13 03:56:18.681028 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 03:56:18.681398 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 03:56:18.698250 systemd[1]: Stopped target cryptsetup.target. Dec 13 03:56:18.774724 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
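
Each kernel-echoed audit record carries an audit(EPOCH.MSEC:SERIAL) stamp that is the same instant as the journal's wall-clock prefix, just in epoch form; for example audit(1734062178.444:47) above is Dec 13 03:56:18.444 UTC. A quick decode:

import re
from datetime import datetime, timezone

def audit_stamp(record):
    """Extract (UTC wall-clock time, serial) from an audit record."""
    epoch, serial = re.search(r'audit\((\d+\.\d+):(\d+)\)', record).groups()
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc), int(serial)

when, serial = audit_stamp("audit: type=1131 audit(1734062178.444:47): pid=1 ...")
print(when.strftime('%b %d %H:%M:%S.%f'), serial)
# Dec 13 03:56:18.444000 47 -- matches the journal prefix on the same record
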
Dec 13 03:56:18.774803 systemd[1]: Stopped dracut-initqueue.service. Dec 13 03:56:18.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.792794 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 03:56:18.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.792866 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 03:56:18.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.861816 systemd[1]: Stopped target paths.target. Dec 13 03:56:19.022597 ignition[1103]: INFO : Ignition 2.14.0 Dec 13 03:56:19.022597 ignition[1103]: INFO : Stage: umount Dec 13 03:56:19.022597 ignition[1103]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 03:56:19.022597 ignition[1103]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 03:56:19.022597 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 03:56:19.022597 ignition[1103]: INFO : umount: umount passed Dec 13 03:56:19.022597 ignition[1103]: INFO : POST message to Packet Timeline Dec 13 03:56:19.022597 ignition[1103]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 03:56:19.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:18.875722 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 03:56:18.879656 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 03:56:18.891691 systemd[1]: Stopped target slices.target. Dec 13 03:56:18.898718 systemd[1]: Stopped target sockets.target. Dec 13 03:56:18.921704 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 03:56:18.921785 systemd[1]: Closed iscsid.socket. Dec 13 03:56:18.935793 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 03:56:18.935902 systemd[1]: Closed iscsiuio.socket. Dec 13 03:56:18.949897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
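
The umount-stage Ignition instance above logs the SHA512 of the base config it parsed. Assuming, as the log wording suggests, that this is a plain SHA-512 over the raw bytes of /usr/lib/ignition/base.d/base.ign, the digest can be reproduced for verification:

import hashlib

# Assumption: the logged digest is SHA-512 over the file's raw bytes.
with open('/usr/lib/ignition/base.d/base.ign', 'rb') as f:
    print(hashlib.sha512(f.read()).hexdigest())
# should print 0131bd505bfe1b12... per the "parsing config with SHA512" line
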
Dec 13 03:56:18.950138 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 03:56:18.968089 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 03:56:18.968450 systemd[1]: Stopped ignition-files.service. Dec 13 03:56:18.982745 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 03:56:18.982825 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 03:56:19.001394 systemd[1]: Stopping ignition-mount.service... Dec 13 03:56:19.016126 systemd[1]: Stopping sysroot-boot.service... Dec 13 03:56:19.029590 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 03:56:19.029684 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 03:56:19.050943 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 03:56:19.051089 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 03:56:19.075012 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 03:56:19.075937 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 03:56:19.075981 systemd[1]: Finished initrd-cleanup.service. Dec 13 03:56:19.115659 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 03:56:19.115756 systemd[1]: Stopped sysroot-boot.service. Dec 13 03:56:19.523456 ignition[1103]: INFO : GET result: OK Dec 13 03:56:19.857827 ignition[1103]: INFO : Ignition finished successfully Dec 13 03:56:19.860640 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 03:56:19.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.860960 systemd[1]: Stopped ignition-mount.service. Dec 13 03:56:19.875001 systemd[1]: Stopped target network.target. Dec 13 03:56:19.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.890644 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 03:56:19.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.890874 systemd[1]: Stopped ignition-disks.service. Dec 13 03:56:19.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.905833 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 03:56:19.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.905955 systemd[1]: Stopped ignition-kargs.service. Dec 13 03:56:19.920932 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 03:56:19.921081 systemd[1]: Stopped ignition-setup.service. Dec 13 03:56:19.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.936927 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 03:56:20.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.014000 audit: BPF prog-id=6 op=UNLOAD Dec 13 03:56:19.937071 systemd[1]: Stopped initrd-setup-root.service. Dec 13 03:56:19.952212 systemd[1]: Stopping systemd-networkd.service... Dec 13 03:56:19.962582 systemd-networkd[879]: enp2s0f0np0: DHCPv6 lease lost Dec 13 03:56:20.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.967955 systemd[1]: Stopping systemd-resolved.service... Dec 13 03:56:20.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.973594 systemd-networkd[879]: enp2s0f1np1: DHCPv6 lease lost Dec 13 03:56:20.086000 audit: BPF prog-id=9 op=UNLOAD Dec 13 03:56:20.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:19.983333 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 03:56:19.983592 systemd[1]: Stopped systemd-resolved.service. Dec 13 03:56:20.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.001601 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 03:56:20.001840 systemd[1]: Stopped systemd-networkd.service. Dec 13 03:56:20.015947 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 03:56:20.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.016074 systemd[1]: Closed systemd-networkd.socket. Dec 13 03:56:20.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.032989 systemd[1]: Stopping network-cleanup.service... Dec 13 03:56:20.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.039649 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 03:56:20.039687 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 03:56:20.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.060734 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:56:20.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.060803 systemd[1]: Stopped systemd-sysctl.service. 
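
The two DHCPv6 lease-loss lines name the NICs in systemd's predictable scheme: enp2s0f0np0 and enp2s0f1np1 are two functions of one PCI Ethernet device (bus 2, slot 0, functions 0 and 1) with a phys-port suffix. A rough decoder; note this regex is a sketch and the real udev naming grammar has many more variants:

import re

# en = Ethernet; p<bus> s<slot> f<function>; optional n<phys_port_name>
NAME_RE = re.compile(r'^enp(?P<bus>\d+)s(?P<slot>\d+)f(?P<fn>\d+)(?:n(?P<port>\w+))?$')

def decode(ifname):
    m = NAME_RE.match(ifname)
    return m.groupdict() if m else None

print(decode('enp2s0f0np0'))  # {'bus': '2', 'slot': '0', 'fn': '0', 'port': 'p0'}
print(decode('enp2s0f1np1'))  # {'bus': '2', 'slot': '0', 'fn': '1', 'port': 'p1'}
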
Dec 13 03:56:20.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.077870 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 03:56:20.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.077977 systemd[1]: Stopped systemd-modules-load.service. Dec 13 03:56:20.096052 systemd[1]: Stopping systemd-udevd.service... Dec 13 03:56:20.115386 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 03:56:20.116713 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 03:56:20.116772 systemd[1]: Stopped systemd-udevd.service. Dec 13 03:56:20.121762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 03:56:20.121787 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 03:56:20.141605 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 03:56:20.141629 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 03:56:20.157595 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 03:56:20.157639 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 03:56:20.172762 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 03:56:20.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:20.172851 systemd[1]: Stopped dracut-cmdline.service. Dec 13 03:56:20.188816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 03:56:20.188966 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 03:56:20.205689 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 03:56:20.218496 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 03:56:20.476232 iscsid[898]: iscsid shutting down. Dec 13 03:56:20.218528 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 03:56:20.238356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 03:56:20.238499 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 03:56:20.253674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 03:56:20.253784 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 03:56:20.272194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 03:56:20.273556 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 03:56:20.273782 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 03:56:20.476445 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Dec 13 03:56:20.383216 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 03:56:20.383472 systemd[1]: Stopped network-cleanup.service. Dec 13 03:56:20.397084 systemd[1]: Reached target initrd-switch-root.target. Dec 13 03:56:20.414499 systemd[1]: Starting initrd-switch-root.service... Dec 13 03:56:20.431633 systemd[1]: Switching root. 
Dec 13 03:56:20.476563 systemd-journald[268]: Journal stopped Dec 13 03:56:24.619952 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 03:56:24.619966 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 03:56:24.619974 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 03:56:24.619979 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 03:56:24.619984 kernel: SELinux: policy capability open_perms=1 Dec 13 03:56:24.619989 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 03:56:24.619995 kernel: SELinux: policy capability always_check_network=0 Dec 13 03:56:24.620002 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 03:56:24.620007 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 03:56:24.620013 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 03:56:24.620018 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 03:56:24.620024 systemd[1]: Successfully loaded SELinux policy in 304.109ms. Dec 13 03:56:24.620031 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.938ms. Dec 13 03:56:24.620037 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 03:56:24.620045 systemd[1]: Detected architecture x86-64. Dec 13 03:56:24.620051 systemd[1]: Detected first boot. Dec 13 03:56:24.620057 systemd[1]: Hostname set to . Dec 13 03:56:24.620063 systemd[1]: Initializing machine ID from random generator. Dec 13 03:56:24.620069 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 03:56:24.620076 systemd[1]: Populated /etc with preset unit settings. Dec 13 03:56:24.620082 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:56:24.620089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:56:24.620096 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:56:24.620102 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 03:56:24.620108 systemd[1]: Stopped iscsiuio.service. Dec 13 03:56:24.620114 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 03:56:24.620121 systemd[1]: Stopped iscsid.service. Dec 13 03:56:24.620127 kernel: kauditd_printk_skb: 63 callbacks suppressed Dec 13 03:56:24.620133 kernel: audit: type=1131 audit(1734062182.852:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620139 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 03:56:24.620145 systemd[1]: Stopped initrd-switch-root.service. 
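
The systemd 252 banner's +/- string above is the compile-time feature list, one token per feature, and splits mechanically into enabled and disabled sets:

# Copied verbatim from the systemd 252 banner above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP +SYSVINIT")

enabled  = {f[1:] for f in features.split() if f[0] == '+'}
disabled = {f[1:] for f in features.split() if f[0] == '-'}
print('SELINUX' in enabled, 'APPARMOR' in disabled)  # True True
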
Dec 13 03:56:24.620151 kernel: audit: type=1130 audit(1734062182.976:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620158 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 03:56:24.620164 kernel: audit: type=1131 audit(1734062182.976:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620170 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 03:56:24.620176 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 03:56:24.620182 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 03:56:24.620189 systemd[1]: Created slice system-getty.slice. Dec 13 03:56:24.620195 systemd[1]: Created slice system-modprobe.slice. Dec 13 03:56:24.620201 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 03:56:24.620209 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 03:56:24.620216 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 03:56:24.620222 systemd[1]: Created slice user.slice. Dec 13 03:56:24.620228 systemd[1]: Started systemd-ask-password-console.path. Dec 13 03:56:24.620235 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 03:56:24.620241 systemd[1]: Set up automount boot.automount. Dec 13 03:56:24.620247 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 03:56:24.620254 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 03:56:24.620261 systemd[1]: Stopped target initrd-fs.target. Dec 13 03:56:24.620267 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 03:56:24.620274 systemd[1]: Reached target integritysetup.target. Dec 13 03:56:24.620280 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 03:56:24.620286 systemd[1]: Reached target remote-fs.target. Dec 13 03:56:24.620292 systemd[1]: Reached target slices.target. Dec 13 03:56:24.620299 systemd[1]: Reached target swap.target. Dec 13 03:56:24.620305 systemd[1]: Reached target torcx.target. Dec 13 03:56:24.620313 systemd[1]: Reached target veritysetup.target. Dec 13 03:56:24.620319 systemd[1]: Listening on systemd-coredump.socket. Dec 13 03:56:24.620326 systemd[1]: Listening on systemd-initctl.socket. Dec 13 03:56:24.620332 systemd[1]: Listening on systemd-networkd.socket. Dec 13 03:56:24.620339 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 03:56:24.620345 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 03:56:24.620352 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 03:56:24.620359 systemd[1]: Mounting dev-hugepages.mount... Dec 13 03:56:24.620365 systemd[1]: Mounting dev-mqueue.mount... Dec 13 03:56:24.620372 systemd[1]: Mounting media.mount... Dec 13 03:56:24.620378 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:24.620384 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 03:56:24.620391 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 03:56:24.620397 systemd[1]: Mounting tmp.mount... Dec 13 03:56:24.620404 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 03:56:24.620411 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
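
Slice names such as system-addon\x2dconfig.slice above (and the earlier run-credentials-systemd\x2dsysctl.service.mount) use systemd's unit-name escaping, in which a literal '-' inside a path component is written \x2d. A minimal unescape covering only the \xNN form, not the full systemd-escape grammar:

import re

def unescape_unit(name):
    """Undo systemd's \\xNN escaping in a unit name."""
    return re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit(r'system-addon\x2dconfig.slice'))
# system-addon-config.slice
print(unescape_unit(r'system-coreos\x2dmetadata\x2dsshkeys.slice'))
# system-coreos-metadata-sshkeys.slice
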
Dec 13 03:56:24.620418 systemd[1]: Starting kmod-static-nodes.service... Dec 13 03:56:24.620427 systemd[1]: Starting modprobe@configfs.service... Dec 13 03:56:24.620433 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:56:24.620440 systemd[1]: Starting modprobe@drm.service... Dec 13 03:56:24.620446 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:56:24.620453 systemd[1]: Starting modprobe@fuse.service... Dec 13 03:56:24.620460 kernel: fuse: init (API version 7.34) Dec 13 03:56:24.620466 systemd[1]: Starting modprobe@loop.service... Dec 13 03:56:24.620473 kernel: loop: module loaded Dec 13 03:56:24.620480 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 03:56:24.620486 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 03:56:24.620493 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 03:56:24.620499 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 03:56:24.620506 kernel: audit: type=1131 audit(1734062184.311:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620512 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 03:56:24.620518 kernel: audit: type=1131 audit(1734062184.386:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620525 systemd[1]: Stopped systemd-journald.service. Dec 13 03:56:24.620532 kernel: audit: type=1130 audit(1734062184.450:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620538 kernel: audit: type=1131 audit(1734062184.450:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.620544 kernel: audit: type=1334 audit(1734062184.534:113): prog-id=18 op=LOAD Dec 13 03:56:24.620549 kernel: audit: type=1334 audit(1734062184.552:114): prog-id=19 op=LOAD Dec 13 03:56:24.620555 kernel: audit: type=1334 audit(1734062184.570:115): prog-id=20 op=LOAD Dec 13 03:56:24.620561 systemd[1]: Starting systemd-journald.service... Dec 13 03:56:24.620567 systemd[1]: Starting systemd-modules-load.service... Dec 13 03:56:24.620576 systemd-journald[1257]: Journal started Dec 13 03:56:24.620601 systemd-journald[1257]: Runtime Journal (/run/log/journal/affdf8a7bf5c45c5b75c15c682966128) is 8.0M, max 639.3M, 631.3M free. 
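
journald's size line for the runtime journal above is self-consistent: in-use plus free equals the stated maximum. Parsing it (treating journald's M suffix uniformly, an assumption of this sketch):

import re

line = ('Runtime Journal (/run/log/journal/affdf8a7bf5c45c5b75c15c682966128) '
        'is 8.0M, max 639.3M, 631.3M free.')

used, cap, free = (float(v) for v in
                   re.search(r'is ([\d.]+)M, max ([\d.]+)M, ([\d.]+)M free',
                             line).groups())
print(round(cap - free, 1))  # 8.0 -- matches the in-use figure
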
Dec 13 03:56:20.841000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 03:56:21.113000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:56:21.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:56:21.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 03:56:21.116000 audit: BPF prog-id=10 op=LOAD Dec 13 03:56:21.116000 audit: BPF prog-id=10 op=UNLOAD Dec 13 03:56:21.116000 audit: BPF prog-id=11 op=LOAD Dec 13 03:56:21.116000 audit: BPF prog-id=11 op=UNLOAD Dec 13 03:56:21.186000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 03:56:21.186000 audit[1147]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001d98a2 a1=c00015adf8 a2=c0001630c0 a3=32 items=0 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:56:21.186000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:56:21.212000 audit[1147]: AVC avc: denied { associate } for pid=1147 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 03:56:21.212000 audit[1147]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d9979 a2=1ed a3=0 items=2 ppid=1130 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:56:21.212000 audit: CWD cwd="/" Dec 13 03:56:21.212000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:21.212000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:21.212000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 03:56:22.747000 audit: BPF prog-id=12 op=LOAD Dec 13 03:56:22.747000 audit: BPF prog-id=3 op=UNLOAD Dec 13 03:56:22.747000 audit: BPF prog-id=13 op=LOAD Dec 13 03:56:22.747000 audit: BPF prog-id=14 
op=LOAD Dec 13 03:56:22.747000 audit: BPF prog-id=4 op=UNLOAD Dec 13 03:56:22.747000 audit: BPF prog-id=5 op=UNLOAD Dec 13 03:56:22.748000 audit: BPF prog-id=15 op=LOAD Dec 13 03:56:22.748000 audit: BPF prog-id=12 op=UNLOAD Dec 13 03:56:22.748000 audit: BPF prog-id=16 op=LOAD Dec 13 03:56:22.748000 audit: BPF prog-id=17 op=LOAD Dec 13 03:56:22.748000 audit: BPF prog-id=13 op=UNLOAD Dec 13 03:56:22.748000 audit: BPF prog-id=14 op=UNLOAD Dec 13 03:56:22.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:22.797000 audit: BPF prog-id=15 op=UNLOAD Dec 13 03:56:22.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:22.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:22.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:22.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:24.534000 audit: BPF prog-id=18 op=LOAD Dec 13 03:56:24.552000 audit: BPF prog-id=19 op=LOAD Dec 13 03:56:24.570000 audit: BPF prog-id=20 op=LOAD Dec 13 03:56:24.588000 audit: BPF prog-id=16 op=UNLOAD Dec 13 03:56:24.588000 audit: BPF prog-id=17 op=UNLOAD Dec 13 03:56:24.616000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 03:56:24.616000 audit[1257]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe7a2595f0 a2=4000 a3=7ffe7a25968c items=0 ppid=1 pid=1257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:56:24.616000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 03:56:22.746878 systemd[1]: Queued start job for default target multi-user.target. Dec 13 03:56:21.185673 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:56:22.750545 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 03:56:21.186148 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:56:21.186164 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:56:21.186186 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 03:56:21.186194 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 03:56:21.186214 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 03:56:21.186223 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 03:56:21.186361 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 03:56:21.186390 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 03:56:21.186400 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 03:56:21.187127 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 03:56:21.187152 /usr/lib/systemd/system-generators/torcx-generator[1147]: 
time="2024-12-13T03:56:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 03:56:21.187166 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 03:56:21.187177 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 03:56:21.187189 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 03:56:21.187199 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 03:56:22.395318 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:56:22.395465 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:56:22.395522 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:56:22.395614 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 03:56:22.395643 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 03:56:22.395676 /usr/lib/systemd/system-generators/torcx-generator[1147]: time="2024-12-13T03:56:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 03:56:24.658480 systemd[1]: Starting systemd-network-generator.service... Dec 13 03:56:24.684620 systemd[1]: Starting systemd-remount-fs.service... Dec 13 03:56:24.709429 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 03:56:24.750630 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 03:56:24.750686 systemd[1]: Stopped verity-setup.service. 
Dec 13 03:56:24.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.793480 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:24.811428 systemd[1]: Started systemd-journald.service. Dec 13 03:56:24.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.819980 systemd[1]: Mounted dev-hugepages.mount. Dec 13 03:56:24.827703 systemd[1]: Mounted dev-mqueue.mount. Dec 13 03:56:24.834680 systemd[1]: Mounted media.mount. Dec 13 03:56:24.841694 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 03:56:24.850681 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 03:56:24.858668 systemd[1]: Mounted tmp.mount. Dec 13 03:56:24.865745 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 03:56:24.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.873761 systemd[1]: Finished kmod-static-nodes.service. Dec 13 03:56:24.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.881783 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 03:56:24.881888 systemd[1]: Finished modprobe@configfs.service. Dec 13 03:56:24.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.890856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:56:24.890989 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:56:24.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.899986 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:56:24.900178 systemd[1]: Finished modprobe@drm.service. Dec 13 03:56:24.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 03:56:24.909093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:56:24.909328 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:56:24.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.918381 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 03:56:24.918798 systemd[1]: Finished modprobe@fuse.service. Dec 13 03:56:24.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.928358 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:56:24.928778 systemd[1]: Finished modprobe@loop.service. Dec 13 03:56:24.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.937367 systemd[1]: Finished systemd-modules-load.service. Dec 13 03:56:24.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.946388 systemd[1]: Finished systemd-network-generator.service. Dec 13 03:56:24.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.955294 systemd[1]: Finished systemd-remount-fs.service. Dec 13 03:56:24.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.964251 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 03:56:24.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:24.973948 systemd[1]: Reached target network-pre.target. Dec 13 03:56:24.985279 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 03:56:24.996038 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 03:56:25.002685 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 03:56:25.003638 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 03:56:25.012000 systemd[1]: Starting systemd-journal-flush.service... Dec 13 03:56:25.015160 systemd-journald[1257]: Time spent on flushing to /var/log/journal/affdf8a7bf5c45c5b75c15c682966128 is 14.638ms for 1626 entries. Dec 13 03:56:25.015160 systemd-journald[1257]: System Journal (/var/log/journal/affdf8a7bf5c45c5b75c15c682966128) is 8.0M, max 195.6M, 187.6M free. Dec 13 03:56:25.059298 systemd-journald[1257]: Received client request to flush runtime journal. Dec 13 03:56:25.027540 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:56:25.028032 systemd[1]: Starting systemd-random-seed.service... Dec 13 03:56:25.043576 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:56:25.044077 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:56:25.051028 systemd[1]: Starting systemd-sysusers.service... Dec 13 03:56:25.058112 systemd[1]: Starting systemd-udev-settle.service... Dec 13 03:56:25.067111 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 03:56:25.076584 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 03:56:25.084647 systemd[1]: Finished systemd-journal-flush.service. Dec 13 03:56:25.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.092669 systemd[1]: Finished systemd-random-seed.service. Dec 13 03:56:25.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.100656 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:56:25.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.108610 systemd[1]: Finished systemd-sysusers.service. Dec 13 03:56:25.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.118404 systemd[1]: Reached target first-boot-complete.target. Dec 13 03:56:25.127400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 03:56:25.136977 udevadm[1273]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 03:56:25.147456 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 03:56:25.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.311523 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 03:56:25.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:25.319000 audit: BPF prog-id=21 op=LOAD Dec 13 03:56:25.319000 audit: BPF prog-id=22 op=LOAD Dec 13 03:56:25.319000 audit: BPF prog-id=7 op=UNLOAD Dec 13 03:56:25.319000 audit: BPF prog-id=8 op=UNLOAD Dec 13 03:56:25.320684 systemd[1]: Starting systemd-udevd.service... Dec 13 03:56:25.332170 systemd-udevd[1276]: Using default interface naming scheme 'v252'. Dec 13 03:56:25.350612 systemd[1]: Started systemd-udevd.service. Dec 13 03:56:25.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:25.360594 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Dec 13 03:56:25.360000 audit: BPF prog-id=23 op=LOAD Dec 13 03:56:25.361885 systemd[1]: Starting systemd-networkd.service... Dec 13 03:56:25.384437 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 03:56:25.384512 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 03:56:25.404000 audit: BPF prog-id=24 op=LOAD Dec 13 03:56:25.406431 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 03:56:25.423433 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1336) Dec 13 03:56:25.447430 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 03:56:25.465000 audit: BPF prog-id=25 op=LOAD Dec 13 03:56:25.465000 audit: BPF prog-id=26 op=LOAD Dec 13 03:56:25.467914 systemd[1]: Starting systemd-userdbd.service... Dec 13 03:56:25.384000 audit[1284]: AVC avc: denied { confidentiality } for pid=1284 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 03:56:25.487440 kernel: IPMI message handler: version 39.2 Dec 13 03:56:25.501588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 03:56:25.527377 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 03:56:25.565319 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 03:56:25.565480 kernel: ACPI: button: Power Button [PWRF] Dec 13 03:56:25.565506 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Dec 13 03:56:25.384000 audit[1284]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c3f1fe06f0 a1=4d98c a2=7fa37b4b9bc5 a3=5 items=42 ppid=1276 pid=1284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:56:25.384000 audit: CWD cwd="/" Dec 13 03:56:25.384000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=1 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=2 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=3 name=(null) inode=10909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=4 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=5 name=(null) inode=10910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=6 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=7 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=8 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=9 name=(null) inode=10912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=10 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=11 name=(null) inode=10913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=12 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH 
item=13 name=(null) inode=10914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=14 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=15 name=(null) inode=10915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=16 name=(null) inode=10911 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=17 name=(null) inode=10916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=18 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=19 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=20 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=21 name=(null) inode=10918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=22 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=23 name=(null) inode=10919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=24 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=25 name=(null) inode=10920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=26 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=27 name=(null) inode=10921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=28 name=(null) inode=10917 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=29 name=(null) inode=10922 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=30 name=(null) inode=10908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=31 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=32 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=33 name=(null) inode=10924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=34 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=35 name=(null) inode=10925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=36 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=37 name=(null) inode=10926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=38 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=39 name=(null) inode=10927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=40 name=(null) inode=10923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PATH item=41 name=(null) inode=10928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 03:56:25.384000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 03:56:25.576507 (udev-worker)[1299]: could not read from '/sys/module/pcc_cpufreq/initstate': No such device Dec 13 03:56:25.581733 systemd[1]: Started systemd-userdbd.service. Dec 13 03:56:25.584432 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 03:56:25.584477 kernel: ipmi device interface Dec 13 03:56:25.584497 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 03:56:25.584622 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 03:56:25.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 03:56:25.704439 kernel: ipmi_si: IPMI System Interface driver Dec 13 03:56:25.704496 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Dec 13 03:56:25.704578 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 03:56:25.764089 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 03:56:25.764104 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 03:56:25.764114 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 03:56:25.847675 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 03:56:25.847926 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 03:56:25.848149 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 03:56:25.848179 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 03:56:25.928240 kernel: intel_rapl_common: Found RAPL domain package Dec 13 03:56:25.928288 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 03:56:25.928394 kernel: intel_rapl_common: Found RAPL domain core Dec 13 03:56:25.946429 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Dec 13 03:56:25.946528 kernel: intel_rapl_common: Found RAPL domain uncore Dec 13 03:56:25.946550 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 03:56:26.044801 systemd-networkd[1318]: bond0: netdev ready Dec 13 03:56:26.047046 systemd-networkd[1318]: lo: Link UP Dec 13 03:56:26.047050 systemd-networkd[1318]: lo: Gained carrier Dec 13 03:56:26.047564 systemd-networkd[1318]: Enumeration completed Dec 13 03:56:26.047656 systemd[1]: Started systemd-networkd.service. Dec 13 03:56:26.047928 systemd-networkd[1318]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 03:56:26.051079 systemd-networkd[1318]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1b.network. Dec 13 03:56:26.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.113465 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 03:56:26.131446 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 03:56:26.137641 systemd[1]: Finished systemd-udev-settle.service. Dec 13 03:56:26.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.146122 systemd[1]: Starting lvm2-activation-early.service... Dec 13 03:56:26.161598 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 03:56:26.198914 systemd[1]: Finished lvm2-activation-early.service. Dec 13 03:56:26.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.207556 systemd[1]: Reached target cryptsetup.target. Dec 13 03:56:26.216089 systemd[1]: Starting lvm2-activation.service... Dec 13 03:56:26.218348 lvm[1380]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Dec 13 03:56:26.251852 systemd[1]: Finished lvm2-activation.service. Dec 13 03:56:26.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.260550 systemd[1]: Reached target local-fs-pre.target. Dec 13 03:56:26.268506 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 03:56:26.268521 systemd[1]: Reached target local-fs.target. Dec 13 03:56:26.276512 systemd[1]: Reached target machines.target. Dec 13 03:56:26.285080 systemd[1]: Starting ldconfig.service... Dec 13 03:56:26.292076 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:56:26.292112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:56:26.292714 systemd[1]: Starting systemd-boot-update.service... Dec 13 03:56:26.300013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 03:56:26.310255 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 03:56:26.311082 systemd[1]: Starting systemd-sysext.service... Dec 13 03:56:26.311272 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Dec 13 03:56:26.311922 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 03:56:26.319887 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 03:56:26.330978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 03:56:26.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.335922 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 03:56:26.336128 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 03:56:26.371429 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 03:56:26.443462 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 03:56:26.443931 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 03:56:26.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.499438 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 03:56:26.499541 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:56:26.515427 systemd-fsck[1392]: fsck.fat 4.2 (2021-01-31) Dec 13 03:56:26.515427 systemd-fsck[1392]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 03:56:26.516438 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 03:56:26.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.536332 systemd[1]: Mounting boot.mount... 
Dec 13 03:56:26.540430 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Dec 13 03:56:26.541782 systemd-networkd[1318]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Dec 13 03:56:26.548668 systemd[1]: Mounted boot.mount. Dec 13 03:56:26.571476 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 03:56:26.571508 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:56:26.596553 systemd[1]: Finished systemd-boot-update.service. Dec 13 03:56:26.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.606991 (sd-sysext)[1396]: Using extensions 'kubernetes'. Dec 13 03:56:26.607176 (sd-sysext)[1396]: Merged extensions into '/usr'. Dec 13 03:56:26.616871 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:26.617575 systemd[1]: Mounting usr-share-oem.mount... Dec 13 03:56:26.624648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:56:26.625255 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:56:26.633014 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:56:26.640016 systemd[1]: Starting modprobe@loop.service... Dec 13 03:56:26.646547 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:56:26.646613 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:56:26.646677 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:26.648242 systemd[1]: Mounted usr-share-oem.mount. Dec 13 03:56:26.655694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:56:26.655756 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:56:26.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.663733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:56:26.663793 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:56:26.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.671805 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:56:26.671867 systemd[1]: Finished modprobe@loop.service. 
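The 'kubernetes' system extension that sd-sysext merges into /usr above is only activated because its extension-release metadata matches the host OS. A sketch of what that file can look like on Flatcar (the values are assumptions, not read from this image):

    # /usr/lib/extension-release.d/extension-release.kubernetes  (hypothetical)
    # Must match the host's os-release ID, or declare a sysext level it supports.
    ID=flatcar
    SYSEXT_LEVEL=1.0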
Dec 13 03:56:26.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.679809 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:56:26.679884 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:56:26.680532 systemd[1]: Finished systemd-sysext.service. Dec 13 03:56:26.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.689343 systemd[1]: Starting ensure-sysext.service... Dec 13 03:56:26.705241 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 03:56:26.716473 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:56:26.716515 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Dec 13 03:56:26.724059 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 03:56:26.725627 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 03:56:26.727489 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 03:56:26.752376 systemd[1]: Reloading. Dec 13 03:56:26.759469 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Dec 13 03:56:26.759517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 03:56:26.777043 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 03:56:26.778552 systemd-networkd[1318]: bond0: Link UP Dec 13 03:56:26.778805 systemd-networkd[1318]: enp2s0f1np1: Link UP Dec 13 03:56:26.778959 systemd-networkd[1318]: enp2s0f1np1: Gained carrier Dec 13 03:56:26.780110 systemd-networkd[1318]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:69:1a.network. Dec 13 03:56:26.784009 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2024-12-13T03:56:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:56:26.784028 /usr/lib/systemd/system-generators/torcx-generator[1422]: time="2024-12-13T03:56:26Z" level=info msg="torcx already run" Dec 13 03:56:26.820465 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 03:56:26.820512 kernel: bond0: active interface up! Dec 13 03:56:26.855729 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:56:26.855737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
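systemd warns above that locksmithd.service still uses the legacy CPUShares= and MemoryLimit= directives. A drop-in of roughly this shape would migrate it to the unified-hierarchy equivalents (the concrete values are placeholders, not taken from the shipped unit):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf  (hypothetical)
    [Service]
    # Empty assignments clear the legacy settings before the replacements apply.
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=128M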
Dec 13 03:56:26.856458 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:56:26.866780 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:56:26.906000 audit: BPF prog-id=27 op=LOAD Dec 13 03:56:26.906000 audit: BPF prog-id=18 op=UNLOAD Dec 13 03:56:26.907000 audit: BPF prog-id=28 op=LOAD Dec 13 03:56:26.907000 audit: BPF prog-id=29 op=LOAD Dec 13 03:56:26.907000 audit: BPF prog-id=19 op=UNLOAD Dec 13 03:56:26.907000 audit: BPF prog-id=20 op=UNLOAD Dec 13 03:56:26.907000 audit: BPF prog-id=30 op=LOAD Dec 13 03:56:26.907000 audit: BPF prog-id=31 op=LOAD Dec 13 03:56:26.907000 audit: BPF prog-id=21 op=UNLOAD Dec 13 03:56:26.907000 audit: BPF prog-id=22 op=UNLOAD Dec 13 03:56:26.909861 systemd-networkd[1318]: enp2s0f0np0: Link UP Dec 13 03:56:26.908000 audit: BPF prog-id=32 op=LOAD Dec 13 03:56:26.908000 audit: BPF prog-id=23 op=UNLOAD Dec 13 03:56:26.910029 systemd-networkd[1318]: bond0: Gained carrier Dec 13 03:56:26.910118 systemd-networkd[1318]: enp2s0f0np0: Gained carrier Dec 13 03:56:26.908000 audit: BPF prog-id=33 op=LOAD Dec 13 03:56:26.908000 audit: BPF prog-id=24 op=UNLOAD Dec 13 03:56:26.908000 audit: BPF prog-id=34 op=LOAD Dec 13 03:56:26.908000 audit: BPF prog-id=35 op=LOAD Dec 13 03:56:26.908000 audit: BPF prog-id=25 op=UNLOAD Dec 13 03:56:26.908000 audit: BPF prog-id=26 op=UNLOAD Dec 13 03:56:26.911891 systemd[1]: Finished ldconfig.service. Dec 13 03:56:26.917770 systemd-networkd[1318]: enp2s0f1np1: Link DOWN Dec 13 03:56:26.917772 systemd-networkd[1318]: enp2s0f1np1: Lost carrier Dec 13 03:56:26.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.919055 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 03:56:26.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 03:56:26.938504 systemd[1]: Starting audit-rules.service... Dec 13 03:56:26.948428 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:26.953000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 03:56:26.953000 audit[1500]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc53b990a0 a2=420 a3=0 items=0 ppid=1485 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 03:56:26.953000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 03:56:26.955489 augenrules[1500]: No rules Dec 13 03:56:26.964196 systemd[1]: Starting clean-ca-certificates.service... Dec 13 03:56:26.971464 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:26.971493 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 03:56:27.009219 systemd[1]: Starting systemd-journal-catalog-update.service... 
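The docker.socket notice above means systemd silently rewrote a legacy /var/run path at load time; updating the unit itself removes the warning. A fragment consistent with the message (only the corrected ListenStream path comes from the log; the remaining keys are typical but assumed):

    # docker.socket  (hypothetical fragment beyond ListenStream)
    [Socket]
    ListenStream=/run/docker.sock
    SocketMode=0660
    SocketUser=root
    SocketGroup=docker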
Dec 13 03:56:27.012427 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.035115 systemd[1]: Starting systemd-resolved.service... Dec 13 03:56:27.035428 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.057427 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.057985 systemd[1]: Starting systemd-timesyncd.service... Dec 13 03:56:27.074068 systemd[1]: Starting systemd-update-utmp.service... Dec 13 03:56:27.080481 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.096000 systemd[1]: Finished audit-rules.service. Dec 13 03:56:27.102492 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.102515 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Dec 13 03:56:27.118473 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 03:56:27.122109 systemd-networkd[1318]: enp2s0f1np1: Link UP Dec 13 03:56:27.122112 systemd-networkd[1318]: enp2s0f1np1: Gained carrier Dec 13 03:56:27.137427 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Dec 13 03:56:27.168723 systemd[1]: Finished clean-ca-certificates.service. Dec 13 03:56:27.173429 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 03:56:27.181714 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 03:56:27.194090 systemd[1]: Finished systemd-update-utmp.service. Dec 13 03:56:27.203097 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 03:56:27.203724 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 03:56:27.211096 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 03:56:27.218029 systemd[1]: Starting modprobe@loop.service... Dec 13 03:56:27.222188 systemd-resolved[1507]: Positive Trust Anchors: Dec 13 03:56:27.222193 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 03:56:27.222212 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 03:56:27.224526 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 03:56:27.224599 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:56:27.225308 systemd[1]: Starting systemd-update-done.service... Dec 13 03:56:27.226083 systemd-resolved[1507]: Using system hostname 'ci-3510.3.6-a-840ab18f38'. Dec 13 03:56:27.232518 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 03:56:27.232998 systemd[1]: Started systemd-timesyncd.service. Dec 13 03:56:27.241728 systemd[1]: Started systemd-resolved.service. 
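systemd-resolved above installs the built-in root trust anchor (DS key tag 20326) and the standard negative trust anchors for private zones. Whether DNSSEC validation is actually enforced is a resolved.conf knob; a minimal hypothetical fragment:

    # /etc/systemd/resolved.conf  (hypothetical fragment)
    [Resolve]
    # allow-downgrade validates when the upstream supports DNSSEC,
    # but falls back silently when it does not.
    DNSSEC=allow-downgrade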
Dec 13 03:56:27.249709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 03:56:27.249776 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 03:56:27.257709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 03:56:27.257771 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 03:56:27.265692 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 03:56:27.265753 systemd[1]: Finished modprobe@loop.service. Dec 13 03:56:27.273694 systemd[1]: Finished systemd-update-done.service. Dec 13 03:56:27.287059 systemd[1]: Reached target network.target. Dec 13 03:56:27.297427 kernel: bond0: (slave enp2s0f1np1): link status up again after 100 ms Dec 13 03:56:27.313554 systemd[1]: Reached target nss-lookup.target. Dec 13 03:56:27.318426 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 03:56:27.325558 systemd[1]: Reached target time-set.target. Dec 13 03:56:27.333539 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 03:56:27.333611 systemd[1]: Reached target sysinit.target. Dec 13 03:56:27.341601 systemd[1]: Started motdgen.path. Dec 13 03:56:27.348577 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 03:56:27.358630 systemd[1]: Started logrotate.timer. Dec 13 03:56:27.365595 systemd[1]: Started mdadm.timer. Dec 13 03:56:27.372560 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 03:56:27.380525 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 03:56:27.380585 systemd[1]: Reached target paths.target. Dec 13 03:56:27.387551 systemd[1]: Reached target timers.target. Dec 13 03:56:27.394675 systemd[1]: Listening on dbus.socket. Dec 13 03:56:27.402073 systemd[1]: Starting docker.socket... Dec 13 03:56:27.410006 systemd[1]: Listening on sshd.socket. Dec 13 03:56:27.416594 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:56:27.416660 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 03:56:27.418284 systemd[1]: Listening on docker.socket. Dec 13 03:56:27.426360 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 03:56:27.426420 systemd[1]: Reached target sockets.target. Dec 13 03:56:27.434550 systemd[1]: Reached target basic.target. Dec 13 03:56:27.441591 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:27.441643 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:56:27.441691 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 03:56:27.442224 systemd[1]: Starting containerd.service... Dec 13 03:56:27.449014 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 03:56:27.458091 systemd[1]: Starting coreos-metadata.service... Dec 13 03:56:27.465151 systemd[1]: Starting dbus.service... Dec 13 03:56:27.471380 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 03:56:27.476201 jq[1525]: false Dec 13 03:56:27.479062 systemd[1]: Starting extend-filesystems.service... 
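bond0 has now settled ("link status definitely up" on both ports). The unit files named earlier, 05-bond0.network and the per-MAC 10-*.network files, imply a netdev/network pair along these lines; a hypothetical reconstruction in which only the names, the 802.3ad mode, and the 200 ms down delay are inferred from the log, and the MII monitor interval is an assumption:

    # /etc/systemd/network/05-bond0.netdev  (hypothetical reconstruction)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
    MIIMonitorSec=0.1
    DownDelaySec=0.2

    # /etc/systemd/network/10-04:3f:72:d7:69:1a.network  (hypothetical)
    [Match]
    MACAddress=04:3f:72:d7:69:1a

    [Network]
    Bond=bond0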
Dec 13 03:56:27.480872 coreos-metadata[1518]: Dec 13 03:56:27.480 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:56:27.485516 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 03:56:27.485656 dbus-daemon[1524]: [system] SELinux support is enabled Dec 13 03:56:27.486318 systemd[1]: Starting modprobe@drm.service... Dec 13 03:56:27.487404 extend-filesystems[1526]: Found loop1 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda1 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda2 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda3 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found usr Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda4 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda6 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda7 Dec 13 03:56:27.508567 extend-filesystems[1526]: Found sda9 Dec 13 03:56:27.508567 extend-filesystems[1526]: Checking size of /dev/sda9 Dec 13 03:56:27.508567 extend-filesystems[1526]: Resized partition /dev/sda9 Dec 13 03:56:27.664550 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Dec 13 03:56:27.664592 coreos-metadata[1521]: Dec 13 03:56:27.489 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 03:56:27.495266 systemd[1]: Starting motdgen.service... Dec 13 03:56:27.664798 extend-filesystems[1536]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 03:56:27.527281 systemd[1]: Starting prepare-helm.service... Dec 13 03:56:27.541101 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 03:56:27.555054 systemd[1]: Starting sshd-keygen.service... Dec 13 03:56:27.569315 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 03:56:27.583462 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 03:56:27.680030 update_engine[1556]: I1213 03:56:27.644448 1556 main.cc:92] Flatcar Update Engine starting Dec 13 03:56:27.680030 update_engine[1556]: I1213 03:56:27.648063 1556 update_check_scheduler.cc:74] Next update check in 10m24s Dec 13 03:56:27.584577 systemd[1]: Starting tcsd.service... Dec 13 03:56:27.680252 jq[1557]: true Dec 13 03:56:27.595828 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 03:56:27.596352 systemd[1]: Starting update-engine.service... Dec 13 03:56:27.610307 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 03:56:27.625456 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 03:56:27.626563 systemd[1]: Started dbus.service. Dec 13 03:56:27.642347 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 03:56:27.642451 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 03:56:27.642686 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 03:56:27.642759 systemd[1]: Finished modprobe@drm.service. Dec 13 03:56:27.656719 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 03:56:27.656804 systemd[1]: Finished motdgen.service. Dec 13 03:56:27.672109 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
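update_engine comes up above and schedules its first check for roughly ten minutes out. On Flatcar the post-update reboot policy is locksmith's job, configured in /etc/flatcar/update.conf; a sketch in which the strategy value echoes the locksmithd startup line further below and the group is an assumption:

    # /etc/flatcar/update.conf  (hypothetical)
    GROUP=stable
    REBOOT_STRATEGY=reboot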
Dec 13 03:56:27.672202 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 03:56:27.690286 jq[1561]: true Dec 13 03:56:27.690436 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 03:56:27.691135 systemd[1]: Finished ensure-sysext.service. Dec 13 03:56:27.699499 env[1562]: time="2024-12-13T03:56:27.699458769Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 03:56:27.704650 systemd[1]: Finished sshd-keygen.service. Dec 13 03:56:27.708454 env[1562]: time="2024-12-13T03:56:27.708405555Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 03:56:27.708493 env[1562]: time="2024-12-13T03:56:27.708484685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709195 env[1562]: time="2024-12-13T03:56:27.709152049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709195 env[1562]: time="2024-12-13T03:56:27.709167236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709305 env[1562]: time="2024-12-13T03:56:27.709290964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709327 env[1562]: time="2024-12-13T03:56:27.709307159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709327 env[1562]: time="2024-12-13T03:56:27.709318508Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 03:56:27.709358 env[1562]: time="2024-12-13T03:56:27.709327382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709391 env[1562]: time="2024-12-13T03:56:27.709382339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709591 env[1562]: time="2024-12-13T03:56:27.709543498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709627 env[1562]: time="2024-12-13T03:56:27.709607982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 03:56:27.709627 env[1562]: time="2024-12-13T03:56:27.709619448Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 03:56:27.709661 env[1562]: time="2024-12-13T03:56:27.709645375Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 03:56:27.709661 env[1562]: time="2024-12-13T03:56:27.709653618Z" level=info msg="metadata content store policy set" policy=shared Dec 13 03:56:27.712623 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. 
Dec 13 03:56:27.712709 systemd[1]: Condition check resulted in tcsd.service being skipped. Dec 13 03:56:27.712958 tar[1559]: linux-amd64/helm Dec 13 03:56:27.718472 systemd[1]: Started update-engine.service. Dec 13 03:56:27.724073 env[1562]: time="2024-12-13T03:56:27.724059376Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 03:56:27.724117 env[1562]: time="2024-12-13T03:56:27.724075437Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 03:56:27.724117 env[1562]: time="2024-12-13T03:56:27.724084088Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 03:56:27.724117 env[1562]: time="2024-12-13T03:56:27.724102074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.724117 env[1562]: time="2024-12-13T03:56:27.724110163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724119045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724126353Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724134641Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724141593Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724148762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724155348Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.725652 env[1562]: time="2024-12-13T03:56:27.724161821Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 03:56:27.725988 env[1562]: time="2024-12-13T03:56:27.725978262Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 03:56:27.726033 env[1562]: time="2024-12-13T03:56:27.726024779Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 03:56:27.726206 env[1562]: time="2024-12-13T03:56:27.726190016Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 03:56:27.726249 env[1562]: time="2024-12-13T03:56:27.726214165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726249 env[1562]: time="2024-12-13T03:56:27.726223195Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726253099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726261164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726268445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726276142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726282823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726289382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726295588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.726306 env[1562]: time="2024-12-13T03:56:27.726302457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.728248 env[1562]: time="2024-12-13T03:56:27.726314637Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 03:56:27.727359 systemd[1]: Starting issuegen.service... Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728291577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728305217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728316130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728325151Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728334175Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728342371Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728354008Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 03:56:27.728407 env[1562]: time="2024-12-13T03:56:27.728377113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 03:56:27.728617 bash[1600]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:56:27.728740 env[1562]: time="2024-12-13T03:56:27.728501495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 03:56:27.728740 env[1562]: time="2024-12-13T03:56:27.728533990Z" level=info msg="Connect containerd service" Dec 13 03:56:27.728740 env[1562]: time="2024-12-13T03:56:27.728563038Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729104204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729196664Z" level=info msg="Start subscribing containerd event" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729241802Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729255451Z" level=info msg="Start recovering state" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729276415Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729298447Z" level=info msg="Start event monitor" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729308513Z" level=info msg="containerd successfully booted in 0.030497s" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729315017Z" level=info msg="Start snapshots syncer" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729324088Z" level=info msg="Start cni network conf syncer for default" Dec 13 03:56:27.730485 env[1562]: time="2024-12-13T03:56:27.729328651Z" level=info msg="Start streaming server" Dec 13 03:56:27.736523 systemd[1]: Started locksmithd.service. Dec 13 03:56:27.743504 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 03:56:27.743521 systemd[1]: Reached target system-config.target. Dec 13 03:56:27.752840 systemd[1]: Starting systemd-logind.service... Dec 13 03:56:27.759482 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 03:56:27.759509 systemd[1]: Reached target user-config.target. Dec 13 03:56:27.767606 systemd[1]: Started containerd.service. Dec 13 03:56:27.774673 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 03:56:27.777275 systemd-logind[1614]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 03:56:27.777286 systemd-logind[1614]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 03:56:27.777302 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 03:56:27.777439 systemd-logind[1614]: New seat seat0. Dec 13 03:56:27.784671 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 03:56:27.784752 systemd[1]: Finished issuegen.service. Dec 13 03:56:27.791657 systemd[1]: Started systemd-logind.service. Dec 13 03:56:27.793826 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 03:56:27.800349 systemd[1]: Starting systemd-user-sessions.service... Dec 13 03:56:27.808698 systemd[1]: Finished systemd-user-sessions.service. Dec 13 03:56:27.817287 systemd[1]: Started getty@tty1.service. Dec 13 03:56:27.824188 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 03:56:27.832568 systemd[1]: Reached target getty.target. Dec 13 03:56:27.967899 tar[1559]: linux-amd64/LICENSE Dec 13 03:56:27.967899 tar[1559]: linux-amd64/README.md Dec 13 03:56:27.970449 systemd[1]: Finished prepare-helm.service. Dec 13 03:56:27.993479 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Dec 13 03:56:28.022156 extend-filesystems[1536]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 03:56:28.022156 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 03:56:28.022156 extend-filesystems[1536]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Dec 13 03:56:28.063615 extend-filesystems[1526]: Resized filesystem in /dev/sda9 Dec 13 03:56:28.063615 extend-filesystems[1526]: Found sdb Dec 13 03:56:28.022581 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 03:56:28.022660 systemd[1]: Finished extend-filesystems.service. Dec 13 03:56:28.710574 systemd-networkd[1318]: bond0: Gained IPv6LL Dec 13 03:56:28.711686 systemd[1]: Finished systemd-networkd-wait-online.service. 
Dec 13 03:56:28.721806 systemd[1]: Reached target network-online.target. Dec 13 03:56:28.732483 systemd[1]: Starting kubelet.service... Dec 13 03:56:29.414505 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 03:56:29.522177 systemd[1]: Started kubelet.service. Dec 13 03:56:30.024725 kubelet[1633]: E1213 03:56:30.024673 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:56:30.025731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:56:30.025800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:56:32.903976 login[1620]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Dec 13 03:56:32.910547 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 03:56:32.918001 systemd[1]: Created slice user-500.slice. Dec 13 03:56:32.918628 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 03:56:32.919656 systemd-logind[1614]: New session 1 of user core. Dec 13 03:56:32.923982 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 03:56:32.924691 systemd[1]: Starting user@500.service... Dec 13 03:56:32.926729 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:33.019442 systemd[1648]: Queued start job for default target default.target. Dec 13 03:56:33.019672 systemd[1648]: Reached target paths.target. Dec 13 03:56:33.019684 systemd[1648]: Reached target sockets.target. Dec 13 03:56:33.019691 systemd[1648]: Reached target timers.target. Dec 13 03:56:33.019698 systemd[1648]: Reached target basic.target. Dec 13 03:56:33.019716 systemd[1648]: Reached target default.target. Dec 13 03:56:33.019731 systemd[1648]: Startup finished in 89ms. Dec 13 03:56:33.019775 systemd[1]: Started user@500.service. Dec 13 03:56:33.020328 systemd[1]: Started session-1.scope. Dec 13 03:56:33.383774 coreos-metadata[1518]: Dec 13 03:56:33.383 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 03:56:33.384538 coreos-metadata[1521]: Dec 13 03:56:33.383 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 03:56:33.904851 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 03:56:33.916147 systemd-logind[1614]: New session 2 of user core. Dec 13 03:56:33.918565 systemd[1]: Started session-2.scope. Dec 13 03:56:34.383970 coreos-metadata[1518]: Dec 13 03:56:34.383 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 03:56:34.384746 coreos-metadata[1521]: Dec 13 03:56:34.383 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 03:56:34.917476 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Dec 13 03:56:34.917643 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Dec 13 03:56:35.492395 systemd[1]: Created slice system-sshd.slice. Dec 13 03:56:35.493386 systemd[1]: Started sshd@0-145.40.90.151:22-139.178.68.195:41996.service. 
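kubelet exits immediately above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written during cluster bootstrap, so failures before that point are expected. The "Scheduled restart job" retry roughly ten seconds later is plain systemd restart policy, reproducible with a drop-in of this shape (a sketch, not the unit actually installed here; the interval matches the gap between failure and restart in this log):

    # /etc/systemd/system/kubelet.service.d/10-restart.conf  (hypothetical)
    [Service]
    Restart=always
    RestartSec=10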
Dec 13 03:56:35.549572 sshd[1669]: Accepted publickey for core from 139.178.68.195 port 41996 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:35.550416 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:35.553204 systemd-logind[1614]: New session 3 of user core. Dec 13 03:56:35.553820 systemd[1]: Started session-3.scope. Dec 13 03:56:35.606905 systemd[1]: Started sshd@1-145.40.90.151:22-139.178.68.195:41998.service. Dec 13 03:56:35.643221 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 41998 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:35.643917 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:35.646246 systemd-logind[1614]: New session 4 of user core. Dec 13 03:56:35.646753 systemd[1]: Started session-4.scope. Dec 13 03:56:35.661935 systemd-timesyncd[1508]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Dec 13 03:56:35.661960 systemd-timesyncd[1508]: Initial clock synchronization to Fri 2024-12-13 03:56:35.996043 UTC. Dec 13 03:56:35.700073 sshd[1674]: pam_unix(sshd:session): session closed for user core Dec 13 03:56:35.701801 systemd[1]: sshd@1-145.40.90.151:22-139.178.68.195:41998.service: Deactivated successfully. Dec 13 03:56:35.702112 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 03:56:35.702383 systemd-logind[1614]: Session 4 logged out. Waiting for processes to exit. Dec 13 03:56:35.702923 systemd[1]: Started sshd@2-145.40.90.151:22-139.178.68.195:47538.service. Dec 13 03:56:35.703335 systemd-logind[1614]: Removed session 4. Dec 13 03:56:35.740066 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 47538 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:35.741248 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:35.745190 systemd-logind[1614]: New session 5 of user core. Dec 13 03:56:35.746312 systemd[1]: Started session-5.scope. Dec 13 03:56:35.804233 sshd[1680]: pam_unix(sshd:session): session closed for user core Dec 13 03:56:35.805515 systemd[1]: sshd@2-145.40.90.151:22-139.178.68.195:47538.service: Deactivated successfully. Dec 13 03:56:35.805902 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 03:56:35.806278 systemd-logind[1614]: Session 5 logged out. Waiting for processes to exit. Dec 13 03:56:35.806957 systemd-logind[1614]: Removed session 5. Dec 13 03:56:35.942464 coreos-metadata[1518]: Dec 13 03:56:35.942 INFO Fetch successful Dec 13 03:56:36.024081 unknown[1518]: wrote ssh authorized keys file for user: core Dec 13 03:56:36.037259 update-ssh-keys[1686]: Updated "/home/core/.ssh/authorized_keys" Dec 13 03:56:36.037509 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 03:56:36.198025 coreos-metadata[1521]: Dec 13 03:56:36.197 INFO Fetch successful Dec 13 03:56:36.280394 systemd[1]: Finished coreos-metadata.service. Dec 13 03:56:36.281300 systemd[1]: Started packet-phone-home.service. Dec 13 03:56:36.281427 systemd[1]: Reached target multi-user.target. Dec 13 03:56:36.282129 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 03:56:36.286690 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 03:56:36.286774 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
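systemd-timesyncd reports its first successful synchronization against 0.flatcar.pool.ntp.org above. The server list is a timesyncd.conf setting; a minimal hypothetical fragment (the primary server name is an assumption):

    # /etc/systemd/timesyncd.conf  (hypothetical fragment)
    [Time]
    NTP=time.example.com
    FallbackNTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org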
Dec 13 03:56:36.286959 curl[1689]: [curl progress-meter output from packet-phone-home elided: three zero-byte samples, no payload shown] Dec 13 03:56:36.286934 systemd[1]: Startup finished in 2.035s (kernel) + 21.679s (initrd) + 15.762s (userspace) = 39.477s. Dec 13 03:56:37.130180 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 03:56:40.135418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 03:56:40.136003 systemd[1]: Stopped kubelet.service. Dec 13 03:56:40.139315 systemd[1]: Starting kubelet.service... Dec 13 03:56:40.340996 systemd[1]: Started kubelet.service. Dec 13 03:56:40.382804 kubelet[1695]: E1213 03:56:40.382748 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:56:40.385372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:56:40.385473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 03:56:46.054094 systemd[1]: Started sshd@3-145.40.90.151:22-139.178.68.195:39298.service. Dec 13 03:56:46.090852 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 39298 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:46.091535 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:46.093968 systemd-logind[1614]: New session 6 of user core. Dec 13 03:56:46.094377 systemd[1]: Started session-6.scope. Dec 13 03:56:46.147152 sshd[1712]: pam_unix(sshd:session): session closed for user core Dec 13 03:56:46.148857 systemd[1]: sshd@3-145.40.90.151:22-139.178.68.195:39298.service: Deactivated successfully. Dec 13 03:56:46.149203 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 03:56:46.149476 systemd-logind[1614]: Session 6 logged out. Waiting for processes to exit. Dec 13 03:56:46.150012 systemd[1]: Started sshd@4-145.40.90.151:22-139.178.68.195:39306.service. Dec 13 03:56:46.150413 systemd-logind[1614]: Removed session 6. Dec 13 03:56:46.186957 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 39306 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:46.187836 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:46.190938 systemd-logind[1614]: New session 7 of user core. Dec 13 03:56:46.191500 systemd[1]: Started session-7.scope. Dec 13 03:56:46.243614 sshd[1718]: pam_unix(sshd:session): session closed for user core Dec 13 03:56:46.250273 systemd[1]: sshd@4-145.40.90.151:22-139.178.68.195:39306.service: Deactivated successfully. Dec 13 03:56:46.251913 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 03:56:46.253656 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit. Dec 13 03:56:46.256338 systemd[1]: Started sshd@5-145.40.90.151:22-139.178.68.195:39310.service. Dec 13 03:56:46.258884 systemd-logind[1614]: Removed session 7.
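"Scheduled restart job, restart counter is at 1" is systemd's Restart= machinery re-launching the failed kubelet; the ten-second gap between the exit at 03:56:30 and the restart at 03:56:40 is consistent with RestartSec=10. A drop-in of roughly that shape would reproduce the cadence (the values actually shipped in this image's unit are not visible in the log):

    # Assumed settings; shown only to explain the observed restart loop.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload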
Dec 13 03:56:46.297637 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 39310 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:46.298467 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:46.301178 systemd-logind[1614]: New session 8 of user core. Dec 13 03:56:46.301823 systemd[1]: Started session-8.scope. Dec 13 03:56:46.363349 sshd[1724]: pam_unix(sshd:session): session closed for user core Dec 13 03:56:46.370911 systemd[1]: sshd@5-145.40.90.151:22-139.178.68.195:39310.service: Deactivated successfully. Dec 13 03:56:46.372546 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 03:56:46.374274 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit. Dec 13 03:56:46.377062 systemd[1]: Started sshd@6-145.40.90.151:22-139.178.68.195:39316.service. Dec 13 03:56:46.379455 systemd-logind[1614]: Removed session 8. Dec 13 03:56:46.417610 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 39316 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 03:56:46.418407 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 03:56:46.421141 systemd-logind[1614]: New session 9 of user core. Dec 13 03:56:46.421657 systemd[1]: Started session-9.scope. Dec 13 03:56:46.511007 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 03:56:46.511699 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 03:56:46.536788 systemd[1]: Starting docker.service... Dec 13 03:56:46.554165 env[1748]: time="2024-12-13T03:56:46.554106232Z" level=info msg="Starting up" Dec 13 03:56:46.554707 env[1748]: time="2024-12-13T03:56:46.554671810Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 03:56:46.554707 env[1748]: time="2024-12-13T03:56:46.554681664Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 03:56:46.554707 env[1748]: time="2024-12-13T03:56:46.554693354Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 03:56:46.554707 env[1748]: time="2024-12-13T03:56:46.554699653Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 03:56:46.555570 env[1748]: time="2024-12-13T03:56:46.555559349Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 03:56:46.555570 env[1748]: time="2024-12-13T03:56:46.555567690Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 03:56:46.555637 env[1748]: time="2024-12-13T03:56:46.555575464Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 03:56:46.555637 env[1748]: time="2024-12-13T03:56:46.555580696Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 03:56:46.568311 env[1748]: time="2024-12-13T03:56:46.568271493Z" level=info msg="Loading containers: start." Dec 13 03:56:46.710467 kernel: Initializing XFRM netlink socket Dec 13 03:56:46.751227 env[1748]: time="2024-12-13T03:56:46.751204614Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 03:56:46.798712 systemd-networkd[1318]: docker0: Link UP Dec 13 03:56:46.820311 env[1748]: time="2024-12-13T03:56:46.820273896Z" level=info msg="Loading containers: done." Dec 13 03:56:46.837835 env[1748]: time="2024-12-13T03:56:46.837704552Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 03:56:46.838318 env[1748]: time="2024-12-13T03:56:46.838236343Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 03:56:46.838655 env[1748]: time="2024-12-13T03:56:46.838564325Z" level=info msg="Daemon has completed initialization" Dec 13 03:56:46.846205 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck890878830-merged.mount: Deactivated successfully. Dec 13 03:56:46.871270 systemd[1]: Started docker.service. Dec 13 03:56:46.888136 env[1748]: time="2024-12-13T03:56:46.888042373Z" level=info msg="API listen on /run/docker.sock" Dec 13 03:56:47.253063 systemd[1]: Started sshd@7-145.40.90.151:22-92.27.157.252:43062.service. Dec 13 03:56:48.098466 sshd[1886]: Invalid user yqz from 92.27.157.252 port 43062 Dec 13 03:56:48.105357 sshd[1886]: pam_faillock(sshd:auth): User unknown Dec 13 03:56:48.106398 sshd[1886]: pam_unix(sshd:auth): check pass; user unknown Dec 13 03:56:48.106518 sshd[1886]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.27.157.252 Dec 13 03:56:48.107395 sshd[1886]: pam_faillock(sshd:auth): User unknown Dec 13 03:56:48.173404 env[1562]: time="2024-12-13T03:56:48.173274120Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 03:56:49.007836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295219795.mount: Deactivated successfully. 
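dockerd's two-line bridge message above is informational: docker0 received the default 172.17.0.0/16 because no --bip was given. Had that range collided with the host's networks, the preferred bridge address could be pinned in the daemon config, for example (the CIDR below is arbitrary; restart docker afterwards):

    # Illustrative /etc/docker/daemon.json using the real "bip" option.
    cat >/etc/docker/daemon.json <<'EOF'
    { "bip": "10.200.0.1/24" }
    EOF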
Dec 13 03:56:50.010815 sshd[1886]: Failed password for invalid user yqz from 92.27.157.252 port 43062 ssh2 Dec 13 03:56:50.179569 env[1562]: time="2024-12-13T03:56:50.179505487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:50.180710 env[1562]: time="2024-12-13T03:56:50.180685860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:50.182309 env[1562]: time="2024-12-13T03:56:50.182263773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:50.183625 env[1562]: time="2024-12-13T03:56:50.183584703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:50.184054 env[1562]: time="2024-12-13T03:56:50.184007876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 03:56:50.185232 env[1562]: time="2024-12-13T03:56:50.185199123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 03:56:50.590261 sshd[1886]: Received disconnect from 92.27.157.252 port 43062:11: Bye Bye [preauth] Dec 13 03:56:50.590261 sshd[1886]: Disconnected from invalid user yqz 92.27.157.252 port 43062 [preauth] Dec 13 03:56:50.592806 systemd[1]: sshd@7-145.40.90.151:22-92.27.157.252:43062.service: Deactivated successfully. Dec 13 03:56:50.595997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 03:56:50.596343 systemd[1]: Stopped kubelet.service. Dec 13 03:56:50.597110 systemd[1]: Starting kubelet.service... Dec 13 03:56:50.777503 systemd[1]: Started kubelet.service. Dec 13 03:56:50.794800 kubelet[1910]: E1213 03:56:50.794748 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 03:56:50.795784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 03:56:50.795854 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
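The sshd@7 exchange is an opportunistic probe: an invalid user ("yqz"), one failed password, then a preauth disconnect, all recorded by pam_faillock/pam_unix before systemd retires the per-connection service. Every legitimate session in this log authenticates with a public key, so password authentication could be disabled with stock OpenSSH directives (appended to /etc/ssh/sshd_config, then sshd reloaded; the exact unit name varies by image):

    # Standard sshd_config options; key-based logins are unaffected.
    PasswordAuthentication no
    KbdInteractiveAuthentication no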
Dec 13 03:56:51.715868 env[1562]: time="2024-12-13T03:56:51.715799367Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:51.716395 env[1562]: time="2024-12-13T03:56:51.716354355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:51.717703 env[1562]: time="2024-12-13T03:56:51.717664327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:51.718507 env[1562]: time="2024-12-13T03:56:51.718421597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:51.718958 env[1562]: time="2024-12-13T03:56:51.718912608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 03:56:51.719556 env[1562]: time="2024-12-13T03:56:51.719510188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 03:56:52.897816 env[1562]: time="2024-12-13T03:56:52.897790959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:52.898390 env[1562]: time="2024-12-13T03:56:52.898375302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:52.900314 env[1562]: time="2024-12-13T03:56:52.900257660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:52.901568 env[1562]: time="2024-12-13T03:56:52.901519589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:52.902437 env[1562]: time="2024-12-13T03:56:52.902386748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 03:56:52.902770 env[1562]: time="2024-12-13T03:56:52.902719056Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 03:56:53.873152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474121035.mount: Deactivated successfully. 
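Each PullImage round-trip above resolves a tag to a digest-pinned reference (the sha256: IDs in the return values) and stores the image in containerd's k8s.io namespace. The result can be listed from the host with crictl, assuming it is installed and that containerd is on its default socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io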
Dec 13 03:56:54.260269 env[1562]: time="2024-12-13T03:56:54.260221064Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:54.260722 env[1562]: time="2024-12-13T03:56:54.260687217Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:54.261534 env[1562]: time="2024-12-13T03:56:54.261486124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:54.262208 env[1562]: time="2024-12-13T03:56:54.262174010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:54.262798 env[1562]: time="2024-12-13T03:56:54.262751793Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 03:56:54.263168 env[1562]: time="2024-12-13T03:56:54.263116935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 03:56:54.811280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262876355.mount: Deactivated successfully. Dec 13 03:56:55.483719 env[1562]: time="2024-12-13T03:56:55.483665663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:55.484385 env[1562]: time="2024-12-13T03:56:55.484344291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:55.485449 env[1562]: time="2024-12-13T03:56:55.485407322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:55.486484 env[1562]: time="2024-12-13T03:56:55.486435072Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:55.486957 env[1562]: time="2024-12-13T03:56:55.486900296Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 03:56:55.487414 env[1562]: time="2024-12-13T03:56:55.487383782Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 03:56:56.001654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681163146.mount: Deactivated successfully. 
Dec 13 03:56:56.003124 env[1562]: time="2024-12-13T03:56:56.003087167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:56.003766 env[1562]: time="2024-12-13T03:56:56.003710676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:56.004479 env[1562]: time="2024-12-13T03:56:56.004421296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:56.005207 env[1562]: time="2024-12-13T03:56:56.005165373Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:56.005600 env[1562]: time="2024-12-13T03:56:56.005557134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 03:56:56.006067 env[1562]: time="2024-12-13T03:56:56.006040980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 03:56:56.589713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590816537.mount: Deactivated successfully. Dec 13 03:56:58.164504 env[1562]: time="2024-12-13T03:56:58.164449748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:58.165172 env[1562]: time="2024-12-13T03:56:58.165130862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:58.166348 env[1562]: time="2024-12-13T03:56:58.166302886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:58.167669 env[1562]: time="2024-12-13T03:56:58.167655199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:56:58.167999 env[1562]: time="2024-12-13T03:56:58.167958317Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 03:57:00.247387 systemd[1]: Stopped kubelet.service. Dec 13 03:57:00.248699 systemd[1]: Starting kubelet.service... Dec 13 03:57:00.263245 systemd[1]: Reloading. 
Dec 13 03:57:00.298805 /usr/lib/systemd/system-generators/torcx-generator[1995]: time="2024-12-13T03:57:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:57:00.298832 /usr/lib/systemd/system-generators/torcx-generator[1995]: time="2024-12-13T03:57:00Z" level=info msg="torcx already run" Dec 13 03:57:00.353834 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:57:00.353843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:57:00.366572 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:57:00.434650 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 03:57:00.434690 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 03:57:00.434788 systemd[1]: Stopped kubelet.service. Dec 13 03:57:00.435594 systemd[1]: Starting kubelet.service... Dec 13 03:57:00.633663 systemd[1]: Started kubelet.service. Dec 13 03:57:00.660277 kubelet[2059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:57:00.660277 kubelet[2059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:57:00.660277 kubelet[2059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
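The three flag deprecation warnings repeat kubelet's standing guidance: --container-runtime-endpoint and --volume-plugin-dir should live in the file passed via --config. The v1.31 kubelet logged here supports the equivalent KubeletConfiguration fields; a sketch, where the socket path is an assumption and the plugin directory is taken from the Flexvolume message further down:

    # Both are real KubeletConfiguration fields; the values are illustrative.
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/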
Dec 13 03:57:00.662572 kubelet[2059]: I1213 03:57:00.662524 2059 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:57:00.898485 kubelet[2059]: I1213 03:57:00.898410 2059 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 03:57:00.898485 kubelet[2059]: I1213 03:57:00.898428 2059 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:57:00.898577 kubelet[2059]: I1213 03:57:00.898572 2059 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 03:57:00.927862 kubelet[2059]: I1213 03:57:00.927790 2059 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 03:57:00.939003 kubelet[2059]: E1213 03:57:00.938901 2059 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://145.40.90.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:00.954298 kubelet[2059]: E1213 03:57:00.954202 2059 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 03:57:00.954298 kubelet[2059]: I1213 03:57:00.954268 2059 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 03:57:00.996027 kubelet[2059]: I1213 03:57:00.995939 2059 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 03:57:00.999106 kubelet[2059]: I1213 03:57:00.999029 2059 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 03:57:00.999399 kubelet[2059]: I1213 03:57:00.999332 2059 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 03:57:00.999924 kubelet[2059]: I1213 03:57:00.999403 2059 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-840ab18f38","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 03:57:00.999924 kubelet[2059]: I1213 03:57:00.999909 2059 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 03:57:00.999924 kubelet[2059]: I1213 03:57:00.999939 2059 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 03:57:01.000421 kubelet[2059]: I1213 03:57:01.000142 2059 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:57:01.011851 kubelet[2059]: I1213 03:57:01.011806 2059 kubelet.go:408] "Attempting to sync node with API server" Dec 13 03:57:01.012060 kubelet[2059]: I1213 03:57:01.011868 2059 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 03:57:01.012060 kubelet[2059]: I1213 03:57:01.011953 2059 kubelet.go:314] "Adding apiserver pod source" Dec 13 03:57:01.012060 kubelet[2059]: I1213 03:57:01.011986 2059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 03:57:01.064692 kubelet[2059]: W1213 03:57:01.064515 2059 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://145.40.90.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 145.40.90.151:6443: connect: connection refused Dec 13 03:57:01.064692 kubelet[2059]: E1213 03:57:01.064670 2059 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://145.40.90.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:01.085955 kubelet[2059]: I1213 03:57:01.085902 2059 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 03:57:01.087346 kubelet[2059]: W1213 03:57:01.087246 2059 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://145.40.90.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-840ab18f38&limit=500&resourceVersion=0": dial tcp 145.40.90.151:6443: connect: connection refused Dec 13 03:57:01.087551 kubelet[2059]: E1213 03:57:01.087375 2059 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://145.40.90.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-840ab18f38&limit=500&resourceVersion=0\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:01.091360 kubelet[2059]: I1213 03:57:01.091270 2059 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 03:57:01.092935 kubelet[2059]: W1213 03:57:01.092859 2059 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 03:57:01.094130 kubelet[2059]: I1213 03:57:01.094085 2059 server.go:1269] "Started kubelet" Dec 13 03:57:01.094354 kubelet[2059]: I1213 03:57:01.094226 2059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 03:57:01.094511 kubelet[2059]: I1213 03:57:01.094244 2059 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 03:57:01.094928 kubelet[2059]: I1213 03:57:01.094883 2059 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 03:57:01.099109 kubelet[2059]: I1213 03:57:01.099058 2059 server.go:460] "Adding debug handlers to kubelet server" Dec 13 03:57:01.107960 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 03:57:01.108524 kubelet[2059]: I1213 03:57:01.108447 2059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 03:57:01.108524 kubelet[2059]: I1213 03:57:01.108384 2059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 03:57:01.109509 kubelet[2059]: I1213 03:57:01.109471 2059 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 03:57:01.110283 kubelet[2059]: E1213 03:57:01.109089 2059 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-840ab18f38\" not found" Dec 13 03:57:01.110283 kubelet[2059]: I1213 03:57:01.110207 2059 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 03:57:01.110749 kubelet[2059]: I1213 03:57:01.110670 2059 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:57:01.146704 kubelet[2059]: W1213 03:57:01.146551 2059 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.151:6443: connect: connection refused Dec 13 03:57:01.147002 kubelet[2059]: E1213 03:57:01.146795 2059 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:01.147206 kubelet[2059]: E1213 03:57:01.146988 2059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-840ab18f38?timeout=10s\": dial tcp 145.40.90.151:6443: connect: connection refused" interval="200ms" Dec 13 03:57:01.148461 kubelet[2059]: I1213 03:57:01.147710 2059 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 03:57:01.152582 kubelet[2059]: I1213 03:57:01.152444 2059 factory.go:221] Registration of the containerd container factory successfully Dec 13 03:57:01.152582 kubelet[2059]: I1213 03:57:01.152492 2059 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:57:01.153113 kubelet[2059]: E1213 03:57:01.153062 2059 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 03:57:01.156258 kubelet[2059]: E1213 03:57:01.152104 2059 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.151:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.151:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-840ab18f38.1810a06712bc24a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-840ab18f38,UID:ci-3510.3.6-a-840ab18f38,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-840ab18f38,},FirstTimestamp:2024-12-13 03:57:01.094036642 +0000 UTC m=+0.457151618,LastTimestamp:2024-12-13 03:57:01.094036642 +0000 UTC m=+0.457151618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-840ab18f38,}" Dec 13 03:57:01.174397 kubelet[2059]: I1213 03:57:01.174338 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 03:57:01.175459 kubelet[2059]: I1213 03:57:01.175402 2059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 03:57:01.175459 kubelet[2059]: I1213 03:57:01.175450 2059 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:57:01.175637 kubelet[2059]: I1213 03:57:01.175472 2059 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 03:57:01.175637 kubelet[2059]: E1213 03:57:01.175521 2059 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:57:01.175933 kubelet[2059]: W1213 03:57:01.175871 2059 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://145.40.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 145.40.90.151:6443: connect: connection refused Dec 13 03:57:01.175933 kubelet[2059]: E1213 03:57:01.175914 2059 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://145.40.90.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:01.188008 kubelet[2059]: I1213 03:57:01.187991 2059 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:57:01.188008 kubelet[2059]: I1213 03:57:01.188004 2059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:57:01.188126 kubelet[2059]: I1213 03:57:01.188019 2059 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:57:01.189261 kubelet[2059]: I1213 03:57:01.189248 2059 policy_none.go:49] "None policy: Start" Dec 13 03:57:01.189688 kubelet[2059]: I1213 03:57:01.189673 2059 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:57:01.189741 kubelet[2059]: I1213 03:57:01.189692 2059 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:57:01.193569 systemd[1]: Created slice kubepods.slice. Dec 13 03:57:01.197422 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 03:57:01.200090 systemd[1]: Created slice kubepods-besteffort.slice. 
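The HardEvictionThresholds array in the container-manager dump a few entries back maps one-to-one onto kubelet's evictionHard setting; rewritten in config-file notation, the logged defaults are:

    # Same values as the JSON dump above, only the notation changes.
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"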
Dec 13 03:57:01.211188 kubelet[2059]: E1213 03:57:01.211145 2059 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-840ab18f38\" not found" Dec 13 03:57:01.212064 kubelet[2059]: I1213 03:57:01.212019 2059 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:57:01.212184 kubelet[2059]: I1213 03:57:01.212149 2059 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 03:57:01.212184 kubelet[2059]: I1213 03:57:01.212162 2059 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:57:01.212293 kubelet[2059]: I1213 03:57:01.212279 2059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:57:01.213069 kubelet[2059]: E1213 03:57:01.213052 2059 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-840ab18f38\" not found" Dec 13 03:57:01.298277 systemd[1]: Created slice kubepods-burstable-poddd35b18421b872994841d1d0422c1420.slice. Dec 13 03:57:01.311812 kubelet[2059]: I1213 03:57:01.311722 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" (UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.311812 kubelet[2059]: I1213 03:57:01.311795 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" (UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312119 kubelet[2059]: I1213 03:57:01.311849 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312119 kubelet[2059]: I1213 03:57:01.311896 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312119 kubelet[2059]: I1213 03:57:01.311950 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa1622841b957bd222340aaf38774aed-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-840ab18f38\" (UID: \"aa1622841b957bd222340aaf38774aed\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312119 kubelet[2059]: I1213 03:57:01.311997 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" 
(UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312119 kubelet[2059]: I1213 03:57:01.312044 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312625 kubelet[2059]: I1213 03:57:01.312085 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.312625 kubelet[2059]: I1213 03:57:01.312183 2059 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.315437 kubelet[2059]: I1213 03:57:01.315381 2059 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.316175 kubelet[2059]: E1213 03:57:01.316072 2059 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://145.40.90.151:6443/api/v1/nodes\": dial tcp 145.40.90.151:6443: connect: connection refused" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.329655 systemd[1]: Created slice kubepods-burstable-pod06e4c65c55f69d97d1662d5dc2f53a1a.slice. Dec 13 03:57:01.337569 systemd[1]: Created slice kubepods-burstable-podaa1622841b957bd222340aaf38774aed.slice. 
Dec 13 03:57:01.348902 kubelet[2059]: E1213 03:57:01.348787 2059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-840ab18f38?timeout=10s\": dial tcp 145.40.90.151:6443: connect: connection refused" interval="400ms" Dec 13 03:57:01.520390 kubelet[2059]: I1213 03:57:01.520213 2059 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.521095 kubelet[2059]: E1213 03:57:01.520990 2059 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://145.40.90.151:6443/api/v1/nodes\": dial tcp 145.40.90.151:6443: connect: connection refused" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.624060 env[1562]: time="2024-12-13T03:57:01.623922832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-840ab18f38,Uid:dd35b18421b872994841d1d0422c1420,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:01.635123 env[1562]: time="2024-12-13T03:57:01.635011848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-840ab18f38,Uid:06e4c65c55f69d97d1662d5dc2f53a1a,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:01.643249 env[1562]: time="2024-12-13T03:57:01.643163745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-840ab18f38,Uid:aa1622841b957bd222340aaf38774aed,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:01.750550 kubelet[2059]: E1213 03:57:01.750380 2059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-840ab18f38?timeout=10s\": dial tcp 145.40.90.151:6443: connect: connection refused" interval="800ms" Dec 13 03:57:01.925863 kubelet[2059]: I1213 03:57:01.925751 2059 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:01.926571 kubelet[2059]: E1213 03:57:01.926443 2059 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://145.40.90.151:6443/api/v1/nodes\": dial tcp 145.40.90.151:6443: connect: connection refused" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:02.150812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444391040.mount: Deactivated successfully. 
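The three RunPodSandbox calls create the pause sandboxes for the apiserver, controller-manager, and scheduler static pods, which is why a burst of pause:3.6 image events follows. Sandbox and container state can be watched from the host while this happens:

    crictl pods     # one sandbox per static pod
    crictl ps -a    # containers, including ones not yet running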
Dec 13 03:57:02.152441 env[1562]: time="2024-12-13T03:57:02.152358913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.153501 env[1562]: time="2024-12-13T03:57:02.153471873Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.155474 env[1562]: time="2024-12-13T03:57:02.155414013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.156382 env[1562]: time="2024-12-13T03:57:02.156328854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.159812 env[1562]: time="2024-12-13T03:57:02.159758162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.163249 env[1562]: time="2024-12-13T03:57:02.163183909Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.165857 env[1562]: time="2024-12-13T03:57:02.165801532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.166651 env[1562]: time="2024-12-13T03:57:02.166599798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.167465 env[1562]: time="2024-12-13T03:57:02.167431898Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.168251 env[1562]: time="2024-12-13T03:57:02.168197253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.170888 env[1562]: time="2024-12-13T03:57:02.170856123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.171690 env[1562]: time="2024-12-13T03:57:02.171660556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:02.176842 env[1562]: time="2024-12-13T03:57:02.176719756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:02.176842 env[1562]: time="2024-12-13T03:57:02.176766179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:02.176842 env[1562]: time="2024-12-13T03:57:02.176781947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:02.177041 env[1562]: time="2024-12-13T03:57:02.176963221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c11797153bfea47d57d82f161301d5993dfb739f8e6f7f59b57f6dffca997a7 pid=2114 runtime=io.containerd.runc.v2 Dec 13 03:57:02.178920 env[1562]: time="2024-12-13T03:57:02.178854601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:02.178920 env[1562]: time="2024-12-13T03:57:02.178895315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:02.178920 env[1562]: time="2024-12-13T03:57:02.178910318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:02.179106 env[1562]: time="2024-12-13T03:57:02.179067857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba3e20a8ab36910ade773a4354673f2db74b865b49043ae157db72d37558d763 pid=2125 runtime=io.containerd.runc.v2 Dec 13 03:57:02.180876 env[1562]: time="2024-12-13T03:57:02.180816299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:02.180876 env[1562]: time="2024-12-13T03:57:02.180857639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:02.181024 env[1562]: time="2024-12-13T03:57:02.180881995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:02.181076 env[1562]: time="2024-12-13T03:57:02.181035854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ed25a438895ee2d9f404917784476c1e41cf50095a27f4da60c6e944b7ecc03 pid=2139 runtime=io.containerd.runc.v2 Dec 13 03:57:02.192212 systemd[1]: Started cri-containerd-7c11797153bfea47d57d82f161301d5993dfb739f8e6f7f59b57f6dffca997a7.scope. Dec 13 03:57:02.193517 systemd[1]: Started cri-containerd-ba3e20a8ab36910ade773a4354673f2db74b865b49043ae157db72d37558d763.scope. Dec 13 03:57:02.197120 systemd[1]: Started cri-containerd-0ed25a438895ee2d9f404917784476c1e41cf50095a27f4da60c6e944b7ecc03.scope. 
Dec 13 03:57:02.226539 env[1562]: time="2024-12-13T03:57:02.226487951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-840ab18f38,Uid:aa1622841b957bd222340aaf38774aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c11797153bfea47d57d82f161301d5993dfb739f8e6f7f59b57f6dffca997a7\"" Dec 13 03:57:02.226994 env[1562]: time="2024-12-13T03:57:02.226975383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-840ab18f38,Uid:dd35b18421b872994841d1d0422c1420,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ed25a438895ee2d9f404917784476c1e41cf50095a27f4da60c6e944b7ecc03\"" Dec 13 03:57:02.227095 env[1562]: time="2024-12-13T03:57:02.227079253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-840ab18f38,Uid:06e4c65c55f69d97d1662d5dc2f53a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba3e20a8ab36910ade773a4354673f2db74b865b49043ae157db72d37558d763\"" Dec 13 03:57:02.228519 env[1562]: time="2024-12-13T03:57:02.228503412Z" level=info msg="CreateContainer within sandbox \"7c11797153bfea47d57d82f161301d5993dfb739f8e6f7f59b57f6dffca997a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 03:57:02.228644 env[1562]: time="2024-12-13T03:57:02.228629128Z" level=info msg="CreateContainer within sandbox \"ba3e20a8ab36910ade773a4354673f2db74b865b49043ae157db72d37558d763\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 03:57:02.228693 env[1562]: time="2024-12-13T03:57:02.228640499Z" level=info msg="CreateContainer within sandbox \"0ed25a438895ee2d9f404917784476c1e41cf50095a27f4da60c6e944b7ecc03\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 03:57:02.234889 env[1562]: time="2024-12-13T03:57:02.234841213Z" level=info msg="CreateContainer within sandbox \"0ed25a438895ee2d9f404917784476c1e41cf50095a27f4da60c6e944b7ecc03\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b223b994d2a21377164586bfd9c95b671e3ae5810f14d70b1922d9d3f5d8081b\"" Dec 13 03:57:02.235100 env[1562]: time="2024-12-13T03:57:02.235087652Z" level=info msg="StartContainer for \"b223b994d2a21377164586bfd9c95b671e3ae5810f14d70b1922d9d3f5d8081b\"" Dec 13 03:57:02.235879 env[1562]: time="2024-12-13T03:57:02.235863257Z" level=info msg="CreateContainer within sandbox \"7c11797153bfea47d57d82f161301d5993dfb739f8e6f7f59b57f6dffca997a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5bcda344e4c934cce9bdf52bec68c964d2e457e5e66790f8d752cffaf2f3015\"" Dec 13 03:57:02.236036 env[1562]: time="2024-12-13T03:57:02.236021717Z" level=info msg="StartContainer for \"a5bcda344e4c934cce9bdf52bec68c964d2e457e5e66790f8d752cffaf2f3015\"" Dec 13 03:57:02.236742 env[1562]: time="2024-12-13T03:57:02.236723965Z" level=info msg="CreateContainer within sandbox \"ba3e20a8ab36910ade773a4354673f2db74b865b49043ae157db72d37558d763\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e16ca4d7a685f7f157b8a9302fdb722a64fc7fee41a0eaa4dac6a007a6f13766\"" Dec 13 03:57:02.236915 env[1562]: time="2024-12-13T03:57:02.236902483Z" level=info msg="StartContainer for \"e16ca4d7a685f7f157b8a9302fdb722a64fc7fee41a0eaa4dac6a007a6f13766\"" Dec 13 03:57:02.244263 systemd[1]: Started cri-containerd-a5bcda344e4c934cce9bdf52bec68c964d2e457e5e66790f8d752cffaf2f3015.scope. 
Dec 13 03:57:02.244933 systemd[1]: Started cri-containerd-b223b994d2a21377164586bfd9c95b671e3ae5810f14d70b1922d9d3f5d8081b.scope. Dec 13 03:57:02.245488 systemd[1]: Started cri-containerd-e16ca4d7a685f7f157b8a9302fdb722a64fc7fee41a0eaa4dac6a007a6f13766.scope. Dec 13 03:57:02.253231 kubelet[2059]: W1213 03:57:02.253195 2059 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://145.40.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 145.40.90.151:6443: connect: connection refused Dec 13 03:57:02.253305 kubelet[2059]: E1213 03:57:02.253241 2059 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://145.40.90.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.151:6443: connect: connection refused" logger="UnhandledError" Dec 13 03:57:02.267992 env[1562]: time="2024-12-13T03:57:02.267959983Z" level=info msg="StartContainer for \"a5bcda344e4c934cce9bdf52bec68c964d2e457e5e66790f8d752cffaf2f3015\" returns successfully" Dec 13 03:57:02.268273 env[1562]: time="2024-12-13T03:57:02.268258315Z" level=info msg="StartContainer for \"b223b994d2a21377164586bfd9c95b671e3ae5810f14d70b1922d9d3f5d8081b\" returns successfully" Dec 13 03:57:02.269525 env[1562]: time="2024-12-13T03:57:02.269509835Z" level=info msg="StartContainer for \"e16ca4d7a685f7f157b8a9302fdb722a64fc7fee41a0eaa4dac6a007a6f13766\" returns successfully" Dec 13 03:57:02.728495 kubelet[2059]: I1213 03:57:02.728452 2059 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:02.838863 kubelet[2059]: E1213 03:57:02.838845 2059 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-840ab18f38\" not found" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:02.942587 kubelet[2059]: I1213 03:57:02.942565 2059 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:03.013629 kubelet[2059]: I1213 03:57:03.013540 2059 apiserver.go:52] "Watching apiserver" Dec 13 03:57:03.109744 kubelet[2059]: I1213 03:57:03.109642 2059 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 03:57:03.195481 kubelet[2059]: E1213 03:57:03.195393 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:03.195481 kubelet[2059]: E1213 03:57:03.195405 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:03.196072 kubelet[2059]: E1213 03:57:03.196018 2059 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.6-a-840ab18f38\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:04.202399 kubelet[2059]: W1213 03:57:04.202338 2059 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:57:05.597949 systemd[1]: Reloading. 
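Once the apiserver container started above is serving, registration succeeds ("Successfully registered node") and the connection-refused loop ends; the mirror-pod PriorityClass errors are transient, since system-node-critical is created during cluster bootstrap moments later, and the warnings.go complaint only notes that the dots in the node name feed into pod hostnames. With an admin kubeconfig, the node is now visible:

    kubectl get node ci-3510.3.6-a-840ab18f38 -o wide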
Dec 13 03:57:05.623212 /usr/lib/systemd/system-generators/torcx-generator[2390]: time="2024-12-13T03:57:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 03:57:05.623237 /usr/lib/systemd/system-generators/torcx-generator[2390]: time="2024-12-13T03:57:05Z" level=info msg="torcx already run" Dec 13 03:57:05.677862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 03:57:05.677871 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 03:57:05.689546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 03:57:05.757256 systemd[1]: Stopping kubelet.service... Dec 13 03:57:05.781344 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 03:57:05.781916 systemd[1]: Stopped kubelet.service. Dec 13 03:57:05.782042 systemd[1]: kubelet.service: Consumed 1.131s CPU time. Dec 13 03:57:05.785926 systemd[1]: Starting kubelet.service... Dec 13 03:57:05.994570 systemd[1]: Started kubelet.service. Dec 13 03:57:06.015222 kubelet[2454]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:57:06.015222 kubelet[2454]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 03:57:06.015222 kubelet[2454]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 03:57:06.015466 kubelet[2454]: I1213 03:57:06.015261 2454 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 03:57:06.018901 kubelet[2454]: I1213 03:57:06.018860 2454 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 03:57:06.018901 kubelet[2454]: I1213 03:57:06.018871 2454 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 03:57:06.019050 kubelet[2454]: I1213 03:57:06.019020 2454 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 03:57:06.019809 kubelet[2454]: I1213 03:57:06.019773 2454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
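With client rotation on, the restarted kubelet bootstraps from the existing credential at /var/lib/kubelet/pki/kubelet-client-current.pem, a symlink to the newest cert/key pair. A small stdlib sketch that reads that file and reports the certificate's remaining lifetime (the path is taken from the log; everything else is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            fmt.Println("read:", err)
            return
        }
        for {
            var block *pem.Block
            block, data = pem.Decode(data)
            if block == nil {
                break
            }
            if block.Type != "CERTIFICATE" {
                continue // the same file also holds the private key
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Println("parse:", err)
                return
            }
            fmt.Printf("subject=%s notAfter=%s remaining=%s\n",
                cert.Subject, cert.NotAfter.Format(time.RFC3339),
                time.Until(cert.NotAfter).Round(time.Minute))
        }
    }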
Dec 13 03:57:06.021018 kubelet[2454]: I1213 03:57:06.020981 2454 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 03:57:06.023554 kubelet[2454]: E1213 03:57:06.023505 2454 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 03:57:06.023554 kubelet[2454]: I1213 03:57:06.023522 2454 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 03:57:06.042409 kubelet[2454]: I1213 03:57:06.042364 2454 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 03:57:06.042500 kubelet[2454]: I1213 03:57:06.042445 2454 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 03:57:06.042548 kubelet[2454]: I1213 03:57:06.042525 2454 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 03:57:06.042706 kubelet[2454]: I1213 03:57:06.042545 2454 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.6-a-840ab18f38","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 03:57:06.042706 kubelet[2454]: I1213 03:57:06.042683 2454 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 03:57:06.042706 kubelet[2454]: I1213 03:57:06.042690 2454 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 03:57:06.042853 kubelet[2454]: I1213 03:57:06.042713 2454 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:57:06.042853 kubelet[2454]: I1213 03:57:06.042773 2454 kubelet.go:408] "Attempting to sync node with API server" Dec 13 03:57:06.042853 kubelet[2454]: I1213 03:57:06.042783 2454 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 03:57:06.042853 kubelet[2454]: I1213 
03:57:06.042817 2454 kubelet.go:314] "Adding apiserver pod source" Dec 13 03:57:06.042853 kubelet[2454]: I1213 03:57:06.042826 2454 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 03:57:06.043230 kubelet[2454]: I1213 03:57:06.043192 2454 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 03:57:06.043547 kubelet[2454]: I1213 03:57:06.043535 2454 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 03:57:06.043819 kubelet[2454]: I1213 03:57:06.043809 2454 server.go:1269] "Started kubelet" Dec 13 03:57:06.043896 kubelet[2454]: I1213 03:57:06.043873 2454 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 03:57:06.043939 kubelet[2454]: I1213 03:57:06.043872 2454 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 03:57:06.044071 kubelet[2454]: I1213 03:57:06.044060 2454 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 03:57:06.045051 kubelet[2454]: I1213 03:57:06.045029 2454 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 03:57:06.045372 kubelet[2454]: I1213 03:57:06.045351 2454 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 03:57:06.045579 kubelet[2454]: I1213 03:57:06.045560 2454 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 03:57:06.045579 kubelet[2454]: I1213 03:57:06.045575 2454 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 03:57:06.045721 kubelet[2454]: E1213 03:57:06.045174 2454 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-840ab18f38\" not found" Dec 13 03:57:06.046005 kubelet[2454]: E1213 03:57:06.045983 2454 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 03:57:06.046105 kubelet[2454]: I1213 03:57:06.046094 2454 factory.go:221] Registration of the systemd container factory successfully Dec 13 03:57:06.046153 kubelet[2454]: I1213 03:57:06.046094 2454 reconciler.go:26] "Reconciler: start to sync state" Dec 13 03:57:06.046907 kubelet[2454]: I1213 03:57:06.046881 2454 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 03:57:06.047391 kubelet[2454]: I1213 03:57:06.047377 2454 server.go:460] "Adding debug handlers to kubelet server" Dec 13 03:57:06.047659 kubelet[2454]: I1213 03:57:06.047645 2454 factory.go:221] Registration of the containerd container factory successfully Dec 13 03:57:06.052284 kubelet[2454]: I1213 03:57:06.052260 2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 03:57:06.052825 kubelet[2454]: I1213 03:57:06.052812 2454 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 03:57:06.052884 kubelet[2454]: I1213 03:57:06.052831 2454 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 03:57:06.052884 kubelet[2454]: I1213 03:57:06.052842 2454 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 03:57:06.052938 kubelet[2454]: E1213 03:57:06.052872 2454 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 03:57:06.062476 kubelet[2454]: I1213 03:57:06.062434 2454 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 03:57:06.062476 kubelet[2454]: I1213 03:57:06.062444 2454 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 03:57:06.062476 kubelet[2454]: I1213 03:57:06.062455 2454 state_mem.go:36] "Initialized new in-memory state store" Dec 13 03:57:06.062578 kubelet[2454]: I1213 03:57:06.062541 2454 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 03:57:06.062578 kubelet[2454]: I1213 03:57:06.062547 2454 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 03:57:06.062578 kubelet[2454]: I1213 03:57:06.062558 2454 policy_none.go:49] "None policy: Start" Dec 13 03:57:06.062798 kubelet[2454]: I1213 03:57:06.062761 2454 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 03:57:06.062798 kubelet[2454]: I1213 03:57:06.062770 2454 state_mem.go:35] "Initializing new in-memory state store" Dec 13 03:57:06.062853 kubelet[2454]: I1213 03:57:06.062834 2454 state_mem.go:75] "Updated machine memory state" Dec 13 03:57:06.064660 kubelet[2454]: I1213 03:57:06.064624 2454 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 03:57:06.064779 kubelet[2454]: I1213 03:57:06.064703 2454 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 03:57:06.064779 kubelet[2454]: I1213 03:57:06.064710 2454 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 03:57:06.064836 kubelet[2454]: I1213 03:57:06.064789 2454 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 03:57:06.162926 kubelet[2454]: W1213 03:57:06.162842 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:57:06.163839 kubelet[2454]: W1213 03:57:06.163797 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:57:06.164685 kubelet[2454]: W1213 03:57:06.164647 2454 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 03:57:06.164821 kubelet[2454]: E1213 03:57:06.164768 2454 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.170814 kubelet[2454]: I1213 03:57:06.170762 2454 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.181783 kubelet[2454]: I1213 03:57:06.181698 2454 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.182001 kubelet[2454]: I1213 03:57:06.181859 2454 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.247896 kubelet[2454]: I1213 03:57:06.247671 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.247896 kubelet[2454]: I1213 03:57:06.247762 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.247896 kubelet[2454]: I1213 03:57:06.247817 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.247896 kubelet[2454]: I1213 03:57:06.247865 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" (UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.248549 kubelet[2454]: I1213 03:57:06.247916 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" (UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.248549 kubelet[2454]: I1213 03:57:06.247964 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd35b18421b872994841d1d0422c1420-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-840ab18f38\" (UID: \"dd35b18421b872994841d1d0422c1420\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.248549 kubelet[2454]: I1213 03:57:06.248015 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.248549 kubelet[2454]: I1213 03:57:06.248064 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06e4c65c55f69d97d1662d5dc2f53a1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-840ab18f38\" (UID: \"06e4c65c55f69d97d1662d5dc2f53a1a\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.248549 kubelet[2454]: I1213 03:57:06.248114 2454 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa1622841b957bd222340aaf38774aed-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-840ab18f38\" (UID: \"aa1622841b957bd222340aaf38774aed\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-840ab18f38" Dec 13 03:57:06.616099 sudo[2496]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 03:57:06.616215 sudo[2496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 03:57:06.940732 sudo[2496]: pam_unix(sudo:session): session closed for user root Dec 13 03:57:07.043093 kubelet[2454]: I1213 03:57:07.043049 2454 apiserver.go:52] "Watching apiserver" Dec 13 03:57:07.046333 kubelet[2454]: I1213 03:57:07.046296 2454 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 03:57:07.068567 kubelet[2454]: I1213 03:57:07.068509 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-840ab18f38" podStartSLOduration=1.068500151 podStartE2EDuration="1.068500151s" podCreationTimestamp="2024-12-13 03:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:07.068445823 +0000 UTC m=+1.070818190" watchObservedRunningTime="2024-12-13 03:57:07.068500151 +0000 UTC m=+1.070872512" Dec 13 03:57:07.072944 kubelet[2454]: I1213 03:57:07.072887 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-840ab18f38" podStartSLOduration=3.072879179 podStartE2EDuration="3.072879179s" podCreationTimestamp="2024-12-13 03:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:07.07285547 +0000 UTC m=+1.075227832" watchObservedRunningTime="2024-12-13 03:57:07.072879179 +0000 UTC m=+1.075251546" Dec 13 03:57:07.077170 kubelet[2454]: I1213 03:57:07.077122 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-840ab18f38" podStartSLOduration=1.077117771 podStartE2EDuration="1.077117771s" podCreationTimestamp="2024-12-13 03:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:07.076949401 +0000 UTC m=+1.079321763" watchObservedRunningTime="2024-12-13 03:57:07.077117771 +0000 UTC m=+1.079490130" Dec 13 03:57:08.329483 sudo[1733]: pam_unix(sudo:session): session closed for user root Dec 13 03:57:08.330363 sshd[1730]: pam_unix(sshd:session): session closed for user core Dec 13 03:57:08.331810 systemd[1]: sshd@6-145.40.90.151:22-139.178.68.195:39316.service: Deactivated successfully. Dec 13 03:57:08.332252 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 03:57:08.332345 systemd[1]: session-9.scope: Consumed 3.581s CPU time. Dec 13 03:57:08.332735 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit. Dec 13 03:57:08.333350 systemd-logind[1614]: Removed session 9. 
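In the pod_startup_latency_tracker entries above, podStartSLOduration works out to watchObservedRunningTime minus podCreationTimestamp with the image-pull window subtracted; the pull timestamps are zero here because the control-plane images were already present. A sketch of that arithmetic using the kube-controller-manager timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the kube-controller-manager entry above.
        created, _ := time.Parse(time.RFC3339, "2024-12-13T03:57:06Z")
        running, _ := time.Parse(time.RFC3339Nano, "2024-12-13T03:57:07.068500151Z")
        var pullStart, pullEnd time.Time // zero values: no image pull happened
        slo := running.Sub(created) - pullEnd.Sub(pullStart)
        fmt.Println("podStartSLOduration =", slo) // prints 1.068500151s, matching the log
    }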
Dec 13 03:57:11.645261 kubelet[2454]: I1213 03:57:11.645160 2454 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 03:57:11.646545 kubelet[2454]: I1213 03:57:11.646412 2454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 03:57:11.646702 env[1562]: time="2024-12-13T03:57:11.645944702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 03:57:12.589362 systemd[1]: Created slice kubepods-besteffort-pod5f09b42b_0142_434f_9ef8_0b7fa93b82e7.slice. Dec 13 03:57:12.594870 kubelet[2454]: I1213 03:57:12.594830 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cni-path\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595026 kubelet[2454]: I1213 03:57:12.594890 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-bpf-maps\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595026 kubelet[2454]: I1213 03:57:12.594925 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-cgroup\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595026 kubelet[2454]: I1213 03:57:12.594960 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-kernel\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595026 kubelet[2454]: I1213 03:57:12.594989 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-hubble-tls\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595026 kubelet[2454]: I1213 03:57:12.595019 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5f09b42b-0142-434f-9ef8-0b7fa93b82e7-kube-proxy\") pod \"kube-proxy-q7px9\" (UID: \"5f09b42b-0142-434f-9ef8-0b7fa93b82e7\") " pod="kube-system/kube-proxy-q7px9" Dec 13 03:57:12.595357 kubelet[2454]: I1213 03:57:12.595045 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-hostproc\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595357 kubelet[2454]: I1213 03:57:12.595076 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwdwm\" (UniqueName: \"kubernetes.io/projected/5f09b42b-0142-434f-9ef8-0b7fa93b82e7-kube-api-access-nwdwm\") pod \"kube-proxy-q7px9\" (UID: 
\"5f09b42b-0142-434f-9ef8-0b7fa93b82e7\") " pod="kube-system/kube-proxy-q7px9" Dec 13 03:57:12.595357 kubelet[2454]: I1213 03:57:12.595105 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-run\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595558 kubelet[2454]: I1213 03:57:12.595365 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-etc-cni-netd\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595558 kubelet[2454]: I1213 03:57:12.595447 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kr62\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-kube-api-access-4kr62\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595558 kubelet[2454]: I1213 03:57:12.595488 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f09b42b-0142-434f-9ef8-0b7fa93b82e7-xtables-lock\") pod \"kube-proxy-q7px9\" (UID: \"5f09b42b-0142-434f-9ef8-0b7fa93b82e7\") " pod="kube-system/kube-proxy-q7px9" Dec 13 03:57:12.595738 kubelet[2454]: I1213 03:57:12.595624 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-config-path\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595738 kubelet[2454]: I1213 03:57:12.595700 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f09b42b-0142-434f-9ef8-0b7fa93b82e7-lib-modules\") pod \"kube-proxy-q7px9\" (UID: \"5f09b42b-0142-434f-9ef8-0b7fa93b82e7\") " pod="kube-system/kube-proxy-q7px9" Dec 13 03:57:12.595846 kubelet[2454]: I1213 03:57:12.595737 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-lib-modules\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595846 kubelet[2454]: I1213 03:57:12.595779 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-xtables-lock\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.595846 kubelet[2454]: I1213 03:57:12.595819 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f317399-93d4-4c84-961f-f2a797300b9c-clustermesh-secrets\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.596016 kubelet[2454]: I1213 03:57:12.595862 2454 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-net\") pod \"cilium-xtf7k\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") " pod="kube-system/cilium-xtf7k" Dec 13 03:57:12.611855 systemd[1]: Created slice kubepods-burstable-pod4f317399_93d4_4c84_961f_f2a797300b9c.slice. Dec 13 03:57:12.697748 kubelet[2454]: I1213 03:57:12.697665 2454 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 03:57:12.744169 systemd[1]: Created slice kubepods-besteffort-pod69933a75_f9e6_4329_b848_5137d6d4be6d.slice. Dec 13 03:57:12.799132 kubelet[2454]: I1213 03:57:12.799045 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnjk\" (UniqueName: \"kubernetes.io/projected/69933a75-f9e6-4329-b848-5137d6d4be6d-kube-api-access-flnjk\") pod \"cilium-operator-5d85765b45-cdl6w\" (UID: \"69933a75-f9e6-4329-b848-5137d6d4be6d\") " pod="kube-system/cilium-operator-5d85765b45-cdl6w" Dec 13 03:57:12.799461 kubelet[2454]: I1213 03:57:12.799182 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69933a75-f9e6-4329-b848-5137d6d4be6d-cilium-config-path\") pod \"cilium-operator-5d85765b45-cdl6w\" (UID: \"69933a75-f9e6-4329-b848-5137d6d4be6d\") " pod="kube-system/cilium-operator-5d85765b45-cdl6w" Dec 13 03:57:12.814708 update_engine[1556]: I1213 03:57:12.814608 1556 update_attempter.cc:509] Updating boot flags... Dec 13 03:57:12.911898 env[1562]: time="2024-12-13T03:57:12.911804773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q7px9,Uid:5f09b42b-0142-434f-9ef8-0b7fa93b82e7,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:12.913864 env[1562]: time="2024-12-13T03:57:12.913797010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtf7k,Uid:4f317399-93d4-4c84-961f-f2a797300b9c,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:12.925645 env[1562]: time="2024-12-13T03:57:12.925574675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:12.925645 env[1562]: time="2024-12-13T03:57:12.925617621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:12.925645 env[1562]: time="2024-12-13T03:57:12.925633654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:12.925881 env[1562]: time="2024-12-13T03:57:12.925773412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2e6c0e123877ee027f291d896af3ddac29b40244f2d4f1aa14a4abf4c803d1c pid=2623 runtime=io.containerd.runc.v2 Dec 13 03:57:12.927053 env[1562]: time="2024-12-13T03:57:12.927000021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:12.927053 env[1562]: time="2024-12-13T03:57:12.927035889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:12.927168 env[1562]: time="2024-12-13T03:57:12.927050821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:12.927225 env[1562]: time="2024-12-13T03:57:12.927183541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5 pid=2631 runtime=io.containerd.runc.v2 Dec 13 03:57:12.956955 systemd[1]: Started cri-containerd-18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5.scope. Dec 13 03:57:12.957641 systemd[1]: Started cri-containerd-a2e6c0e123877ee027f291d896af3ddac29b40244f2d4f1aa14a4abf4c803d1c.scope. Dec 13 03:57:12.971590 env[1562]: time="2024-12-13T03:57:12.971563719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q7px9,Uid:5f09b42b-0142-434f-9ef8-0b7fa93b82e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e6c0e123877ee027f291d896af3ddac29b40244f2d4f1aa14a4abf4c803d1c\"" Dec 13 03:57:12.971863 env[1562]: time="2024-12-13T03:57:12.971845705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xtf7k,Uid:4f317399-93d4-4c84-961f-f2a797300b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\"" Dec 13 03:57:12.972538 env[1562]: time="2024-12-13T03:57:12.972524173Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 03:57:12.972799 env[1562]: time="2024-12-13T03:57:12.972784999Z" level=info msg="CreateContainer within sandbox \"a2e6c0e123877ee027f291d896af3ddac29b40244f2d4f1aa14a4abf4c803d1c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 03:57:12.978695 env[1562]: time="2024-12-13T03:57:12.978651040Z" level=info msg="CreateContainer within sandbox \"a2e6c0e123877ee027f291d896af3ddac29b40244f2d4f1aa14a4abf4c803d1c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2219308c06e3956b2639fec3fc6acae5ddf63a6c1d8bbe98332b4d220b021d69\"" Dec 13 03:57:12.978931 env[1562]: time="2024-12-13T03:57:12.978885416Z" level=info msg="StartContainer for \"2219308c06e3956b2639fec3fc6acae5ddf63a6c1d8bbe98332b4d220b021d69\"" Dec 13 03:57:12.986704 systemd[1]: Started cri-containerd-2219308c06e3956b2639fec3fc6acae5ddf63a6c1d8bbe98332b4d220b021d69.scope. Dec 13 03:57:13.000198 env[1562]: time="2024-12-13T03:57:13.000171421Z" level=info msg="StartContainer for \"2219308c06e3956b2639fec3fc6acae5ddf63a6c1d8bbe98332b4d220b021d69\" returns successfully" Dec 13 03:57:13.047120 env[1562]: time="2024-12-13T03:57:13.047063240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cdl6w,Uid:69933a75-f9e6-4329-b848-5137d6d4be6d,Namespace:kube-system,Attempt:0,}" Dec 13 03:57:13.053930 env[1562]: time="2024-12-13T03:57:13.053880105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 03:57:13.053930 env[1562]: time="2024-12-13T03:57:13.053907447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 03:57:13.053930 env[1562]: time="2024-12-13T03:57:13.053916486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 03:57:13.054072 env[1562]: time="2024-12-13T03:57:13.053997063Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4 pid=2750 runtime=io.containerd.runc.v2 Dec 13 03:57:13.061498 systemd[1]: Started cri-containerd-a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4.scope. Dec 13 03:57:13.080155 kubelet[2454]: I1213 03:57:13.080089 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q7px9" podStartSLOduration=1.080073278 podStartE2EDuration="1.080073278s" podCreationTimestamp="2024-12-13 03:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:13.079577818 +0000 UTC m=+7.081950199" watchObservedRunningTime="2024-12-13 03:57:13.080073278 +0000 UTC m=+7.082445647" Dec 13 03:57:13.100649 env[1562]: time="2024-12-13T03:57:13.100598985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cdl6w,Uid:69933a75-f9e6-4329-b848-5137d6d4be6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\"" Dec 13 03:57:18.448527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2924236164.mount: Deactivated successfully. Dec 13 03:57:20.122115 env[1562]: time="2024-12-13T03:57:20.122090359Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:20.122637 env[1562]: time="2024-12-13T03:57:20.122625477Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:20.123548 env[1562]: time="2024-12-13T03:57:20.123518956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:20.123864 env[1562]: time="2024-12-13T03:57:20.123849600Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 03:57:20.124810 env[1562]: time="2024-12-13T03:57:20.124776670Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 03:57:20.125577 env[1562]: time="2024-12-13T03:57:20.125527298Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 03:57:20.130111 env[1562]: time="2024-12-13T03:57:20.130061814Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\"" Dec 13 03:57:20.130347 env[1562]: time="2024-12-13T03:57:20.130333427Z" level=info 
msg="StartContainer for \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\"" Dec 13 03:57:20.150775 systemd[1]: Started cri-containerd-d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f.scope. Dec 13 03:57:20.161318 env[1562]: time="2024-12-13T03:57:20.161293549Z" level=info msg="StartContainer for \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\" returns successfully" Dec 13 03:57:20.166269 systemd[1]: cri-containerd-d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f.scope: Deactivated successfully. Dec 13 03:57:21.134239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f-rootfs.mount: Deactivated successfully. Dec 13 03:57:21.232611 env[1562]: time="2024-12-13T03:57:21.232498789Z" level=info msg="shim disconnected" id=d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f Dec 13 03:57:21.233636 env[1562]: time="2024-12-13T03:57:21.232613829Z" level=warning msg="cleaning up after shim disconnected" id=d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f namespace=k8s.io Dec 13 03:57:21.233636 env[1562]: time="2024-12-13T03:57:21.232648456Z" level=info msg="cleaning up dead shim" Dec 13 03:57:21.248238 env[1562]: time="2024-12-13T03:57:21.248104116Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:57:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2960 runtime=io.containerd.runc.v2\n" Dec 13 03:57:22.093301 env[1562]: time="2024-12-13T03:57:22.093267540Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 03:57:22.100184 env[1562]: time="2024-12-13T03:57:22.100121595Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\"" Dec 13 03:57:22.100547 env[1562]: time="2024-12-13T03:57:22.100522308Z" level=info msg="StartContainer for \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\"" Dec 13 03:57:22.103463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378598073.mount: Deactivated successfully. Dec 13 03:57:22.109414 systemd[1]: Started cri-containerd-dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279.scope. Dec 13 03:57:22.120068 env[1562]: time="2024-12-13T03:57:22.120042565Z" level=info msg="StartContainer for \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\" returns successfully" Dec 13 03:57:22.126764 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 03:57:22.126932 systemd[1]: Stopped systemd-sysctl.service. Dec 13 03:57:22.127042 systemd[1]: Stopping systemd-sysctl.service... Dec 13 03:57:22.127865 systemd[1]: Starting systemd-sysctl.service... Dec 13 03:57:22.128099 systemd[1]: cri-containerd-dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279.scope: Deactivated successfully. Dec 13 03:57:22.132053 systemd[1]: Finished systemd-sysctl.service. Dec 13 03:57:22.135744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279-rootfs.mount: Deactivated successfully. 
Dec 13 03:57:22.153987 env[1562]: time="2024-12-13T03:57:22.153917796Z" level=info msg="shim disconnected" id=dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279 Dec 13 03:57:22.153987 env[1562]: time="2024-12-13T03:57:22.153943151Z" level=warning msg="cleaning up after shim disconnected" id=dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279 namespace=k8s.io Dec 13 03:57:22.153987 env[1562]: time="2024-12-13T03:57:22.153949726Z" level=info msg="cleaning up dead shim" Dec 13 03:57:22.157634 env[1562]: time="2024-12-13T03:57:22.157584039Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:57:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3021 runtime=io.containerd.runc.v2\n" Dec 13 03:57:23.103808 env[1562]: time="2024-12-13T03:57:23.103710060Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 03:57:23.123265 env[1562]: time="2024-12-13T03:57:23.123163201Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\"" Dec 13 03:57:23.124224 env[1562]: time="2024-12-13T03:57:23.124149218Z" level=info msg="StartContainer for \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\"" Dec 13 03:57:23.157810 systemd[1]: Started cri-containerd-5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732.scope. Dec 13 03:57:23.173667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788794665.mount: Deactivated successfully. Dec 13 03:57:23.187421 env[1562]: time="2024-12-13T03:57:23.187384507Z" level=info msg="StartContainer for \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\" returns successfully" Dec 13 03:57:23.190494 systemd[1]: cri-containerd-5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732.scope: Deactivated successfully. Dec 13 03:57:23.203102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732-rootfs.mount: Deactivated successfully. 
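apply-sysctl-overwrites adjusts kernel parameters by writing under /proc/sys, which is also why systemd-sysctl is restarted immediately afterwards to re-apply the host's own settings on top. A sketch of such a write; the specific key below is a hypothetical example, not the exact set Cilium touches:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // setSysctl writes a dotted sysctl key by mapping it onto its /proc/sys
    // path, e.g. net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter.
    func setSysctl(key, value string) error {
        path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
            fmt.Println(err) // needs root and a writable /proc/sys
        }
    }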
Dec 13 03:57:23.210345 env[1562]: time="2024-12-13T03:57:23.210320736Z" level=info msg="shim disconnected" id=5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732 Dec 13 03:57:23.210433 env[1562]: time="2024-12-13T03:57:23.210346318Z" level=warning msg="cleaning up after shim disconnected" id=5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732 namespace=k8s.io Dec 13 03:57:23.210433 env[1562]: time="2024-12-13T03:57:23.210352264Z" level=info msg="cleaning up dead shim" Dec 13 03:57:23.214167 env[1562]: time="2024-12-13T03:57:23.214125154Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:57:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3078 runtime=io.containerd.runc.v2\n" Dec 13 03:57:23.583402 env[1562]: time="2024-12-13T03:57:23.583348063Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:23.583914 env[1562]: time="2024-12-13T03:57:23.583901380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:23.584525 env[1562]: time="2024-12-13T03:57:23.584511858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 03:57:23.584813 env[1562]: time="2024-12-13T03:57:23.584785564Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 03:57:23.585936 env[1562]: time="2024-12-13T03:57:23.585921783Z" level=info msg="CreateContainer within sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 03:57:23.590785 env[1562]: time="2024-12-13T03:57:23.590742984Z" level=info msg="CreateContainer within sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\"" Dec 13 03:57:23.591146 env[1562]: time="2024-12-13T03:57:23.591093394Z" level=info msg="StartContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\"" Dec 13 03:57:23.599807 systemd[1]: Started cri-containerd-ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e.scope. 
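mount-bpf-fs ensures the BPF filesystem is mounted at /sys/fs/bpf so the agent can pin maps and programs across restarts. A minimal sketch of that mount via golang.org/x/sys/unix (requires root; idempotency handling beyond EBUSY is omitted):

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
        if err != nil && err != unix.EBUSY { // EBUSY: already mounted
            fmt.Println("mount bpffs:", err)
            return
        }
        fmt.Println("/sys/fs/bpf is mounted")
    }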
Dec 13 03:57:23.611907 env[1562]: time="2024-12-13T03:57:23.611849366Z" level=info msg="StartContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" returns successfully" Dec 13 03:57:24.104150 env[1562]: time="2024-12-13T03:57:24.104125263Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 03:57:24.108845 kubelet[2454]: I1213 03:57:24.108807 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cdl6w" podStartSLOduration=1.625012956 podStartE2EDuration="12.108794251s" podCreationTimestamp="2024-12-13 03:57:12 +0000 UTC" firstStartedPulling="2024-12-13 03:57:13.101577586 +0000 UTC m=+7.103949948" lastFinishedPulling="2024-12-13 03:57:23.585358885 +0000 UTC m=+17.587731243" observedRunningTime="2024-12-13 03:57:24.108555194 +0000 UTC m=+18.110927560" watchObservedRunningTime="2024-12-13 03:57:24.108794251 +0000 UTC m=+18.111166611" Dec 13 03:57:24.109158 env[1562]: time="2024-12-13T03:57:24.108995662Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\"" Dec 13 03:57:24.109251 env[1562]: time="2024-12-13T03:57:24.109238037Z" level=info msg="StartContainer for \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\"" Dec 13 03:57:24.124958 systemd[1]: Started cri-containerd-ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f.scope. Dec 13 03:57:24.137999 env[1562]: time="2024-12-13T03:57:24.137947253Z" level=info msg="StartContainer for \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\" returns successfully" Dec 13 03:57:24.140305 systemd[1]: cri-containerd-ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f.scope: Deactivated successfully. Dec 13 03:57:24.146591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f-rootfs.mount: Deactivated successfully. 
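Both Cilium images are referenced by tag and digest together ("...:v1.12.5@sha256:..."), so the digest pins the exact content while the tag is informational, and PullImage resolves the reference to a local image id. A hand-rolled parser for that reference form (real code would use a distribution reference library):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef separates repository, tag, and digest from a pinned image
    // reference of the form repo:tag@sha256:digest.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // The tag colon must come after the last path separator, so that
        // registry ports (e.g. localhost:5000/img) are not mistaken for tags.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef(
            "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
        fmt.Println("repo:  ", repo)
        fmt.Println("tag:   ", tag)
        fmt.Println("digest:", digest)
    }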
Dec 13 03:57:24.297091 env[1562]: time="2024-12-13T03:57:24.297023086Z" level=info msg="shim disconnected" id=ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f Dec 13 03:57:24.297091 env[1562]: time="2024-12-13T03:57:24.297089557Z" level=warning msg="cleaning up after shim disconnected" id=ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f namespace=k8s.io Dec 13 03:57:24.297388 env[1562]: time="2024-12-13T03:57:24.297107256Z" level=info msg="cleaning up dead shim" Dec 13 03:57:24.309527 env[1562]: time="2024-12-13T03:57:24.309403357Z" level=warning msg="cleanup warnings time=\"2024-12-13T03:57:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3179 runtime=io.containerd.runc.v2\n" Dec 13 03:57:25.107216 env[1562]: time="2024-12-13T03:57:25.107194038Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 03:57:25.112991 env[1562]: time="2024-12-13T03:57:25.112940159Z" level=info msg="CreateContainer within sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\"" Dec 13 03:57:25.113249 env[1562]: time="2024-12-13T03:57:25.113209506Z" level=info msg="StartContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\"" Dec 13 03:57:25.113965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount725557230.mount: Deactivated successfully. Dec 13 03:57:25.121238 systemd[1]: Started cri-containerd-05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056.scope. Dec 13 03:57:25.134584 env[1562]: time="2024-12-13T03:57:25.134530082Z" level=info msg="StartContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" returns successfully" Dec 13 03:57:25.187485 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 03:57:25.258799 kubelet[2454]: I1213 03:57:25.258781 2454 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 03:57:25.274377 systemd[1]: Created slice kubepods-burstable-pod762a22ac_14ae_43c3_929d_dad5a4d3d015.slice. Dec 13 03:57:25.276981 systemd[1]: Created slice kubepods-burstable-podb60cc10f_cd70_4714_b626_ebabe3a61835.slice. 
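The Spectre V2 warning fires as the newly started cilium-agent loads eBPF programs while unprivileged eBPF is still permitted on this kernel. Whether unprivileged BPF is allowed is governed by the kernel.unprivileged_bpf_disabled sysctl, which this sketch reads (0 means allowed; nonzero means disabled):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
        if err != nil {
            fmt.Println(err)
            return
        }
        if string(bytes.TrimSpace(raw)) == "0" {
            fmt.Println("unprivileged eBPF is enabled (matches the kernel warning above)")
        } else {
            fmt.Println("unprivileged eBPF is disabled")
        }
    }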
Dec 13 03:57:25.286961 kubelet[2454]: I1213 03:57:25.286938 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj62f\" (UniqueName: \"kubernetes.io/projected/b60cc10f-cd70-4714-b626-ebabe3a61835-kube-api-access-jj62f\") pod \"coredns-6f6b679f8f-pm4b5\" (UID: \"b60cc10f-cd70-4714-b626-ebabe3a61835\") " pod="kube-system/coredns-6f6b679f8f-pm4b5"
Dec 13 03:57:25.286961 kubelet[2454]: I1213 03:57:25.286961 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/762a22ac-14ae-43c3-929d-dad5a4d3d015-config-volume\") pod \"coredns-6f6b679f8f-4frv7\" (UID: \"762a22ac-14ae-43c3-929d-dad5a4d3d015\") " pod="kube-system/coredns-6f6b679f8f-4frv7"
Dec 13 03:57:25.287066 kubelet[2454]: I1213 03:57:25.286992 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mm2\" (UniqueName: \"kubernetes.io/projected/762a22ac-14ae-43c3-929d-dad5a4d3d015-kube-api-access-w5mm2\") pod \"coredns-6f6b679f8f-4frv7\" (UID: \"762a22ac-14ae-43c3-929d-dad5a4d3d015\") " pod="kube-system/coredns-6f6b679f8f-4frv7"
Dec 13 03:57:25.287066 kubelet[2454]: I1213 03:57:25.287025 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cc10f-cd70-4714-b626-ebabe3a61835-config-volume\") pod \"coredns-6f6b679f8f-pm4b5\" (UID: \"b60cc10f-cd70-4714-b626-ebabe3a61835\") " pod="kube-system/coredns-6f6b679f8f-pm4b5"
Dec 13 03:57:25.335509 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 03:57:25.578305 env[1562]: time="2024-12-13T03:57:25.578155414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4frv7,Uid:762a22ac-14ae-43c3-929d-dad5a4d3d015,Namespace:kube-system,Attempt:0,}"
Dec 13 03:57:25.579069 env[1562]: time="2024-12-13T03:57:25.578978874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pm4b5,Uid:b60cc10f-cd70-4714-b626-ebabe3a61835,Namespace:kube-system,Attempt:0,}"
Dec 13 03:57:26.118624 kubelet[2454]: I1213 03:57:26.118593 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xtf7k" podStartSLOduration=6.966223592 podStartE2EDuration="14.118582762s" podCreationTimestamp="2024-12-13 03:57:12 +0000 UTC" firstStartedPulling="2024-12-13 03:57:12.972317864 +0000 UTC m=+6.974690223" lastFinishedPulling="2024-12-13 03:57:20.124677028 +0000 UTC m=+14.127049393" observedRunningTime="2024-12-13 03:57:26.118057317 +0000 UTC m=+20.120429680" watchObservedRunningTime="2024-12-13 03:57:26.118582762 +0000 UTC m=+20.120955121"
Dec 13 03:57:26.946192 systemd-networkd[1318]: cilium_host: Link UP
Dec 13 03:57:26.946295 systemd-networkd[1318]: cilium_net: Link UP
Dec 13 03:57:26.953380 systemd-networkd[1318]: cilium_net: Gained carrier
Dec 13 03:57:26.960543 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 03:57:26.960579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 03:57:26.960563 systemd-networkd[1318]: cilium_host: Gained carrier
Dec 13 03:57:26.960747 systemd-networkd[1318]: cilium_host: Gained IPv6LL
Dec 13 03:57:27.004984 systemd-networkd[1318]: cilium_vxlan: Link UP
Dec 13 03:57:27.004987 systemd-networkd[1318]: cilium_vxlan: Gained carrier
Dec 13 03:57:27.137436 kernel: NET: Registered PF_ALG protocol family
Dec 13 03:57:27.222547 systemd-networkd[1318]: cilium_net: Gained IPv6LL
Dec 13 03:57:27.700910 systemd-networkd[1318]: lxc_health: Link UP
Dec 13 03:57:27.723272 systemd-networkd[1318]: lxc_health: Gained carrier
Dec 13 03:57:27.723439 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 03:57:28.036840 systemd[1]: Started sshd@8-145.40.90.151:22-218.92.0.230:60214.service.
Dec 13 03:57:28.110771 systemd-networkd[1318]: lxcdf436ecb1648: Link UP
Dec 13 03:57:28.145502 kernel: eth0: renamed from tmpd0265
Dec 13 03:57:28.168495 kernel: eth0: renamed from tmp7c64e
Dec 13 03:57:28.179692 systemd-networkd[1318]: lxc3cd93575c417: Link UP
Dec 13 03:57:28.193873 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:57:28.193926 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdf436ecb1648: link becomes ready
Dec 13 03:57:28.193941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 03:57:28.207996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3cd93575c417: link becomes ready
Dec 13 03:57:28.208605 systemd-networkd[1318]: lxcdf436ecb1648: Gained carrier
Dec 13 03:57:28.208756 systemd-networkd[1318]: lxc3cd93575c417: Gained carrier
Dec 13 03:57:28.294520 systemd-networkd[1318]: cilium_vxlan: Gained IPv6LL
Dec 13 03:57:28.806568 systemd-networkd[1318]: lxc_health: Gained IPv6LL
Dec 13 03:57:29.027900 sshd[3817]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:57:29.766559 systemd-networkd[1318]: lxc3cd93575c417: Gained IPv6LL
Dec 13 03:57:29.958564 systemd-networkd[1318]: lxcdf436ecb1648: Gained IPv6LL
Dec 13 03:57:30.484150 env[1562]: time="2024-12-13T03:57:30.484111717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:57:30.484150 env[1562]: time="2024-12-13T03:57:30.484131767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:57:30.484150 env[1562]: time="2024-12-13T03:57:30.484138813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:57:30.484150 env[1562]: time="2024-12-13T03:57:30.484133165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 03:57:30.484150 env[1562]: time="2024-12-13T03:57:30.484149848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 03:57:30.484445 env[1562]: time="2024-12-13T03:57:30.484158513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 03:57:30.484445 env[1562]: time="2024-12-13T03:57:30.484202155Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0265e1ccb4218945d5be4353ccd8e63baaeb681b919d2131c680c94c488be67 pid=3873 runtime=io.containerd.runc.v2
Dec 13 03:57:30.484445 env[1562]: time="2024-12-13T03:57:30.484289699Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c64ee86b6b28a7e3fa9bb7a9567250c6886a38500c961ea960bb2f097c4f4f2 pid=3874 runtime=io.containerd.runc.v2
Dec 13 03:57:30.492084 systemd[1]: Started cri-containerd-7c64ee86b6b28a7e3fa9bb7a9567250c6886a38500c961ea960bb2f097c4f4f2.scope.
Dec 13 03:57:30.492668 systemd[1]: Started cri-containerd-d0265e1ccb4218945d5be4353ccd8e63baaeb681b919d2131c680c94c488be67.scope.
Dec 13 03:57:30.512889 env[1562]: time="2024-12-13T03:57:30.512838182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4frv7,Uid:762a22ac-14ae-43c3-929d-dad5a4d3d015,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c64ee86b6b28a7e3fa9bb7a9567250c6886a38500c961ea960bb2f097c4f4f2\""
Dec 13 03:57:30.513761 env[1562]: time="2024-12-13T03:57:30.513743567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pm4b5,Uid:b60cc10f-cd70-4714-b626-ebabe3a61835,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0265e1ccb4218945d5be4353ccd8e63baaeb681b919d2131c680c94c488be67\""
Dec 13 03:57:30.514016 env[1562]: time="2024-12-13T03:57:30.514002741Z" level=info msg="CreateContainer within sandbox \"7c64ee86b6b28a7e3fa9bb7a9567250c6886a38500c961ea960bb2f097c4f4f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 03:57:30.514638 env[1562]: time="2024-12-13T03:57:30.514625339Z" level=info msg="CreateContainer within sandbox \"d0265e1ccb4218945d5be4353ccd8e63baaeb681b919d2131c680c94c488be67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 03:57:30.519385 env[1562]: time="2024-12-13T03:57:30.519367487Z" level=info msg="CreateContainer within sandbox \"7c64ee86b6b28a7e3fa9bb7a9567250c6886a38500c961ea960bb2f097c4f4f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edcb8fa2cc9431d482fa5f08e74af56a507a9dc066944d925e4b20855333d0c7\""
Dec 13 03:57:30.519607 env[1562]: time="2024-12-13T03:57:30.519590128Z" level=info msg="StartContainer for \"edcb8fa2cc9431d482fa5f08e74af56a507a9dc066944d925e4b20855333d0c7\""
Dec 13 03:57:30.520317 env[1562]: time="2024-12-13T03:57:30.520301152Z" level=info msg="CreateContainer within sandbox \"d0265e1ccb4218945d5be4353ccd8e63baaeb681b919d2131c680c94c488be67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"835bd13b987f368b9144365fe2ee0fce737080407aa6b14772e141ca6d519dd6\""
Dec 13 03:57:30.520483 env[1562]: time="2024-12-13T03:57:30.520471237Z" level=info msg="StartContainer for \"835bd13b987f368b9144365fe2ee0fce737080407aa6b14772e141ca6d519dd6\""
Dec 13 03:57:30.527162 systemd[1]: Started cri-containerd-835bd13b987f368b9144365fe2ee0fce737080407aa6b14772e141ca6d519dd6.scope.
Dec 13 03:57:30.527785 systemd[1]: Started cri-containerd-edcb8fa2cc9431d482fa5f08e74af56a507a9dc066944d925e4b20855333d0c7.scope.
Dec 13 03:57:30.542834 env[1562]: time="2024-12-13T03:57:30.542801426Z" level=info msg="StartContainer for \"835bd13b987f368b9144365fe2ee0fce737080407aa6b14772e141ca6d519dd6\" returns successfully"
Dec 13 03:57:30.542933 env[1562]: time="2024-12-13T03:57:30.542803051Z" level=info msg="StartContainer for \"edcb8fa2cc9431d482fa5f08e74af56a507a9dc066944d925e4b20855333d0c7\" returns successfully"
Dec 13 03:57:31.147320 kubelet[2454]: I1213 03:57:31.147278 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pm4b5" podStartSLOduration=19.147264071 podStartE2EDuration="19.147264071s" podCreationTimestamp="2024-12-13 03:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:31.147002124 +0000 UTC m=+25.149374489" watchObservedRunningTime="2024-12-13 03:57:31.147264071 +0000 UTC m=+25.149636431"
Dec 13 03:57:31.155305 kubelet[2454]: I1213 03:57:31.155271 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4frv7" podStartSLOduration=19.155257769 podStartE2EDuration="19.155257769s" podCreationTimestamp="2024-12-13 03:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 03:57:31.155250335 +0000 UTC m=+25.157622706" watchObservedRunningTime="2024-12-13 03:57:31.155257769 +0000 UTC m=+25.157630135"
Dec 13 03:57:31.225004 sshd[3817]: Failed password for root from 218.92.0.230 port 60214 ssh2
Dec 13 03:57:34.203098 sshd[3817]: Failed password for root from 218.92.0.230 port 60214 ssh2
Dec 13 03:57:37.842250 systemd[1]: Started sshd@9-145.40.90.151:22-218.92.0.155:58261.service.
Dec 13 03:57:37.847541 sshd[3817]: Failed password for root from 218.92.0.230 port 60214 ssh2
Dec 13 03:57:38.723288 sshd[3817]: Received disconnect from 218.92.0.230 port 60214:11: [preauth]
Dec 13 03:57:38.723288 sshd[3817]: Disconnected from authenticating user root 218.92.0.230 port 60214 [preauth]
Dec 13 03:57:38.723897 sshd[3817]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:57:38.725918 systemd[1]: sshd@8-145.40.90.151:22-218.92.0.230:60214.service: Deactivated successfully.
Dec 13 03:57:38.759400 sshd[4043]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:57:38.880672 systemd[1]: Started sshd@10-145.40.90.151:22-218.92.0.230:63620.service.
Dec 13 03:57:39.501391 kubelet[2454]: I1213 03:57:39.501314 2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 03:57:39.857147 sshd[4047]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:57:39.857382 sshd[4047]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Dec 13 03:57:40.191049 systemd[1]: Started sshd@11-145.40.90.151:22-51.89.216.178:41192.service.
Dec 13 03:57:40.524876 sshd[4043]: Failed password for root from 218.92.0.155 port 58261 ssh2
Dec 13 03:57:41.028413 sshd[4050]: Invalid user gitadmin from 51.89.216.178 port 41192
Dec 13 03:57:41.035276 sshd[4050]: pam_faillock(sshd:auth): User unknown
Dec 13 03:57:41.036388 sshd[4050]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 03:57:41.036532 sshd[4050]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178
Dec 13 03:57:41.037595 sshd[4050]: pam_faillock(sshd:auth): User unknown
Dec 13 03:57:42.094182 sshd[4047]: Failed password for root from 218.92.0.230 port 63620 ssh2
Dec 13 03:57:43.213757 sshd[4050]: Failed password for invalid user gitadmin from 51.89.216.178 port 41192 ssh2
Dec 13 03:57:44.157632 sshd[4043]: Failed password for root from 218.92.0.155 port 58261 ssh2
Dec 13 03:57:44.521602 sshd[4050]: Received disconnect from 51.89.216.178 port 41192:11: Bye Bye [preauth]
Dec 13 03:57:44.521602 sshd[4050]: Disconnected from invalid user gitadmin 51.89.216.178 port 41192 [preauth]
Dec 13 03:57:44.524196 systemd[1]: sshd@11-145.40.90.151:22-51.89.216.178:41192.service: Deactivated successfully.
Dec 13 03:57:44.873721 sshd[4047]: Failed password for root from 218.92.0.230 port 63620 ssh2
Dec 13 03:57:46.926824 sshd[4043]: Failed password for root from 218.92.0.155 port 58261 ssh2
Dec 13 03:57:48.423694 sshd[4043]: Received disconnect from 218.92.0.155 port 58261:11: [preauth]
Dec 13 03:57:48.423694 sshd[4043]: Disconnected from authenticating user root 218.92.0.155 port 58261 [preauth]
Dec 13 03:57:48.424244 sshd[4043]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:57:48.426280 systemd[1]: sshd@9-145.40.90.151:22-218.92.0.155:58261.service: Deactivated successfully.
Dec 13 03:57:48.517347 sshd[4047]: Failed password for root from 218.92.0.230 port 63620 ssh2
Dec 13 03:57:49.552028 sshd[4047]: Received disconnect from 218.92.0.230 port 63620:11: [preauth]
Dec 13 03:57:49.552028 sshd[4047]: Disconnected from authenticating user root 218.92.0.230 port 63620 [preauth]
Dec 13 03:57:49.552669 sshd[4047]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:57:49.554754 systemd[1]: sshd@10-145.40.90.151:22-218.92.0.230:63620.service: Deactivated successfully.
Dec 13 03:57:49.741756 systemd[1]: Started sshd@12-145.40.90.151:22-218.92.0.230:60470.service.
Dec 13 03:57:50.819487 sshd[4060]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:57:52.563711 sshd[4060]: Failed password for root from 218.92.0.230 port 60470 ssh2
Dec 13 03:57:56.695885 sshd[4060]: Failed password for root from 218.92.0.230 port 60470 ssh2
Dec 13 03:57:59.020628 sshd[4060]: Failed password for root from 218.92.0.230 port 60470 ssh2
Dec 13 03:58:00.563980 sshd[4060]: Received disconnect from 218.92.0.230 port 60470:11: [preauth]
Dec 13 03:58:00.563980 sshd[4060]: Disconnected from authenticating user root 218.92.0.230 port 60470 [preauth]
Dec 13 03:58:00.564549 sshd[4060]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.230 user=root
Dec 13 03:58:00.566591 systemd[1]: sshd@12-145.40.90.151:22-218.92.0.230:60470.service: Deactivated successfully.
Dec 13 03:58:24.846952 systemd[1]: Started sshd@13-145.40.90.151:22-92.255.85.188:35952.service.
Dec 13 03:58:26.306551 sshd[4070]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.188 user=root
Dec 13 03:58:28.327792 sshd[4070]: Failed password for root from 92.255.85.188 port 35952 ssh2
Dec 13 03:58:29.567654 sshd[4070]: Connection closed by authenticating user root 92.255.85.188 port 35952 [preauth]
Dec 13 03:58:29.570202 systemd[1]: sshd@13-145.40.90.151:22-92.255.85.188:35952.service: Deactivated successfully.
Dec 13 03:58:31.393581 systemd[1]: Started sshd@14-145.40.90.151:22-92.27.157.252:35664.service.
Dec 13 03:58:32.240733 sshd[4075]: Invalid user abrt from 92.27.157.252 port 35664
Dec 13 03:58:32.246281 sshd[4075]: pam_faillock(sshd:auth): User unknown
Dec 13 03:58:32.247323 sshd[4075]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 03:58:32.247418 sshd[4075]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.27.157.252
Dec 13 03:58:32.248393 sshd[4075]: pam_faillock(sshd:auth): User unknown
Dec 13 03:58:34.093884 sshd[4075]: Failed password for invalid user abrt from 92.27.157.252 port 35664 ssh2
Dec 13 03:58:35.940737 sshd[4075]: Received disconnect from 92.27.157.252 port 35664:11: Bye Bye [preauth]
Dec 13 03:58:35.940737 sshd[4075]: Disconnected from invalid user abrt 92.27.157.252 port 35664 [preauth]
Dec 13 03:58:35.943263 systemd[1]: sshd@14-145.40.90.151:22-92.27.157.252:35664.service: Deactivated successfully.
Dec 13 03:59:16.360682 systemd[1]: Started sshd@15-145.40.90.151:22-51.89.216.178:42486.service.
Dec 13 03:59:17.174332 sshd[4086]: Invalid user dkv from 51.89.216.178 port 42486
Dec 13 03:59:17.175685 sshd[4086]: pam_faillock(sshd:auth): User unknown
Dec 13 03:59:17.175925 sshd[4086]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 03:59:17.175944 sshd[4086]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178
Dec 13 03:59:17.176162 sshd[4086]: pam_faillock(sshd:auth): User unknown
Dec 13 03:59:18.865891 sshd[4086]: Failed password for invalid user dkv from 51.89.216.178 port 42486 ssh2
Dec 13 03:59:19.385796 systemd[1]: Started sshd@16-145.40.90.151:22-218.92.0.155:23682.service.
Dec 13 03:59:19.468587 sshd[4086]: Received disconnect from 51.89.216.178 port 42486:11: Bye Bye [preauth]
Dec 13 03:59:19.468587 sshd[4086]: Disconnected from invalid user dkv 51.89.216.178 port 42486 [preauth]
Dec 13 03:59:19.471159 systemd[1]: sshd@15-145.40.90.151:22-51.89.216.178:42486.service: Deactivated successfully.
Dec 13 03:59:20.460619 sshd[4089]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:59:22.561706 sshd[4089]: Failed password for root from 218.92.0.155 port 23682 ssh2
Dec 13 03:59:26.216679 sshd[4089]: Failed password for root from 218.92.0.155 port 23682 ssh2
Dec 13 03:59:28.536414 sshd[4089]: Failed password for root from 218.92.0.155 port 23682 ssh2
Dec 13 03:59:30.187039 sshd[4089]: Received disconnect from 218.92.0.155 port 23682:11: [preauth]
Dec 13 03:59:30.187039 sshd[4089]: Disconnected from authenticating user root 218.92.0.155 port 23682 [preauth]
Dec 13 03:59:30.187646 sshd[4089]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 03:59:30.189692 systemd[1]: sshd@16-145.40.90.151:22-218.92.0.155:23682.service: Deactivated successfully.
Dec 13 04:00:16.302537 systemd[1]: Started sshd@17-145.40.90.151:22-92.27.157.252:56501.service.
Dec 13 04:00:17.151241 sshd[4100]: Invalid user rh from 92.27.157.252 port 56501
Dec 13 04:00:17.157801 sshd[4100]: pam_faillock(sshd:auth): User unknown
Dec 13 04:00:17.158987 sshd[4100]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 04:00:17.159085 sshd[4100]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.27.157.252
Dec 13 04:00:17.160141 sshd[4100]: pam_faillock(sshd:auth): User unknown
Dec 13 04:00:19.753309 sshd[4100]: Failed password for invalid user rh from 92.27.157.252 port 56501 ssh2
Dec 13 04:00:21.249245 sshd[4100]: Received disconnect from 92.27.157.252 port 56501:11: Bye Bye [preauth]
Dec 13 04:00:21.249245 sshd[4100]: Disconnected from invalid user rh 92.27.157.252 port 56501 [preauth]
Dec 13 04:00:21.251827 systemd[1]: sshd@17-145.40.90.151:22-92.27.157.252:56501.service: Deactivated successfully.
Dec 13 04:00:36.074277 systemd[1]: Started sshd@18-145.40.90.151:22-218.92.0.114:18942.service.
Dec 13 04:00:37.018517 sshd[4104]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:00:38.356815 sshd[4104]: Failed password for root from 218.92.0.114 port 18942 ssh2
Dec 13 04:00:38.702989 sshd[4104]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Dec 13 04:00:40.176945 sshd[4104]: Failed password for root from 218.92.0.114 port 18942 ssh2
Dec 13 04:00:42.804596 sshd[4104]: Failed password for root from 218.92.0.114 port 18942 ssh2
Dec 13 04:00:43.612947 sshd[4104]: Received disconnect from 218.92.0.114 port 18942:11: [preauth]
Dec 13 04:00:43.612947 sshd[4104]: Disconnected from authenticating user root 218.92.0.114 port 18942 [preauth]
Dec 13 04:00:43.613522 sshd[4104]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:00:43.615637 systemd[1]: sshd@18-145.40.90.151:22-218.92.0.114:18942.service: Deactivated successfully.
Dec 13 04:00:48.811056 systemd[1]: Started sshd@19-145.40.90.151:22-218.92.0.114:28684.service.
Dec 13 04:00:49.946079 sshd[4110]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:00:51.119415 systemd[1]: Started sshd@20-145.40.90.151:22-51.89.216.178:60398.service.
Dec 13 04:00:51.264718 sshd[4110]: Failed password for root from 218.92.0.114 port 28684 ssh2
Dec 13 04:00:51.922856 sshd[4113]: Invalid user ts3 from 51.89.216.178 port 60398
Dec 13 04:00:51.929370 sshd[4113]: pam_faillock(sshd:auth): User unknown
Dec 13 04:00:51.930458 sshd[4113]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 04:00:51.930552 sshd[4113]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178
Dec 13 04:00:51.931550 sshd[4113]: pam_faillock(sshd:auth): User unknown
Dec 13 04:00:52.919683 sshd[4110]: Failed password for root from 218.92.0.114 port 28684 ssh2
Dec 13 04:00:53.189799 sshd[4113]: Failed password for invalid user ts3 from 51.89.216.178 port 60398 ssh2
Dec 13 04:00:54.186441 sshd[4113]: Received disconnect from 51.89.216.178 port 60398:11: Bye Bye [preauth]
Dec 13 04:00:54.186441 sshd[4113]: Disconnected from invalid user ts3 51.89.216.178 port 60398 [preauth]
Dec 13 04:00:54.187191 systemd[1]: sshd@20-145.40.90.151:22-51.89.216.178:60398.service: Deactivated successfully.
Dec 13 04:00:55.577632 sshd[4110]: Failed password for root from 218.92.0.114 port 28684 ssh2
Dec 13 04:00:56.635817 sshd[4110]: Received disconnect from 218.92.0.114 port 28684:11: [preauth]
Dec 13 04:00:56.635817 sshd[4110]: Disconnected from authenticating user root 218.92.0.114 port 28684 [preauth]
Dec 13 04:00:56.636367 sshd[4110]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:00:56.638450 systemd[1]: sshd@19-145.40.90.151:22-218.92.0.114:28684.service: Deactivated successfully.
Dec 13 04:00:56.809297 systemd[1]: Started sshd@21-145.40.90.151:22-218.92.0.114:40548.service.
Dec 13 04:00:57.919218 sshd[4118]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:00:59.669474 sshd[4118]: Failed password for root from 218.92.0.114 port 40548 ssh2
Dec 13 04:01:02.031109 systemd[1]: Started sshd@22-145.40.90.151:22-218.92.0.155:53624.service.
Dec 13 04:01:02.471036 sshd[4118]: Failed password for root from 218.92.0.114 port 40548 ssh2
Dec 13 04:01:03.064157 sshd[4121]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 04:01:04.655082 sshd[4118]: Failed password for root from 218.92.0.114 port 40548 ssh2
Dec 13 04:01:04.969818 sshd[4121]: Failed password for root from 218.92.0.155 port 53624 ssh2
Dec 13 04:01:06.138356 sshd[4118]: Received disconnect from 218.92.0.114 port 40548:11: [preauth]
Dec 13 04:01:06.138356 sshd[4118]: Disconnected from authenticating user root 218.92.0.114 port 40548 [preauth]
Dec 13 04:01:06.138929 sshd[4118]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.114 user=root
Dec 13 04:01:06.140986 systemd[1]: sshd@21-145.40.90.151:22-218.92.0.114:40548.service: Deactivated successfully.
Dec 13 04:01:08.957608 sshd[4121]: Failed password for root from 218.92.0.155 port 53624 ssh2
Dec 13 04:01:11.617861 sshd[4121]: Failed password for root from 218.92.0.155 port 53624 ssh2
Dec 13 04:01:12.794529 sshd[4121]: Received disconnect from 218.92.0.155 port 53624:11: [preauth]
Dec 13 04:01:12.794529 sshd[4121]: Disconnected from authenticating user root 218.92.0.155 port 53624 [preauth]
Dec 13 04:01:12.795104 sshd[4121]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 04:01:12.797184 systemd[1]: sshd@22-145.40.90.151:22-218.92.0.155:53624.service: Deactivated successfully.
Dec 13 04:02:01.972324 systemd[1]: Started sshd@23-145.40.90.151:22-92.27.157.252:49105.service.
Dec 13 04:02:02.835415 sshd[4136]: Invalid user tempuser from 92.27.157.252 port 49105
Dec 13 04:02:02.842049 sshd[4136]: pam_faillock(sshd:auth): User unknown
Dec 13 04:02:02.842377 sshd[4136]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 04:02:02.842395 sshd[4136]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.27.157.252
Dec 13 04:02:02.842714 sshd[4136]: pam_faillock(sshd:auth): User unknown
Dec 13 04:02:04.516691 sshd[4136]: Failed password for invalid user tempuser from 92.27.157.252 port 49105 ssh2
Dec 13 04:02:05.830243 sshd[4136]: Received disconnect from 92.27.157.252 port 49105:11: Bye Bye [preauth]
Dec 13 04:02:05.830243 sshd[4136]: Disconnected from invalid user tempuser 92.27.157.252 port 49105 [preauth]
Dec 13 04:02:05.832882 systemd[1]: sshd@23-145.40.90.151:22-92.27.157.252:49105.service: Deactivated successfully.
Dec 13 04:02:22.959700 systemd[1]: Started sshd@24-145.40.90.151:22-51.89.216.178:50152.service.
Dec 13 04:02:23.763915 sshd[4145]: Invalid user zxc from 51.89.216.178 port 50152
Dec 13 04:02:23.770517 sshd[4145]: pam_faillock(sshd:auth): User unknown
Dec 13 04:02:23.771716 sshd[4145]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 04:02:23.771810 sshd[4145]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178
Dec 13 04:02:23.772891 sshd[4145]: pam_faillock(sshd:auth): User unknown
Dec 13 04:02:25.326708 sshd[4145]: Failed password for invalid user zxc from 51.89.216.178 port 50152 ssh2
Dec 13 04:02:26.240949 sshd[4145]: Received disconnect from 51.89.216.178 port 50152:11: Bye Bye [preauth]
Dec 13 04:02:26.240949 sshd[4145]: Disconnected from invalid user zxc 51.89.216.178 port 50152 [preauth]
Dec 13 04:02:26.243634 systemd[1]: sshd@24-145.40.90.151:22-51.89.216.178:50152.service: Deactivated successfully.
Dec 13 04:02:42.435489 systemd[1]: Started sshd@25-145.40.90.151:22-218.92.0.155:18739.service.
Dec 13 04:02:43.872964 sshd[4149]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 04:02:46.174697 sshd[4149]: Failed password for root from 218.92.0.155 port 18739 ssh2
Dec 13 04:02:49.301460 sshd[4149]: Failed password for root from 218.92.0.155 port 18739 ssh2
Dec 13 04:02:52.284625 sshd[4149]: Failed password for root from 218.92.0.155 port 18739 ssh2
Dec 13 04:02:53.592869 sshd[4149]: Received disconnect from 218.92.0.155 port 18739:11: [preauth]
Dec 13 04:02:53.592869 sshd[4149]: Disconnected from authenticating user root 218.92.0.155 port 18739 [preauth]
Dec 13 04:02:53.593408 sshd[4149]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.155 user=root
Dec 13 04:02:53.595516 systemd[1]: sshd@25-145.40.90.151:22-218.92.0.155:18739.service: Deactivated successfully.
Dec 13 04:03:10.947875 systemd[1]: Started sshd@26-145.40.90.151:22-139.178.68.195:41750.service.
Dec 13 04:03:11.049527 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 41750 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:11.052985 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:11.064369 systemd-logind[1614]: New session 10 of user core.
Dec 13 04:03:11.066909 systemd[1]: Started session-10.scope.
Dec 13 04:03:11.195768 sshd[4158]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:11.197208 systemd[1]: sshd@26-145.40.90.151:22-139.178.68.195:41750.service: Deactivated successfully.
Dec 13 04:03:11.197658 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 04:03:11.198031 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit.
Dec 13 04:03:11.198582 systemd-logind[1614]: Removed session 10.
Dec 13 04:03:16.205833 systemd[1]: Started sshd@27-145.40.90.151:22-139.178.68.195:33758.service.
Dec 13 04:03:16.269587 sshd[4191]: Accepted publickey for core from 139.178.68.195 port 33758 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:16.272916 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:16.283661 systemd-logind[1614]: New session 11 of user core.
Dec 13 04:03:16.286198 systemd[1]: Started session-11.scope.
Dec 13 04:03:16.395355 sshd[4191]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:16.396902 systemd[1]: sshd@27-145.40.90.151:22-139.178.68.195:33758.service: Deactivated successfully.
Dec 13 04:03:16.397319 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 04:03:16.397755 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit.
Dec 13 04:03:16.398316 systemd-logind[1614]: Removed session 11.
Dec 13 04:03:21.404395 systemd[1]: Started sshd@28-145.40.90.151:22-139.178.68.195:33764.service.
Dec 13 04:03:21.441641 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 33764 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:21.442600 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:21.445500 systemd-logind[1614]: New session 12 of user core.
Dec 13 04:03:21.446433 systemd[1]: Started session-12.scope.
Dec 13 04:03:21.540763 sshd[4217]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:21.542187 systemd[1]: sshd@28-145.40.90.151:22-139.178.68.195:33764.service: Deactivated successfully.
Dec 13 04:03:21.542658 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 04:03:21.543128 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit.
Dec 13 04:03:21.543660 systemd-logind[1614]: Removed session 12.
Dec 13 04:03:26.549947 systemd[1]: Started sshd@29-145.40.90.151:22-139.178.68.195:58150.service.
Dec 13 04:03:26.586832 sshd[4243]: Accepted publickey for core from 139.178.68.195 port 58150 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:26.587537 sshd[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:26.589746 systemd-logind[1614]: New session 13 of user core.
Dec 13 04:03:26.590321 systemd[1]: Started session-13.scope.
Dec 13 04:03:26.681735 sshd[4243]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:26.683444 systemd[1]: sshd@29-145.40.90.151:22-139.178.68.195:58150.service: Deactivated successfully.
Dec 13 04:03:26.683791 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 04:03:26.684180 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit.
Dec 13 04:03:26.684753 systemd[1]: Started sshd@30-145.40.90.151:22-139.178.68.195:58164.service.
Dec 13 04:03:26.685217 systemd-logind[1614]: Removed session 13.
Dec 13 04:03:26.721047 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 58164 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:26.721845 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:26.724371 systemd-logind[1614]: New session 14 of user core.
Dec 13 04:03:26.724914 systemd[1]: Started session-14.scope.
Dec 13 04:03:26.891484 sshd[4269]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:26.893514 systemd[1]: sshd@30-145.40.90.151:22-139.178.68.195:58164.service: Deactivated successfully.
Dec 13 04:03:26.893875 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 04:03:26.894199 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit.
Dec 13 04:03:26.894850 systemd[1]: Started sshd@31-145.40.90.151:22-139.178.68.195:58172.service.
Dec 13 04:03:26.895246 systemd-logind[1614]: Removed session 14.
Dec 13 04:03:26.931729 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 58172 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:26.932647 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:26.935467 systemd-logind[1614]: New session 15 of user core.
Dec 13 04:03:26.936051 systemd[1]: Started session-15.scope.
Dec 13 04:03:27.066146 sshd[4293]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:27.069418 systemd[1]: sshd@31-145.40.90.151:22-139.178.68.195:58172.service: Deactivated successfully.
Dec 13 04:03:27.070492 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 04:03:27.071391 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit.
Dec 13 04:03:27.072747 systemd-logind[1614]: Removed session 15.
Dec 13 04:03:32.075314 systemd[1]: Started sshd@32-145.40.90.151:22-139.178.68.195:58186.service.
Dec 13 04:03:32.112497 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 58186 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:32.113252 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:32.115553 systemd-logind[1614]: New session 16 of user core.
Dec 13 04:03:32.116033 systemd[1]: Started session-16.scope.
Dec 13 04:03:32.197598 sshd[4320]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:32.199147 systemd[1]: sshd@32-145.40.90.151:22-139.178.68.195:58186.service: Deactivated successfully.
Dec 13 04:03:32.199576 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 04:03:32.200042 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit.
Dec 13 04:03:32.200520 systemd-logind[1614]: Removed session 16.
Dec 13 04:03:37.201290 systemd[1]: Started sshd@33-145.40.90.151:22-139.178.68.195:55370.service.
Dec 13 04:03:37.238683 sshd[4345]: Accepted publickey for core from 139.178.68.195 port 55370 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:37.239599 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:37.242592 systemd-logind[1614]: New session 17 of user core.
Dec 13 04:03:37.243229 systemd[1]: Started session-17.scope.
Dec 13 04:03:37.335447 sshd[4345]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:37.337056 systemd[1]: sshd@33-145.40.90.151:22-139.178.68.195:55370.service: Deactivated successfully.
Dec 13 04:03:37.337387 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 04:03:37.337809 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit.
Dec 13 04:03:37.338296 systemd[1]: Started sshd@34-145.40.90.151:22-139.178.68.195:55384.service.
Dec 13 04:03:37.338754 systemd-logind[1614]: Removed session 17.
Dec 13 04:03:37.410066 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 55384 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:37.411485 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:37.415978 systemd-logind[1614]: New session 18 of user core.
Dec 13 04:03:37.417051 systemd[1]: Started session-18.scope.
Dec 13 04:03:37.688895 sshd[4367]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:37.696149 systemd[1]: sshd@34-145.40.90.151:22-139.178.68.195:55384.service: Deactivated successfully.
Dec 13 04:03:37.697063 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 04:03:37.697407 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit.
Dec 13 04:03:37.698068 systemd[1]: Started sshd@35-145.40.90.151:22-139.178.68.195:55390.service.
Dec 13 04:03:37.698414 systemd-logind[1614]: Removed session 18.
Dec 13 04:03:37.735366 sshd[4391]: Accepted publickey for core from 139.178.68.195 port 55390 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:37.736257 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:37.739176 systemd-logind[1614]: New session 19 of user core.
Dec 13 04:03:37.739818 systemd[1]: Started session-19.scope.
Dec 13 04:03:38.641406 sshd[4391]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:38.650965 systemd[1]: sshd@35-145.40.90.151:22-139.178.68.195:55390.service: Deactivated successfully.
Dec 13 04:03:38.652228 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 04:03:38.653356 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit.
Dec 13 04:03:38.655537 systemd[1]: Started sshd@36-145.40.90.151:22-139.178.68.195:55404.service.
Dec 13 04:03:38.656859 systemd-logind[1614]: Removed session 19.
Dec 13 04:03:38.701061 sshd[4422]: Accepted publickey for core from 139.178.68.195 port 55404 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:38.702121 sshd[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:38.705404 systemd-logind[1614]: New session 20 of user core.
Dec 13 04:03:38.706183 systemd[1]: Started session-20.scope.
Dec 13 04:03:38.884670 sshd[4422]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:38.886090 systemd[1]: sshd@36-145.40.90.151:22-139.178.68.195:55404.service: Deactivated successfully.
Dec 13 04:03:38.886421 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 04:03:38.886809 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit.
Dec 13 04:03:38.887385 systemd[1]: Started sshd@37-145.40.90.151:22-139.178.68.195:55418.service.
Dec 13 04:03:38.887879 systemd-logind[1614]: Removed session 20.
Dec 13 04:03:38.978314 sshd[4446]: Accepted publickey for core from 139.178.68.195 port 55418 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:38.981652 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:38.992508 systemd-logind[1614]: New session 21 of user core.
Dec 13 04:03:38.995102 systemd[1]: Started session-21.scope.
Dec 13 04:03:39.139482 sshd[4446]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:39.140984 systemd[1]: sshd@37-145.40.90.151:22-139.178.68.195:55418.service: Deactivated successfully.
Dec 13 04:03:39.141405 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 04:03:39.141821 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit.
Dec 13 04:03:39.142311 systemd-logind[1614]: Removed session 21.
Dec 13 04:03:44.151128 systemd[1]: Started sshd@38-145.40.90.151:22-139.178.68.195:55434.service.
Dec 13 04:03:44.226983 sshd[4476]: Accepted publickey for core from 139.178.68.195 port 55434 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:44.228366 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:44.232292 systemd-logind[1614]: New session 22 of user core.
Dec 13 04:03:44.233201 systemd[1]: Started session-22.scope.
Dec 13 04:03:44.321626 sshd[4476]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:44.323236 systemd[1]: sshd@38-145.40.90.151:22-139.178.68.195:55434.service: Deactivated successfully.
Dec 13 04:03:44.323711 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 04:03:44.324112 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit.
Dec 13 04:03:44.324696 systemd-logind[1614]: Removed session 22.
Dec 13 04:03:49.331033 systemd[1]: Started sshd@39-145.40.90.151:22-139.178.68.195:49212.service.
Dec 13 04:03:49.367826 sshd[4501]: Accepted publickey for core from 139.178.68.195 port 49212 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:49.368570 sshd[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:49.370873 systemd-logind[1614]: New session 23 of user core.
Dec 13 04:03:49.371401 systemd[1]: Started session-23.scope.
Dec 13 04:03:49.454284 sshd[4501]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:49.455803 systemd[1]: sshd@39-145.40.90.151:22-139.178.68.195:49212.service: Deactivated successfully.
Dec 13 04:03:49.456206 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 04:03:49.456617 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit.
Dec 13 04:03:49.457115 systemd-logind[1614]: Removed session 23.
Dec 13 04:03:52.843910 systemd[1]: Started sshd@40-145.40.90.151:22-92.27.157.252:41711.service.
Dec 13 04:03:53.682512 sshd[4526]: Invalid user user1 from 92.27.157.252 port 41711
Dec 13 04:03:53.689240 sshd[4526]: pam_faillock(sshd:auth): User unknown
Dec 13 04:03:53.690357 sshd[4526]: pam_unix(sshd:auth): check pass; user unknown
Dec 13 04:03:53.690482 sshd[4526]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.27.157.252
Dec 13 04:03:53.691475 sshd[4526]: pam_faillock(sshd:auth): User unknown
Dec 13 04:03:54.463773 systemd[1]: Started sshd@41-145.40.90.151:22-139.178.68.195:49228.service.
Dec 13 04:03:54.500836 sshd[4529]: Accepted publickey for core from 139.178.68.195 port 49228 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:54.501568 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:54.503952 systemd-logind[1614]: New session 24 of user core.
Dec 13 04:03:54.504397 systemd[1]: Started session-24.scope.
Dec 13 04:03:54.587109 sshd[4529]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:54.588624 systemd[1]: sshd@41-145.40.90.151:22-139.178.68.195:49228.service: Deactivated successfully.
Dec 13 04:03:54.589060 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 04:03:54.589371 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit.
Dec 13 04:03:54.590037 systemd-logind[1614]: Removed session 24.
Dec 13 04:03:55.938112 sshd[4526]: Failed password for invalid user user1 from 92.27.157.252 port 41711 ssh2
Dec 13 04:03:57.426052 systemd[1]: Started sshd@42-145.40.90.151:22-218.92.0.215:7936.service.
Dec 13 04:03:58.120052 sshd[4526]: Received disconnect from 92.27.157.252 port 41711:11: Bye Bye [preauth]
Dec 13 04:03:58.120052 sshd[4526]: Disconnected from invalid user user1 92.27.157.252 port 41711 [preauth]
Dec 13 04:03:58.122640 systemd[1]: sshd@40-145.40.90.151:22-92.27.157.252:41711.service: Deactivated successfully.
Dec 13 04:03:58.391977 sshd[4554]: Unable to negotiate with 218.92.0.215 port 7936: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth]
Dec 13 04:03:58.393781 systemd[1]: sshd@42-145.40.90.151:22-218.92.0.215:7936.service: Deactivated successfully.
Dec 13 04:03:59.597356 systemd[1]: Started sshd@43-145.40.90.151:22-139.178.68.195:41474.service.
Dec 13 04:03:59.634154 sshd[4559]: Accepted publickey for core from 139.178.68.195 port 41474 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:59.634844 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:59.637326 systemd-logind[1614]: New session 25 of user core.
Dec 13 04:03:59.637794 systemd[1]: Started session-25.scope.
Dec 13 04:03:59.719596 sshd[4559]: pam_unix(sshd:session): session closed for user core
Dec 13 04:03:59.721771 systemd[1]: sshd@43-145.40.90.151:22-139.178.68.195:41474.service: Deactivated successfully.
Dec 13 04:03:59.722176 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 04:03:59.722585 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit.
Dec 13 04:03:59.723177 systemd[1]: Started sshd@44-145.40.90.151:22-139.178.68.195:41478.service.
Dec 13 04:03:59.723681 systemd-logind[1614]: Removed session 25.
Dec 13 04:03:59.759844 sshd[4581]: Accepted publickey for core from 139.178.68.195 port 41478 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 04:03:59.760686 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 04:03:59.763549 systemd-logind[1614]: New session 26 of user core.
Dec 13 04:03:59.764415 systemd[1]: Started session-26.scope.
Dec 13 04:04:00.919816 systemd[1]: Started sshd@45-145.40.90.151:22-51.89.216.178:48408.service.
Dec 13 04:04:01.139167 env[1562]: time="2024-12-13T04:04:01.139076040Z" level=info msg="StopContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" with timeout 30 (s)"
Dec 13 04:04:01.140074 env[1562]: time="2024-12-13T04:04:01.139756003Z" level=info msg="Stop container \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" with signal terminated"
Dec 13 04:04:01.160970 systemd[1]: cri-containerd-ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e.scope: Deactivated successfully.
Dec 13 04:04:01.179023 env[1562]: time="2024-12-13T04:04:01.178886305Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 04:04:01.181554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e-rootfs.mount: Deactivated successfully.
Dec 13 04:04:01.185140 env[1562]: time="2024-12-13T04:04:01.185105556Z" level=info msg="StopContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" with timeout 2 (s)"
Dec 13 04:04:01.186961 env[1562]: time="2024-12-13T04:04:01.186925031Z" level=info msg="Stop container \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" with signal terminated"
Dec 13 04:04:01.193311 systemd-networkd[1318]: lxc_health: Link DOWN
Dec 13 04:04:01.193317 systemd-networkd[1318]: lxc_health: Lost carrier
Dec 13 04:04:01.201004 env[1562]: time="2024-12-13T04:04:01.200954586Z" level=info msg="shim disconnected" id=ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e
Dec 13 04:04:01.201136 env[1562]: time="2024-12-13T04:04:01.201006504Z" level=warning msg="cleaning up after shim disconnected" id=ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e namespace=k8s.io
Dec 13 04:04:01.201136 env[1562]: time="2024-12-13T04:04:01.201020374Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:01.208228 env[1562]: time="2024-12-13T04:04:01.208163641Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4653 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:01.209592 env[1562]: time="2024-12-13T04:04:01.209529827Z" level=info msg="StopContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" returns successfully"
Dec 13 04:04:01.210231 env[1562]: time="2024-12-13T04:04:01.210168892Z" level=info msg="StopPodSandbox for \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\""
Dec 13 04:04:01.210317 env[1562]: time="2024-12-13T04:04:01.210239611Z" level=info msg="Container to stop \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.212828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4-shm.mount: Deactivated successfully.
Dec 13 04:04:01.216866 systemd[1]: cri-containerd-a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4.scope: Deactivated successfully.
Dec 13 04:04:01.235827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4-rootfs.mount: Deactivated successfully.
Dec 13 04:04:01.256891 systemd[1]: cri-containerd-05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056.scope: Deactivated successfully.
Dec 13 04:04:01.257170 systemd[1]: cri-containerd-05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056.scope: Consumed 6.403s CPU time.
Dec 13 04:04:01.263992 env[1562]: time="2024-12-13T04:04:01.263939372Z" level=info msg="shim disconnected" id=a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4
Dec 13 04:04:01.264125 env[1562]: time="2024-12-13T04:04:01.263993009Z" level=warning msg="cleaning up after shim disconnected" id=a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4 namespace=k8s.io
Dec 13 04:04:01.264125 env[1562]: time="2024-12-13T04:04:01.264006237Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:01.271166 env[1562]: time="2024-12-13T04:04:01.271113224Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4694 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:01.271540 env[1562]: time="2024-12-13T04:04:01.271506522Z" level=info msg="TearDown network for sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" successfully"
Dec 13 04:04:01.271650 env[1562]: time="2024-12-13T04:04:01.271537001Z" level=info msg="StopPodSandbox for \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" returns successfully"
Dec 13 04:04:01.274944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056-rootfs.mount: Deactivated successfully.
Dec 13 04:04:01.286957 env[1562]: time="2024-12-13T04:04:01.286884937Z" level=info msg="shim disconnected" id=05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056
Dec 13 04:04:01.286957 env[1562]: time="2024-12-13T04:04:01.286930454Z" level=warning msg="cleaning up after shim disconnected" id=05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056 namespace=k8s.io
Dec 13 04:04:01.286957 env[1562]: time="2024-12-13T04:04:01.286945192Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:01.294595 env[1562]: time="2024-12-13T04:04:01.294526340Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4711 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:01.295763 env[1562]: time="2024-12-13T04:04:01.295700759Z" level=info msg="StopContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" returns successfully"
Dec 13 04:04:01.296184 env[1562]: time="2024-12-13T04:04:01.296157611Z" level=info msg="StopPodSandbox for \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\""
Dec 13 04:04:01.296268 env[1562]: time="2024-12-13T04:04:01.296218052Z" level=info msg="Container to stop \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.296268 env[1562]: time="2024-12-13T04:04:01.296237583Z" level=info msg="Container to stop \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.296268 env[1562]: time="2024-12-13T04:04:01.296251715Z" level=info msg="Container to stop \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.296450 env[1562]: time="2024-12-13T04:04:01.296265758Z" level=info msg="Container to stop \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.296450 env[1562]: time="2024-12-13T04:04:01.296278748Z" level=info msg="Container to stop \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 04:04:01.302130 systemd[1]: cri-containerd-18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5.scope: Deactivated successfully.
Dec 13 04:04:01.318881 env[1562]: time="2024-12-13T04:04:01.318830013Z" level=info msg="shim disconnected" id=18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5
Dec 13 04:04:01.318881 env[1562]: time="2024-12-13T04:04:01.318879191Z" level=warning msg="cleaning up after shim disconnected" id=18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5 namespace=k8s.io
Dec 13 04:04:01.319121 env[1562]: time="2024-12-13T04:04:01.318892132Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:01.326258 env[1562]: time="2024-12-13T04:04:01.326224571Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4743 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:01.326587 env[1562]: time="2024-12-13T04:04:01.326529944Z" level=info msg="TearDown network for sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" successfully"
Dec 13 04:04:01.326587 env[1562]: time="2024-12-13T04:04:01.326557434Z" level=info msg="StopPodSandbox for \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" returns successfully"
Dec 13 04:04:01.387961 kubelet[2454]: I1213 04:04:01.387868 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-cgroup\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.387961 kubelet[2454]: I1213 04:04:01.387974 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-lib-modules\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.389406 kubelet[2454]: I1213 04:04:01.388030 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-net\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.389406 kubelet[2454]: I1213 04:04:01.388032 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.389406 kubelet[2454]: I1213 04:04:01.388083 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-kernel\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.389406 kubelet[2454]: I1213 04:04:01.388151 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-hubble-tls\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.389406 kubelet[2454]: I1213 04:04:01.388126 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.390192 kubelet[2454]: I1213 04:04:01.388163 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.390192 kubelet[2454]: I1213 04:04:01.388202 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kr62\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-kube-api-access-4kr62\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.390192 kubelet[2454]: I1213 04:04:01.388192 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.390192 kubelet[2454]: I1213 04:04:01.388258 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-config-path\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.390192 kubelet[2454]: I1213 04:04:01.388303 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-bpf-maps\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.390822 kubelet[2454]: I1213 04:04:01.388344 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-hostproc\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.390822 kubelet[2454]: I1213 04:04:01.388365 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.390822 kubelet[2454]: I1213 04:04:01.388386 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-etc-cni-netd\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.390822 kubelet[2454]: I1213 04:04:01.388488 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.390822 kubelet[2454]: I1213 04:04:01.388508 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388580 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f317399-93d4-4c84-961f-f2a797300b9c-clustermesh-secrets\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388674 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnjk\" (UniqueName: \"kubernetes.io/projected/69933a75-f9e6-4329-b848-5137d6d4be6d-kube-api-access-flnjk\") pod \"69933a75-f9e6-4329-b848-5137d6d4be6d\" (UID: \"69933a75-f9e6-4329-b848-5137d6d4be6d\") "
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388756 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-run\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388858 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69933a75-f9e6-4329-b848-5137d6d4be6d-cilium-config-path\") pod \"69933a75-f9e6-4329-b848-5137d6d4be6d\" (UID: \"69933a75-f9e6-4329-b848-5137d6d4be6d\") "
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388882 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.391389 kubelet[2454]: I1213 04:04:01.388947 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cni-path\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389033 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-xtables-lock\") pod \"4f317399-93d4-4c84-961f-f2a797300b9c\" (UID: \"4f317399-93d4-4c84-961f-f2a797300b9c\") "
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389093 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389167 2454 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-bpf-maps\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389186 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389215 2454 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-hostproc\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392019 kubelet[2454]: I1213 04:04:01.389328 2454 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-etc-cni-netd\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392644 kubelet[2454]: I1213 04:04:01.389393 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-run\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392644 kubelet[2454]: I1213 04:04:01.389484 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-cgroup\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392644 kubelet[2454]: I1213 04:04:01.389540 2454 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-lib-modules\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392644 kubelet[2454]: I1213 04:04:01.389590 2454 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-net\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.392644 kubelet[2454]: I1213 04:04:01.389637 2454 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\""
Dec 13 04:04:01.394677 kubelet[2454]: I1213 04:04:01.394559 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 04:04:01.395189 kubelet[2454]: I1213 04:04:01.395118 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69933a75-f9e6-4329-b848-5137d6d4be6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69933a75-f9e6-4329-b848-5137d6d4be6d" (UID: "69933a75-f9e6-4329-b848-5137d6d4be6d").
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:04:01.395489 kubelet[2454]: I1213 04:04:01.395406 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:01.395700 kubelet[2454]: I1213 04:04:01.395545 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-kube-api-access-4kr62" (OuterVolumeSpecName: "kube-api-access-4kr62") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "kube-api-access-4kr62". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:01.395700 kubelet[2454]: I1213 04:04:01.395625 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f317399-93d4-4c84-961f-f2a797300b9c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f317399-93d4-4c84-961f-f2a797300b9c" (UID: "4f317399-93d4-4c84-961f-f2a797300b9c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:01.395933 kubelet[2454]: I1213 04:04:01.395729 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69933a75-f9e6-4329-b848-5137d6d4be6d-kube-api-access-flnjk" (OuterVolumeSpecName: "kube-api-access-flnjk") pod "69933a75-f9e6-4329-b848-5137d6d4be6d" (UID: "69933a75-f9e6-4329-b848-5137d6d4be6d"). InnerVolumeSpecName "kube-api-access-flnjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490787 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69933a75-f9e6-4329-b848-5137d6d4be6d-cilium-config-path\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490862 2454 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-cni-path\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490891 2454 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f317399-93d4-4c84-961f-f2a797300b9c-xtables-lock\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490924 2454 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-hubble-tls\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490949 2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4kr62\" (UniqueName: \"kubernetes.io/projected/4f317399-93d4-4c84-961f-f2a797300b9c-kube-api-access-4kr62\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.490976 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f317399-93d4-4c84-961f-f2a797300b9c-cilium-config-path\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.491004 2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-flnjk\" (UniqueName: \"kubernetes.io/projected/69933a75-f9e6-4329-b848-5137d6d4be6d-kube-api-access-flnjk\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.491020 kubelet[2454]: I1213 04:04:01.491032 2454 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f317399-93d4-4c84-961f-f2a797300b9c-clustermesh-secrets\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:01.720867 sshd[4605]: Invalid user text001 from 51.89.216.178 port 48408 Dec 13 04:04:01.727409 sshd[4605]: pam_faillock(sshd:auth): User unknown Dec 13 04:04:01.728505 sshd[4605]: pam_unix(sshd:auth): check pass; user unknown Dec 13 04:04:01.728598 sshd[4605]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=51.89.216.178 Dec 13 04:04:01.729558 sshd[4605]: pam_faillock(sshd:auth): User unknown Dec 13 04:04:02.065540 systemd[1]: Removed slice kubepods-burstable-pod4f317399_93d4_4c84_961f_f2a797300b9c.slice. Dec 13 04:04:02.065668 systemd[1]: kubepods-burstable-pod4f317399_93d4_4c84_961f_f2a797300b9c.slice: Consumed 6.464s CPU time. Dec 13 04:04:02.066413 systemd[1]: Removed slice kubepods-besteffort-pod69933a75_f9e6_4329_b848_5137d6d4be6d.slice. Dec 13 04:04:02.160183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5-rootfs.mount: Deactivated successfully. 
Dec 13 04:04:02.160241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5-shm.mount: Deactivated successfully. Dec 13 04:04:02.160279 systemd[1]: var-lib-kubelet-pods-69933a75\x2df9e6\x2d4329\x2db848\x2d5137d6d4be6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dflnjk.mount: Deactivated successfully. Dec 13 04:04:02.160314 systemd[1]: var-lib-kubelet-pods-4f317399\x2d93d4\x2d4c84\x2d961f\x2df2a797300b9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4kr62.mount: Deactivated successfully. Dec 13 04:04:02.160346 systemd[1]: var-lib-kubelet-pods-4f317399\x2d93d4\x2d4c84\x2d961f\x2df2a797300b9c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:04:02.160377 systemd[1]: var-lib-kubelet-pods-4f317399\x2d93d4\x2d4c84\x2d961f\x2df2a797300b9c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:04:02.230939 kubelet[2454]: I1213 04:04:02.230841 2454 scope.go:117] "RemoveContainer" containerID="05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056" Dec 13 04:04:02.233809 env[1562]: time="2024-12-13T04:04:02.233729368Z" level=info msg="RemoveContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\"" Dec 13 04:04:02.239421 env[1562]: time="2024-12-13T04:04:02.239343660Z" level=info msg="RemoveContainer for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" returns successfully" Dec 13 04:04:02.239945 kubelet[2454]: I1213 04:04:02.239881 2454 scope.go:117] "RemoveContainer" containerID="ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f" Dec 13 04:04:02.242530 env[1562]: time="2024-12-13T04:04:02.242421135Z" level=info msg="RemoveContainer for \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\"" Dec 13 04:04:02.246902 env[1562]: time="2024-12-13T04:04:02.246821875Z" level=info msg="RemoveContainer for \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\" returns successfully" Dec 13 04:04:02.247200 kubelet[2454]: I1213 04:04:02.247159 2454 scope.go:117] "RemoveContainer" containerID="5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732" Dec 13 04:04:02.249722 env[1562]: time="2024-12-13T04:04:02.249606322Z" level=info msg="RemoveContainer for \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\"" Dec 13 04:04:02.254175 env[1562]: time="2024-12-13T04:04:02.254098428Z" level=info msg="RemoveContainer for \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\" returns successfully" Dec 13 04:04:02.254517 kubelet[2454]: I1213 04:04:02.254463 2454 scope.go:117] "RemoveContainer" containerID="dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279" Dec 13 04:04:02.256971 env[1562]: time="2024-12-13T04:04:02.256870007Z" level=info msg="RemoveContainer for \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\"" Dec 13 04:04:02.261287 env[1562]: time="2024-12-13T04:04:02.261178540Z" level=info msg="RemoveContainer for \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\" returns successfully" Dec 13 04:04:02.261633 kubelet[2454]: I1213 04:04:02.261551 2454 scope.go:117] "RemoveContainer" containerID="d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f" Dec 13 04:04:02.264174 env[1562]: time="2024-12-13T04:04:02.264079092Z" level=info msg="RemoveContainer for \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\"" Dec 13 
04:04:02.269226 env[1562]: time="2024-12-13T04:04:02.269154405Z" level=info msg="RemoveContainer for \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\" returns successfully" Dec 13 04:04:02.269622 kubelet[2454]: I1213 04:04:02.269571 2454 scope.go:117] "RemoveContainer" containerID="05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056" Dec 13 04:04:02.270219 env[1562]: time="2024-12-13T04:04:02.270051174Z" level=error msg="ContainerStatus for \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\": not found" Dec 13 04:04:02.270574 kubelet[2454]: E1213 04:04:02.270519 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\": not found" containerID="05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056" Dec 13 04:04:02.270784 kubelet[2454]: I1213 04:04:02.270594 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056"} err="failed to get container status \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\": rpc error: code = NotFound desc = an error occurred when try to find container \"05e8a4b14fda32eb02a9d673efe297b7647f05073d55b486a90ffb15d8792056\": not found" Dec 13 04:04:02.270939 kubelet[2454]: I1213 04:04:02.270791 2454 scope.go:117] "RemoveContainer" containerID="ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f" Dec 13 04:04:02.271512 env[1562]: time="2024-12-13T04:04:02.271306150Z" level=error msg="ContainerStatus for \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\": not found" Dec 13 04:04:02.271845 kubelet[2454]: E1213 04:04:02.271793 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\": not found" containerID="ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f" Dec 13 04:04:02.272013 kubelet[2454]: I1213 04:04:02.271860 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f"} err="failed to get container status \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec9b686f1bba2b395bb7aff8be3375abb358625e164b0e6abcee0264aaebbd2f\": not found" Dec 13 04:04:02.272013 kubelet[2454]: I1213 04:04:02.271911 2454 scope.go:117] "RemoveContainer" containerID="5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732" Dec 13 04:04:02.272592 env[1562]: time="2024-12-13T04:04:02.272439540Z" level=error msg="ContainerStatus for \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\": not found" Dec 13 04:04:02.272897 kubelet[2454]: E1213 
04:04:02.272843 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\": not found" containerID="5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732" Dec 13 04:04:02.273064 kubelet[2454]: I1213 04:04:02.272912 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732"} err="failed to get container status \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c56f9f5f23274c6a9ecd0389759c856f6f0792aa54b6eb5a0283f4069402732\": not found" Dec 13 04:04:02.273064 kubelet[2454]: I1213 04:04:02.272962 2454 scope.go:117] "RemoveContainer" containerID="dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279" Dec 13 04:04:02.273674 env[1562]: time="2024-12-13T04:04:02.273496627Z" level=error msg="ContainerStatus for \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\": not found" Dec 13 04:04:02.273939 kubelet[2454]: E1213 04:04:02.273897 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\": not found" containerID="dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279" Dec 13 04:04:02.274125 kubelet[2454]: I1213 04:04:02.273959 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279"} err="failed to get container status \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\": rpc error: code = NotFound desc = an error occurred when try to find container \"dded5cc15a09f57324e96cc713af7d19ea9b77fcbddf527997e9246f8b030279\": not found" Dec 13 04:04:02.274125 kubelet[2454]: I1213 04:04:02.274006 2454 scope.go:117] "RemoveContainer" containerID="d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f" Dec 13 04:04:02.274677 env[1562]: time="2024-12-13T04:04:02.274540489Z" level=error msg="ContainerStatus for \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\": not found" Dec 13 04:04:02.274991 kubelet[2454]: E1213 04:04:02.274940 2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\": not found" containerID="d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f" Dec 13 04:04:02.275148 kubelet[2454]: I1213 04:04:02.275011 2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f"} err="failed to get container status \"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d5fbcf38d633c99cdf918a26d071fc5f597a5d48f99e3b8f72516ae9db8cbb0f\": not found" Dec 13 04:04:02.275148 kubelet[2454]: I1213 04:04:02.275058 2454 scope.go:117] "RemoveContainer" containerID="ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e" Dec 13 04:04:02.279034 env[1562]: time="2024-12-13T04:04:02.278932799Z" level=info msg="RemoveContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\"" Dec 13 04:04:02.284165 env[1562]: time="2024-12-13T04:04:02.284092915Z" level=info msg="RemoveContainer for \"ef2d8805e30c20274908bc390229f6933f98d2e62d1a9dee15e234dcb0c4673e\" returns successfully" Dec 13 04:04:03.077244 sshd[4581]: pam_unix(sshd:session): session closed for user core Dec 13 04:04:03.079026 systemd[1]: sshd@44-145.40.90.151:22-139.178.68.195:41478.service: Deactivated successfully. Dec 13 04:04:03.079345 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 04:04:03.079806 systemd-logind[1614]: Session 26 logged out. Waiting for processes to exit. Dec 13 04:04:03.080362 systemd[1]: Started sshd@46-145.40.90.151:22-139.178.68.195:41490.service. Dec 13 04:04:03.080975 systemd-logind[1614]: Removed session 26. Dec 13 04:04:03.117604 sshd[4761]: Accepted publickey for core from 139.178.68.195 port 41490 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:04:03.118296 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:04:03.120825 systemd-logind[1614]: New session 27 of user core. Dec 13 04:04:03.121326 systemd[1]: Started session-27.scope. Dec 13 04:04:03.753805 sshd[4761]: pam_unix(sshd:session): session closed for user core Dec 13 04:04:03.756042 systemd[1]: sshd@46-145.40.90.151:22-139.178.68.195:41490.service: Deactivated successfully. Dec 13 04:04:03.756572 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 04:04:03.757127 systemd-logind[1614]: Session 27 logged out. Waiting for processes to exit. Dec 13 04:04:03.758322 systemd[1]: Started sshd@47-145.40.90.151:22-139.178.68.195:41498.service. Dec 13 04:04:03.759621 systemd-logind[1614]: Removed session 27. 
Dec 13 04:04:03.767577 kubelet[2454]: E1213 04:04:03.767556 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="apply-sysctl-overwrites" Dec 13 04:04:03.767577 kubelet[2454]: E1213 04:04:03.767573 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69933a75-f9e6-4329-b848-5137d6d4be6d" containerName="cilium-operator" Dec 13 04:04:03.767577 kubelet[2454]: E1213 04:04:03.767577 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="clean-cilium-state" Dec 13 04:04:03.767577 kubelet[2454]: E1213 04:04:03.767581 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="cilium-agent" Dec 13 04:04:03.767577 kubelet[2454]: E1213 04:04:03.767584 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="mount-cgroup" Dec 13 04:04:03.767957 kubelet[2454]: E1213 04:04:03.767588 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="mount-bpf-fs" Dec 13 04:04:03.767957 kubelet[2454]: I1213 04:04:03.767603 2454 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" containerName="cilium-agent" Dec 13 04:04:03.767957 kubelet[2454]: I1213 04:04:03.767607 2454 memory_manager.go:354] "RemoveStaleState removing state" podUID="69933a75-f9e6-4329-b848-5137d6d4be6d" containerName="cilium-operator" Dec 13 04:04:03.771234 systemd[1]: Created slice kubepods-burstable-pod6dd90e50_fd16_434b_bb49_0370f5c04734.slice. Dec 13 04:04:03.798771 sshd[4784]: Accepted publickey for core from 139.178.68.195 port 41498 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:04:03.799597 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:04:03.802078 systemd-logind[1614]: New session 28 of user core. Dec 13 04:04:03.802792 systemd[1]: Started session-28.scope. 
Dec 13 04:04:03.804403 kubelet[2454]: I1213 04:04:03.804359 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-hubble-tls\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804403 kubelet[2454]: I1213 04:04:03.804380 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-xtables-lock\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804403 kubelet[2454]: I1213 04:04:03.804391 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-net\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804403 kubelet[2454]: I1213 04:04:03.804402 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-kernel\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804414 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-bpf-maps\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804432 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-lib-modules\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804442 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-config-path\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804451 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-run\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804459 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-hostproc\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804537 kubelet[2454]: I1213 04:04:03.804467 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-cgroup\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804736 kubelet[2454]: I1213 04:04:03.804475 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-etc-cni-netd\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804736 kubelet[2454]: I1213 04:04:03.804483 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-ipsec-secrets\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804736 kubelet[2454]: I1213 04:04:03.804511 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-clustermesh-secrets\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804736 kubelet[2454]: I1213 04:04:03.804537 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cni-path\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.804736 kubelet[2454]: I1213 04:04:03.804550 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqb9\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-kube-api-access-6vqb9\") pod \"cilium-t7kxl\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " pod="kube-system/cilium-t7kxl" Dec 13 04:04:03.912181 sshd[4784]: pam_unix(sshd:session): session closed for user core Dec 13 04:04:03.919801 systemd[1]: sshd@47-145.40.90.151:22-139.178.68.195:41498.service: Deactivated successfully. Dec 13 04:04:03.920326 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 04:04:03.920805 systemd-logind[1614]: Session 28 logged out. Waiting for processes to exit. Dec 13 04:04:03.922634 env[1562]: time="2024-12-13T04:04:03.922606903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7kxl,Uid:6dd90e50-fd16-434b-bb49-0370f5c04734,Namespace:kube-system,Attempt:0,}" Dec 13 04:04:03.924326 systemd[1]: Started sshd@48-145.40.90.151:22-139.178.68.195:41502.service. Dec 13 04:04:03.924793 systemd-logind[1614]: Removed session 28. Dec 13 04:04:03.929519 env[1562]: time="2024-12-13T04:04:03.929469109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 04:04:03.929519 env[1562]: time="2024-12-13T04:04:03.929501423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 04:04:03.929519 env[1562]: time="2024-12-13T04:04:03.929512095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 04:04:03.929697 env[1562]: time="2024-12-13T04:04:03.929653546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40 pid=4821 runtime=io.containerd.runc.v2 Dec 13 04:04:03.939182 systemd[1]: Started cri-containerd-b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40.scope. Dec 13 04:04:03.953073 env[1562]: time="2024-12-13T04:04:03.953042007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t7kxl,Uid:6dd90e50-fd16-434b-bb49-0370f5c04734,Namespace:kube-system,Attempt:0,} returns sandbox id \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\"" Dec 13 04:04:03.954603 env[1562]: time="2024-12-13T04:04:03.954581755Z" level=info msg="CreateContainer within sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 04:04:03.965481 sshd[4813]: Accepted publickey for core from 139.178.68.195 port 41502 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 04:04:03.966569 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 04:04:03.969826 systemd-logind[1614]: New session 29 of user core. Dec 13 04:04:03.970684 systemd[1]: Started session-29.scope. Dec 13 04:04:03.979656 env[1562]: time="2024-12-13T04:04:03.979596569Z" level=info msg="CreateContainer within sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\"" Dec 13 04:04:03.979962 env[1562]: time="2024-12-13T04:04:03.979941383Z" level=info msg="StartContainer for \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\"" Dec 13 04:04:03.990403 systemd[1]: Started cri-containerd-7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85.scope. Dec 13 04:04:03.997950 systemd[1]: cri-containerd-7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85.scope: Deactivated successfully. Dec 13 04:04:03.998141 systemd[1]: Stopped cri-containerd-7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85.scope. 
Dec 13 04:04:04.015347 env[1562]: time="2024-12-13T04:04:04.015266855Z" level=info msg="shim disconnected" id=7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85 Dec 13 04:04:04.015347 env[1562]: time="2024-12-13T04:04:04.015315332Z" level=warning msg="cleaning up after shim disconnected" id=7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85 namespace=k8s.io Dec 13 04:04:04.015347 env[1562]: time="2024-12-13T04:04:04.015326498Z" level=info msg="cleaning up dead shim" Dec 13 04:04:04.022380 env[1562]: time="2024-12-13T04:04:04.022322758Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4883 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T04:04:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 04:04:04.022782 env[1562]: time="2024-12-13T04:04:04.022633485Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Dec 13 04:04:04.022937 env[1562]: time="2024-12-13T04:04:04.022872720Z" level=error msg="Failed to pipe stdout of container \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\"" error="reading from a closed fifo" Dec 13 04:04:04.023041 env[1562]: time="2024-12-13T04:04:04.022940838Z" level=error msg="Failed to pipe stderr of container \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\"" error="reading from a closed fifo" Dec 13 04:04:04.023809 env[1562]: time="2024-12-13T04:04:04.023720047Z" level=error msg="StartContainer for \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 04:04:04.024065 kubelet[2454]: E1213 04:04:04.023990 2454 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85" Dec 13 04:04:04.024197 kubelet[2454]: E1213 04:04:04.024170 2454 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 04:04:04.024197 kubelet[2454]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 04:04:04.024197 kubelet[2454]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 04:04:04.024197 kubelet[2454]: rm /hostbin/cilium-mount Dec 13 04:04:04.024384 kubelet[2454]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vqb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t7kxl_kube-system(6dd90e50-fd16-434b-bb49-0370f5c04734): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 04:04:04.024384 kubelet[2454]: > logger="UnhandledError" Dec 13 04:04:04.025443 kubelet[2454]: E1213 04:04:04.025352 2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t7kxl" podUID="6dd90e50-fd16-434b-bb49-0370f5c04734" Dec 13 04:04:04.055802 kubelet[2454]: I1213 04:04:04.055771 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f317399-93d4-4c84-961f-f2a797300b9c" path="/var/lib/kubelet/pods/4f317399-93d4-4c84-961f-f2a797300b9c/volumes" Dec 13 04:04:04.056359 kubelet[2454]: I1213 04:04:04.056344 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69933a75-f9e6-4329-b848-5137d6d4be6d" path="/var/lib/kubelet/pods/69933a75-f9e6-4329-b848-5137d6d4be6d/volumes" Dec 13 04:04:04.071504 sshd[4605]: Failed password for invalid user text001 from 51.89.216.178 port 48408 ssh2 Dec 13 04:04:04.243001 env[1562]: time="2024-12-13T04:04:04.242855193Z" level=info msg="StopPodSandbox for \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\"" Dec 13 04:04:04.243332 env[1562]: time="2024-12-13T04:04:04.243018937Z" level=info msg="Container to stop \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 04:04:04.256567 systemd[1]: 
cri-containerd-b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40.scope: Deactivated successfully. Dec 13 04:04:04.292569 env[1562]: time="2024-12-13T04:04:04.292366298Z" level=info msg="shim disconnected" id=b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40 Dec 13 04:04:04.292569 env[1562]: time="2024-12-13T04:04:04.292488561Z" level=warning msg="cleaning up after shim disconnected" id=b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40 namespace=k8s.io Dec 13 04:04:04.292569 env[1562]: time="2024-12-13T04:04:04.292517589Z" level=info msg="cleaning up dead shim" Dec 13 04:04:04.304761 env[1562]: time="2024-12-13T04:04:04.304664615Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4931 runtime=io.containerd.runc.v2\n" Dec 13 04:04:04.305281 env[1562]: time="2024-12-13T04:04:04.305152341Z" level=info msg="TearDown network for sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" successfully" Dec 13 04:04:04.305281 env[1562]: time="2024-12-13T04:04:04.305230081Z" level=info msg="StopPodSandbox for \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" returns successfully" Dec 13 04:04:04.409089 kubelet[2454]: I1213 04:04:04.408952 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-run\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409089 kubelet[2454]: I1213 04:04:04.409059 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-etc-cni-netd\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409087 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409125 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-clustermesh-secrets\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409174 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-cgroup\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409196 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409228 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vqb9\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-kube-api-access-6vqb9\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409364 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-net\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409489 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-config-path\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409356 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409470 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409566 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-hubble-tls\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.409628 kubelet[2454]: I1213 04:04:04.409617 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-lib-modules\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409685 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-hostproc\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409762 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-xtables-lock\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409754 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409827 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-hostproc" (OuterVolumeSpecName: "hostproc") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409838 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-bpf-maps\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409872 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409967 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-ipsec-secrets\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.409898 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410034 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-kernel\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410087 2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cni-path\") pod \"6dd90e50-fd16-434b-bb49-0370f5c04734\" (UID: \"6dd90e50-fd16-434b-bb49-0370f5c04734\") " Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410171 2454 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-etc-cni-netd\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410202 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-run\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410176 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410229 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-cgroup\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410256 2454 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-net\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.411029 kubelet[2454]: I1213 04:04:04.410282 2454 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-lib-modules\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.412773 kubelet[2454]: I1213 04:04:04.410279 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cni-path" (OuterVolumeSpecName: "cni-path") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 04:04:04.412773 kubelet[2454]: I1213 04:04:04.410309 2454 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-hostproc\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.412773 kubelet[2454]: I1213 04:04:04.410402 2454 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-xtables-lock\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.412773 kubelet[2454]: I1213 04:04:04.410489 2454 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-bpf-maps\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.415136 kubelet[2454]: I1213 04:04:04.415069 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 04:04:04.416261 kubelet[2454]: I1213 04:04:04.416163 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-kube-api-access-6vqb9" (OuterVolumeSpecName: "kube-api-access-6vqb9") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "kube-api-access-6vqb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:04.416543 kubelet[2454]: I1213 04:04:04.416283 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:04.416688 kubelet[2454]: I1213 04:04:04.416552 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 04:04:04.416688 kubelet[2454]: I1213 04:04:04.416565 2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6dd90e50-fd16-434b-bb49-0370f5c04734" (UID: "6dd90e50-fd16-434b-bb49-0370f5c04734"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 04:04:04.511266 kubelet[2454]: I1213 04:04:04.511143 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-config-path\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511266 kubelet[2454]: I1213 04:04:04.511209 2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6vqb9\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-kube-api-access-6vqb9\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511266 kubelet[2454]: I1213 04:04:04.511241 2454 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6dd90e50-fd16-434b-bb49-0370f5c04734-hubble-tls\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511266 kubelet[2454]: I1213 04:04:04.511271 2454 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511988 kubelet[2454]: I1213 04:04:04.511301 2454 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511988 kubelet[2454]: I1213 04:04:04.511332 2454 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6dd90e50-fd16-434b-bb49-0370f5c04734-cni-path\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.511988 kubelet[2454]: I1213 04:04:04.511359 2454 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6dd90e50-fd16-434b-bb49-0370f5c04734-clustermesh-secrets\") on node \"ci-3510.3.6-a-840ab18f38\" DevicePath \"\"" Dec 13 04:04:04.912931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40-rootfs.mount: Deactivated successfully. Dec 13 04:04:04.913035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40-shm.mount: Deactivated successfully. Dec 13 04:04:04.913068 systemd[1]: var-lib-kubelet-pods-6dd90e50\x2dfd16\x2d434b\x2dbb49\x2d0370f5c04734-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vqb9.mount: Deactivated successfully. 
Dec 13 04:04:04.913100 systemd[1]: var-lib-kubelet-pods-6dd90e50\x2dfd16\x2d434b\x2dbb49\x2d0370f5c04734-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 04:04:04.913131 systemd[1]: var-lib-kubelet-pods-6dd90e50\x2dfd16\x2d434b\x2dbb49\x2d0370f5c04734-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 04:04:04.913162 systemd[1]: var-lib-kubelet-pods-6dd90e50\x2dfd16\x2d434b\x2dbb49\x2d0370f5c04734-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 04:04:05.187882 sshd[4605]: Received disconnect from 51.89.216.178 port 48408:11: Bye Bye [preauth] Dec 13 04:04:05.187882 sshd[4605]: Disconnected from invalid user text001 51.89.216.178 port 48408 [preauth] Dec 13 04:04:05.188608 systemd[1]: sshd@45-145.40.90.151:22-51.89.216.178:48408.service: Deactivated successfully. Dec 13 04:04:05.247983 kubelet[2454]: I1213 04:04:05.247910 2454 scope.go:117] "RemoveContainer" containerID="7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85" Dec 13 04:04:05.250639 env[1562]: time="2024-12-13T04:04:05.250525340Z" level=info msg="RemoveContainer for \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\"" Dec 13 04:04:05.254458 systemd[1]: Removed slice kubepods-burstable-pod6dd90e50_fd16_434b_bb49_0370f5c04734.slice. Dec 13 04:04:05.266410 env[1562]: time="2024-12-13T04:04:05.266343117Z" level=info msg="RemoveContainer for \"7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85\" returns successfully" Dec 13 04:04:05.274192 kubelet[2454]: E1213 04:04:05.273664 2454 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6dd90e50-fd16-434b-bb49-0370f5c04734" containerName="mount-cgroup" Dec 13 04:04:05.274192 kubelet[2454]: I1213 04:04:05.273714 2454 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd90e50-fd16-434b-bb49-0370f5c04734" containerName="mount-cgroup" Dec 13 04:04:05.277294 systemd[1]: Created slice kubepods-burstable-pode5ff0195_9188_4bad_9142_79a544d91252.slice. 
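[Editor's note] The var-lib-kubelet-pods-....mount units deactivated above show systemd's unit-name escaping: in an escaped path, "/" becomes "-", "-" becomes \x2d, and "~" becomes \x7e. A minimal sketch of how one of those unit names is derived, assuming the standard systemd-escape tool is available on the node:

    # Derive the .mount unit name for the deleted pod's kube-api-access
    # volume (pod UID and volume name taken from the log entries above).
    systemd-escape --path --suffix=mount \
      /var/lib/kubelet/pods/6dd90e50-fd16-434b-bb49-0370f5c04734/volumes/kubernetes.io~projected/kube-api-access-6vqb9
    # Expected output matches the unit systemd just deactivated:
    # var-lib-kubelet-pods-6dd90e50\x2dfd16\x2d434b\x2dbb49\x2d0370f5c04734-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6vqb9.mount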
Dec 13 04:04:05.316797 kubelet[2454]: I1213 04:04:05.316751 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e5ff0195-9188-4bad-9142-79a544d91252-hubble-tls\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.316797 kubelet[2454]: I1213 04:04:05.316795 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e5ff0195-9188-4bad-9142-79a544d91252-cilium-config-path\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316816 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5t94\" (UniqueName: \"kubernetes.io/projected/e5ff0195-9188-4bad-9142-79a544d91252-kube-api-access-g5t94\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316837 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-hostproc\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316883 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-cilium-run\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316920 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-lib-modules\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316940 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-host-proc-sys-net\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316962 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-xtables-lock\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.316992 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-bpf-maps\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317018 kubelet[2454]: I1213 04:04:05.317017 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-etc-cni-netd\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317462 kubelet[2454]: I1213 04:04:05.317040 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-cni-path\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317462 kubelet[2454]: I1213 04:04:05.317066 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-host-proc-sys-kernel\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317462 kubelet[2454]: I1213 04:04:05.317125 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e5ff0195-9188-4bad-9142-79a544d91252-clustermesh-secrets\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317462 kubelet[2454]: I1213 04:04:05.317174 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e5ff0195-9188-4bad-9142-79a544d91252-cilium-ipsec-secrets\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.317462 kubelet[2454]: I1213 04:04:05.317217 2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e5ff0195-9188-4bad-9142-79a544d91252-cilium-cgroup\") pod \"cilium-f6sdl\" (UID: \"e5ff0195-9188-4bad-9142-79a544d91252\") " pod="kube-system/cilium-f6sdl"
Dec 13 04:04:05.580030 env[1562]: time="2024-12-13T04:04:05.579797824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f6sdl,Uid:e5ff0195-9188-4bad-9142-79a544d91252,Namespace:kube-system,Attempt:0,}"
Dec 13 04:04:05.595663 env[1562]: time="2024-12-13T04:04:05.595586385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 04:04:05.595663 env[1562]: time="2024-12-13T04:04:05.595606173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 04:04:05.595663 env[1562]: time="2024-12-13T04:04:05.595612737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 04:04:05.595857 env[1562]: time="2024-12-13T04:04:05.595745248Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d pid=4960 runtime=io.containerd.runc.v2
Dec 13 04:04:05.601506 systemd[1]: Started cri-containerd-de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d.scope.
Dec 13 04:04:05.613553 env[1562]: time="2024-12-13T04:04:05.613527884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f6sdl,Uid:e5ff0195-9188-4bad-9142-79a544d91252,Namespace:kube-system,Attempt:0,} returns sandbox id \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\""
Dec 13 04:04:05.614626 env[1562]: time="2024-12-13T04:04:05.614610525Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 04:04:05.645862 env[1562]: time="2024-12-13T04:04:05.645737455Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7\""
Dec 13 04:04:05.646662 env[1562]: time="2024-12-13T04:04:05.646579102Z" level=info msg="StartContainer for \"d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7\""
Dec 13 04:04:05.682310 systemd[1]: Started cri-containerd-d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7.scope.
Dec 13 04:04:05.728050 env[1562]: time="2024-12-13T04:04:05.727937490Z" level=info msg="StartContainer for \"d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7\" returns successfully"
Dec 13 04:04:05.745542 systemd[1]: cri-containerd-d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7.scope: Deactivated successfully.
Dec 13 04:04:05.775845 env[1562]: time="2024-12-13T04:04:05.775753973Z" level=info msg="shim disconnected" id=d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7
Dec 13 04:04:05.775845 env[1562]: time="2024-12-13T04:04:05.775812692Z" level=warning msg="cleaning up after shim disconnected" id=d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7 namespace=k8s.io
Dec 13 04:04:05.775845 env[1562]: time="2024-12-13T04:04:05.775825609Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:05.783107 env[1562]: time="2024-12-13T04:04:05.783071008Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5041 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:06.055031 kubelet[2454]: I1213 04:04:06.054984 2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd90e50-fd16-434b-bb49-0370f5c04734" path="/var/lib/kubelet/pods/6dd90e50-fd16-434b-bb49-0370f5c04734/volumes"
Dec 13 04:04:06.056646 env[1562]: time="2024-12-13T04:04:06.056628706Z" level=info msg="StopPodSandbox for \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\""
Dec 13 04:04:06.056726 env[1562]: time="2024-12-13T04:04:06.056676355Z" level=info msg="TearDown network for sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" successfully"
Dec 13 04:04:06.056726 env[1562]: time="2024-12-13T04:04:06.056697863Z" level=info msg="StopPodSandbox for \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" returns successfully"
Dec 13 04:04:06.056892 env[1562]: time="2024-12-13T04:04:06.056852771Z" level=info msg="RemovePodSandbox for \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\""
Dec 13 04:04:06.056892 env[1562]: time="2024-12-13T04:04:06.056865495Z" level=info msg="Forcibly stopping sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\""
Dec 13 04:04:06.056951 env[1562]: time="2024-12-13T04:04:06.056895441Z" level=info msg="TearDown network for sandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" successfully"
Dec 13 04:04:06.058039 env[1562]: time="2024-12-13T04:04:06.057999038Z" level=info msg="RemovePodSandbox \"b42812f4b31410caff3e512f54cb6f3f006811145182ecbeae839f1dc3a48c40\" returns successfully"
Dec 13 04:04:06.058164 env[1562]: time="2024-12-13T04:04:06.058126931Z" level=info msg="StopPodSandbox for \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\""
Dec 13 04:04:06.058192 env[1562]: time="2024-12-13T04:04:06.058170835Z" level=info msg="TearDown network for sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" successfully"
Dec 13 04:04:06.058192 env[1562]: time="2024-12-13T04:04:06.058187653Z" level=info msg="StopPodSandbox for \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" returns successfully"
Dec 13 04:04:06.058304 env[1562]: time="2024-12-13T04:04:06.058293379Z" level=info msg="RemovePodSandbox for \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\""
Dec 13 04:04:06.058325 env[1562]: time="2024-12-13T04:04:06.058307081Z" level=info msg="Forcibly stopping sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\""
Dec 13 04:04:06.058349 env[1562]: time="2024-12-13T04:04:06.058341841Z" level=info msg="TearDown network for sandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" successfully"
Dec 13 04:04:06.059631 env[1562]: time="2024-12-13T04:04:06.059599391Z" level=info msg="RemovePodSandbox \"18cc1c7a80eb3a311f8071d9e03463d4b8df85fd4fa68ca5c7b93f41e01ab4c5\" returns successfully"
Dec 13 04:04:06.059815 env[1562]: time="2024-12-13T04:04:06.059764207Z" level=info msg="StopPodSandbox for \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\""
Dec 13 04:04:06.059853 env[1562]: time="2024-12-13T04:04:06.059803307Z" level=info msg="TearDown network for sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" successfully"
Dec 13 04:04:06.059853 env[1562]: time="2024-12-13T04:04:06.059821995Z" level=info msg="StopPodSandbox for \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" returns successfully"
Dec 13 04:04:06.060000 env[1562]: time="2024-12-13T04:04:06.059951844Z" level=info msg="RemovePodSandbox for \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\""
Dec 13 04:04:06.060000 env[1562]: time="2024-12-13T04:04:06.059967766Z" level=info msg="Forcibly stopping sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\""
Dec 13 04:04:06.060060 env[1562]: time="2024-12-13T04:04:06.060003830Z" level=info msg="TearDown network for sandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" successfully"
Dec 13 04:04:06.061191 env[1562]: time="2024-12-13T04:04:06.061179448Z" level=info msg="RemovePodSandbox \"a8c30e0edbc5946b884034bd6ca810494d7f1b8d9a23eb0a7e87acc76d32f7f4\" returns successfully"
Dec 13 04:04:06.176464 kubelet[2454]: E1213 04:04:06.176329 2454 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 04:04:06.259272 env[1562]: time="2024-12-13T04:04:06.259159466Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 04:04:06.280420 env[1562]: time="2024-12-13T04:04:06.280280026Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97\""
Dec 13 04:04:06.281337 env[1562]: time="2024-12-13T04:04:06.281250772Z" level=info msg="StartContainer for \"f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97\""
Dec 13 04:04:06.308406 systemd[1]: Started cri-containerd-f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97.scope.
Dec 13 04:04:06.326850 env[1562]: time="2024-12-13T04:04:06.326807224Z" level=info msg="StartContainer for \"f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97\" returns successfully"
Dec 13 04:04:06.333140 systemd[1]: cri-containerd-f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97.scope: Deactivated successfully.
Dec 13 04:04:06.364545 env[1562]: time="2024-12-13T04:04:06.364413698Z" level=info msg="shim disconnected" id=f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97
Dec 13 04:04:06.364957 env[1562]: time="2024-12-13T04:04:06.364547311Z" level=warning msg="cleaning up after shim disconnected" id=f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97 namespace=k8s.io
Dec 13 04:04:06.364957 env[1562]: time="2024-12-13T04:04:06.364579744Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:06.380923 env[1562]: time="2024-12-13T04:04:06.380798041Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5104 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:06.913405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97-rootfs.mount: Deactivated successfully.
Dec 13 04:04:07.122352 kubelet[2454]: W1213 04:04:07.122221 2454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd90e50_fd16_434b_bb49_0370f5c04734.slice/cri-containerd-7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85.scope WatchSource:0}: container "7bfac9d09c1939cffe4c1531d10d1284ca797fcdf5f89ba741f249cf7bd87d85" in namespace "k8s.io": not found
Dec 13 04:04:07.267054 env[1562]: time="2024-12-13T04:04:07.266797645Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 04:04:07.299415 env[1562]: time="2024-12-13T04:04:07.299390599Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16\""
Dec 13 04:04:07.299853 env[1562]: time="2024-12-13T04:04:07.299782934Z" level=info msg="StartContainer for \"d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16\""
Dec 13 04:04:07.310440 systemd[1]: Started cri-containerd-d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16.scope.
Dec 13 04:04:07.322938 env[1562]: time="2024-12-13T04:04:07.322915130Z" level=info msg="StartContainer for \"d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16\" returns successfully"
Dec 13 04:04:07.324467 systemd[1]: cri-containerd-d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16.scope: Deactivated successfully.
Dec 13 04:04:07.335724 env[1562]: time="2024-12-13T04:04:07.335695509Z" level=info msg="shim disconnected" id=d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16
Dec 13 04:04:07.335724 env[1562]: time="2024-12-13T04:04:07.335724174Z" level=warning msg="cleaning up after shim disconnected" id=d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16 namespace=k8s.io
Dec 13 04:04:07.335852 env[1562]: time="2024-12-13T04:04:07.335732376Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:07.339556 env[1562]: time="2024-12-13T04:04:07.339508465Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5160 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:07.912982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16-rootfs.mount: Deactivated successfully.
Dec 13 04:04:08.275065 env[1562]: time="2024-12-13T04:04:08.274847960Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 04:04:08.294493 env[1562]: time="2024-12-13T04:04:08.294356992Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d\""
Dec 13 04:04:08.295589 env[1562]: time="2024-12-13T04:04:08.295472978Z" level=info msg="StartContainer for \"12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d\""
Dec 13 04:04:08.331644 systemd[1]: Started cri-containerd-12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d.scope.
Dec 13 04:04:08.367297 env[1562]: time="2024-12-13T04:04:08.367196063Z" level=info msg="StartContainer for \"12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d\" returns successfully"
Dec 13 04:04:08.369320 systemd[1]: cri-containerd-12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d.scope: Deactivated successfully.
Dec 13 04:04:08.398321 env[1562]: time="2024-12-13T04:04:08.398223407Z" level=info msg="shim disconnected" id=12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d
Dec 13 04:04:08.398704 env[1562]: time="2024-12-13T04:04:08.398325611Z" level=warning msg="cleaning up after shim disconnected" id=12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d namespace=k8s.io
Dec 13 04:04:08.398704 env[1562]: time="2024-12-13T04:04:08.398361518Z" level=info msg="cleaning up dead shim"
Dec 13 04:04:08.410456 env[1562]: time="2024-12-13T04:04:08.410355690Z" level=warning msg="cleanup warnings time=\"2024-12-13T04:04:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5216 runtime=io.containerd.runc.v2\n"
Dec 13 04:04:08.913338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d-rootfs.mount: Deactivated successfully.
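[Editor's note] The four containers that have now run and exited inside this sandbox (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) are Cilium's init containers executing in sequence: each runs to completion, its cri-containerd-*.scope deactivates, and containerd reaps the shim, which is why every one produces the same "shim disconnected" / "cleaning up dead shim" pattern. A hedged way to see the whole chain from the node, again assuming crictl is configured for containerd's CRI socket:

    # List all containers, including exited init containers, that belong
    # to the cilium-f6sdl sandbox created earlier.
    crictl ps -a --pod de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d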
Dec 13 04:04:09.284370 env[1562]: time="2024-12-13T04:04:09.284119913Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 04:04:09.320710 env[1562]: time="2024-12-13T04:04:09.320657058Z" level=info msg="CreateContainer within sandbox \"de216d7aa8260341cf33d9103461df04549e2bc6c9a5f918bae902c0493b067d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"549f5259afce22e7207b699f0bbd1f0137c64c1d71517ce539c7b1e547f80eaf\""
Dec 13 04:04:09.321121 env[1562]: time="2024-12-13T04:04:09.321060449Z" level=info msg="StartContainer for \"549f5259afce22e7207b699f0bbd1f0137c64c1d71517ce539c7b1e547f80eaf\""
Dec 13 04:04:09.330332 systemd[1]: Started cri-containerd-549f5259afce22e7207b699f0bbd1f0137c64c1d71517ce539c7b1e547f80eaf.scope.
Dec 13 04:04:09.343776 env[1562]: time="2024-12-13T04:04:09.343748649Z" level=info msg="StartContainer for \"549f5259afce22e7207b699f0bbd1f0137c64c1d71517ce539c7b1e547f80eaf\" returns successfully"
Dec 13 04:04:09.511434 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 04:04:10.236618 kubelet[2454]: W1213 04:04:10.236507 2454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5ff0195_9188_4bad_9142_79a544d91252.slice/cri-containerd-d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7.scope WatchSource:0}: task d85d04f1e138e388419c149c54197720574584feeb3a344a59018b6acfca04c7 not found: not found
Dec 13 04:04:12.705846 systemd-networkd[1318]: lxc_health: Link UP
Dec 13 04:04:12.730444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 04:04:12.731011 systemd-networkd[1318]: lxc_health: Gained carrier
Dec 13 04:04:13.347266 kubelet[2454]: W1213 04:04:13.347224 2454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5ff0195_9188_4bad_9142_79a544d91252.slice/cri-containerd-f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97.scope WatchSource:0}: task f63906de0c41e311e2a01f97cf35b5aaca9d9f91563696b05a447034caa07a97 not found: not found
Dec 13 04:04:13.590360 kubelet[2454]: I1213 04:04:13.590328 2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f6sdl" podStartSLOduration=8.590315802 podStartE2EDuration="8.590315802s" podCreationTimestamp="2024-12-13 04:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 04:04:10.32314744 +0000 UTC m=+424.325519870" watchObservedRunningTime="2024-12-13 04:04:13.590315802 +0000 UTC m=+427.592688165"
Dec 13 04:04:13.926549 systemd-networkd[1318]: lxc_health: Gained IPv6LL
Dec 13 04:04:16.454468 kubelet[2454]: W1213 04:04:16.454351 2454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5ff0195_9188_4bad_9142_79a544d91252.slice/cri-containerd-d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16.scope WatchSource:0}: task d11f1fdacb4ce27577bfeaf878b369b2cf29c4b6d8250e2fdf5ccde290ab5b16 not found: not found
Dec 13 04:04:18.572483 sshd[4813]: pam_unix(sshd:session): session closed for user core
Dec 13 04:04:18.573870 systemd[1]: sshd@48-145.40.90.151:22-139.178.68.195:41502.service: Deactivated successfully.
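[Editor's note] With the cilium-agent container running, the agent brings up its lxc_health health-check interface (Link UP, carrier, then IPv6LL above), and the kubelet records a pod startup SLO duration of about 8.59s for cilium-f6sdl. A hedged readiness check, assuming cluster credentials and the standard cilium CLI shipped in the agent image:

    # One-line health summary from inside the freshly started agent pod.
    kubectl -n kube-system exec cilium-f6sdl -- cilium status --brief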
Dec 13 04:04:18.574349 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 04:04:18.574724 systemd-logind[1614]: Session 29 logged out. Waiting for processes to exit.
Dec 13 04:04:18.575149 systemd-logind[1614]: Removed session 29.
Dec 13 04:04:19.563070 kubelet[2454]: W1213 04:04:19.562931 2454 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5ff0195_9188_4bad_9142_79a544d91252.slice/cri-containerd-12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d.scope WatchSource:0}: task 12527c81f4208f0281ea02c26a2eaad85039f1ccef177448d4614d640c2f745d not found: not found
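[Editor's note] The manager.go:1169 warnings in the tail of the log appear to come from the kubelet's embedded cAdvisor: it tries to set up cgroup watches for the short-lived init container scopes after those tasks have already exited, so the "not found" results are expected and benign here. A hedged journal query to collect them, assuming the kubelet logs under the syslog identifier shown in these entries:

    # Pull only the cAdvisor watch warnings from today's journal.
    journalctl -t kubelet --since "04:04" | grep 'Failed to process watch event'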