Jul 2 11:44:37.560366 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Jul 2 11:44:37.560379 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 11:44:37.560387 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 11:44:37.560391 kernel: BIOS-provided physical RAM map:
Jul 2 11:44:37.560395 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000008f7ff] usable
Jul 2 11:44:37.560398 kernel: BIOS-e820: [mem 0x000000000008f800-0x000000000009ffff] reserved
Jul 2 11:44:37.560403 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Jul 2 11:44:37.560407 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Jul 2 11:44:37.560411 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Jul 2 11:44:37.560416 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000005ff2efff] usable
Jul 2 11:44:37.560420 kernel: BIOS-e820: [mem 0x000000005ff2f000-0x000000005ff2ffff] ACPI NVS
Jul 2 11:44:37.560423 kernel: BIOS-e820: [mem 0x000000005ff30000-0x000000005ff30fff] reserved
Jul 2 11:44:37.560427 kernel: BIOS-e820: [mem 0x000000005ff31000-0x000000005fffffff] usable
Jul 2 11:44:37.560431 kernel: BIOS-e820: [mem 0x0000000060000000-0x0000000067ffffff] reserved
Jul 2 11:44:37.560436 kernel: BIOS-e820: [mem 0x0000000068000000-0x0000000077fc4fff] usable
Jul 2 11:44:37.560441 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Jul 2 11:44:37.560446 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Jul 2 11:44:37.560452 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Jul 2 11:44:37.560456 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Jul 2 11:44:37.560461 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Jul 2 11:44:37.560483 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Jul 2 11:44:37.560488 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 2 11:44:37.560492 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Jul 2 11:44:37.560496 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Jul 2 11:44:37.560515 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 2 11:44:37.560519 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Jul 2 11:44:37.560524 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Jul 2 11:44:37.560528 kernel: NX (Execute Disable) protection: active
Jul 2 11:44:37.560533 kernel: SMBIOS 3.2.1 present.
Jul 2 11:44:37.560537 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5.V1 04/14/2021
Jul 2 11:44:37.560541 kernel: tsc: Detected 3400.000 MHz processor
Jul 2 11:44:37.560545 kernel: tsc: Detected 3399.906 MHz TSC
Jul 2 11:44:37.560550 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 11:44:37.560554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 11:44:37.560559 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Jul 2 11:44:37.560563 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 11:44:37.560568 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Jul 2 11:44:37.560573 kernel: Using GB pages for direct mapping
Jul 2 11:44:37.560577 kernel: ACPI: Early table checksum verification disabled
Jul 2 11:44:37.560582 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Jul 2 11:44:37.560588 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Jul 2 11:44:37.560593 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Jul 2 11:44:37.560597 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Jul 2 11:44:37.560603 kernel: ACPI: FACS 0x0000000079662F80 000040
Jul 2 11:44:37.560608 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Jul 2 11:44:37.560613 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Jul 2 11:44:37.560617 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Jul 2 11:44:37.560622 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Jul 2 11:44:37.560627 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Jul 2 11:44:37.560631 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Jul 2 11:44:37.560637 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Jul 2 11:44:37.560642 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Jul 2 11:44:37.560647 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:44:37.560651 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Jul 2 11:44:37.560656 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Jul 2 11:44:37.560661 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:44:37.560665 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:44:37.560670 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Jul 2 11:44:37.560675 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Jul 2 11:44:37.560680 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:44:37.560685 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Jul 2 11:44:37.560690 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Jul 2 11:44:37.560695 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Jul 2 11:44:37.560699 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Jul 2 11:44:37.560704 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Jul 2 11:44:37.560709 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Jul 2 11:44:37.560714 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xec_ 01072009 AMI 00010013)
Jul 2 11:44:37.560719 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Jul 2 11:44:37.560724 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Jul 2 11:44:37.560729 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Jul 2 11:44:37.560733 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Jul 2 11:44:37.560738 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Jul 2 11:44:37.560743 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Jul 2 11:44:37.560747 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Jul 2 11:44:37.560752 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Jul 2 11:44:37.560757 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Jul 2 11:44:37.560762 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Jul 2 11:44:37.560767 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Jul 2 11:44:37.560772 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Jul 2 11:44:37.560776 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Jul 2 11:44:37.560781 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Jul 2 11:44:37.560786 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Jul 2 11:44:37.560790 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Jul 2 11:44:37.560795 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Jul 2 11:44:37.560800 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Jul 2 11:44:37.560805 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Jul 2 11:44:37.560810 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Jul 2 11:44:37.560815 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Jul 2 11:44:37.560819 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Jul 2 11:44:37.560824 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Jul 2 11:44:37.560829 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Jul 2 11:44:37.560833 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Jul 2 11:44:37.560838 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Jul 2 11:44:37.560843 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Jul 2 11:44:37.560848 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Jul 2 11:44:37.560853 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Jul 2 11:44:37.560857 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Jul 2 11:44:37.560862 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Jul 2 11:44:37.560867 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Jul 2 11:44:37.560871 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Jul 2 11:44:37.560876 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Jul 2 11:44:37.560881 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Jul 2 11:44:37.560885 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Jul 2 11:44:37.560891 kernel: No NUMA configuration found
Jul 2 11:44:37.560896 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Jul 2 11:44:37.560900 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Jul 2 11:44:37.560905 kernel: Zone ranges:
Jul 2 11:44:37.560910 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 11:44:37.560915 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 11:44:37.560919 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Jul 2 11:44:37.560924 kernel: Movable zone start for each node
Jul 2 11:44:37.560929 kernel: Early memory node ranges
Jul 2 11:44:37.560934 kernel: node 0: [mem 0x0000000000001000-0x000000000008efff]
Jul 2 11:44:37.560939 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Jul 2 11:44:37.560943 kernel: node 0: [mem 0x0000000040400000-0x000000005ff2efff]
Jul 2 11:44:37.560948 kernel: node 0: [mem 0x000000005ff31000-0x000000005fffffff]
Jul 2 11:44:37.560953 kernel: node 0: [mem 0x0000000068000000-0x0000000077fc4fff]
Jul 2 11:44:37.560958 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff]
Jul 2 11:44:37.560965 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Jul 2 11:44:37.560971 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Jul 2 11:44:37.560976 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Jul 2 11:44:37.560982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 11:44:37.560988 kernel: On node 0, zone DMA: 113 pages in unavailable ranges
Jul 2 11:44:37.560993 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 2 11:44:37.560998 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Jul 2 11:44:37.561003 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Jul 2 11:44:37.561008 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Jul 2 11:44:37.561013 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Jul 2 11:44:37.561018 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Jul 2 11:44:37.561024 kernel: ACPI: PM-Timer IO Port: 0x1808
Jul 2 11:44:37.561029 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 2 11:44:37.561034 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 2 11:44:37.561039 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 2 11:44:37.561044 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 2 11:44:37.561050 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 2 11:44:37.561055 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 2 11:44:37.561060 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 2 11:44:37.561065 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 2 11:44:37.561070 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 2 11:44:37.561076 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 2 11:44:37.561081 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 2 11:44:37.561086 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 2 11:44:37.561091 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 2 11:44:37.561096 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 2 11:44:37.561101 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 2 11:44:37.561106 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 2 11:44:37.561111 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Jul 2 11:44:37.561117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 11:44:37.561122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 11:44:37.561127 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 11:44:37.561132 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 11:44:37.561137 kernel: TSC deadline timer available
Jul 2 11:44:37.561142 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Jul 2 11:44:37.561147 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Jul 2 11:44:37.561152 kernel: Booting paravirtualized kernel on bare hardware
Jul 2 11:44:37.561157 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 11:44:37.561163 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Jul 2 11:44:37.561169 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 2 11:44:37.561174 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 2 11:44:37.561179 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 2 11:44:37.561184 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190071
Jul 2 11:44:37.561189 kernel: Policy zone: Normal
Jul 2 11:44:37.561194 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 11:44:37.561200 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 11:44:37.561205 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Jul 2 11:44:37.561211 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jul 2 11:44:37.561216 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 11:44:37.561221 kernel: Memory: 32552588K/33280876K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 728028K reserved, 0K cma-reserved)
Jul 2 11:44:37.561226 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 2 11:44:37.561231 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 11:44:37.561236 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 11:44:37.561241 kernel: rcu: Hierarchical RCU implementation.
Jul 2 11:44:37.561247 kernel: rcu: RCU event tracing is enabled.
Jul 2 11:44:37.561253 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 2 11:44:37.561258 kernel: Rude variant of Tasks RCU enabled.
Jul 2 11:44:37.561263 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 11:44:37.561268 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 11:44:37.561273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 2 11:44:37.561278 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Jul 2 11:44:37.561284 kernel: random: crng init done
Jul 2 11:44:37.561288 kernel: Console: colour dummy device 80x25
Jul 2 11:44:37.561293 kernel: printk: console [tty0] enabled
Jul 2 11:44:37.561299 kernel: printk: console [ttyS1] enabled
Jul 2 11:44:37.561304 kernel: ACPI: Core revision 20210730
Jul 2 11:44:37.561310 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Jul 2 11:44:37.561315 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 11:44:37.561320 kernel: DMAR: Host address width 39
Jul 2 11:44:37.561325 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Jul 2 11:44:37.561330 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Jul 2 11:44:37.561335 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Jul 2 11:44:37.561340 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Jul 2 11:44:37.561346 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Jul 2 11:44:37.561351 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Jul 2 11:44:37.561356 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Jul 2 11:44:37.561361 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Jul 2 11:44:37.561367 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Jul 2 11:44:37.561372 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Jul 2 11:44:37.561377 kernel: x2apic enabled
Jul 2 11:44:37.561382 kernel: Switched APIC routing to cluster x2apic.
Jul 2 11:44:37.561387 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 11:44:37.561393 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Jul 2 11:44:37.561398 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Jul 2 11:44:37.561403 kernel: CPU0: Thermal monitoring enabled (TM1)
Jul 2 11:44:37.561408 kernel: process: using mwait in idle threads
Jul 2 11:44:37.561413 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 11:44:37.561418 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 11:44:37.561423 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 11:44:37.561429 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 11:44:37.561435 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 11:44:37.561440 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 11:44:37.561445 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 2 11:44:37.561452 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 11:44:37.561473 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 2 11:44:37.561478 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 2 11:44:37.561483 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 11:44:37.561489 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 11:44:37.561494 kernel: TAA: Mitigation: TSX disabled
Jul 2 11:44:37.561500 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Jul 2 11:44:37.561520 kernel: SRBDS: Mitigation: Microcode
Jul 2 11:44:37.561525 kernel: GDS: Vulnerable: No microcode
Jul 2 11:44:37.561530 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 11:44:37.561535 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 11:44:37.561540 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 11:44:37.561545 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 11:44:37.561551 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 11:44:37.561556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 11:44:37.561561 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 11:44:37.561567 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 11:44:37.561572 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Jul 2 11:44:37.561577 kernel: Freeing SMP alternatives memory: 32K
Jul 2 11:44:37.561582 kernel: pid_max: default: 32768 minimum: 301
Jul 2 11:44:37.561587 kernel: LSM: Security Framework initializing
Jul 2 11:44:37.561592 kernel: SELinux: Initializing.
Jul 2 11:44:37.561597 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 11:44:37.561602 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 11:44:37.561608 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Jul 2 11:44:37.561613 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 2 11:44:37.561618 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Jul 2 11:44:37.561623 kernel: ... version:                4
Jul 2 11:44:37.561628 kernel: ... bit width:              48
Jul 2 11:44:37.561633 kernel: ... generic registers:      4
Jul 2 11:44:37.561638 kernel: ... value mask:             0000ffffffffffff
Jul 2 11:44:37.561643 kernel: ... max period:             00007fffffffffff
Jul 2 11:44:37.561648 kernel: ... fixed-purpose events:   3
Jul 2 11:44:37.561654 kernel: ... event mask:             000000070000000f
Jul 2 11:44:37.561659 kernel: signal: max sigframe size: 2032
Jul 2 11:44:37.561664 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 11:44:37.561670 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Jul 2 11:44:37.561675 kernel: smp: Bringing up secondary CPUs ...
Jul 2 11:44:37.561680 kernel: x86: Booting SMP configuration:
Jul 2 11:44:37.561685 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Jul 2 11:44:37.561690 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 11:44:37.561695 kernel: #9 #10 #11 #12 #13 #14 #15
Jul 2 11:44:37.561701 kernel: smp: Brought up 1 node, 16 CPUs
Jul 2 11:44:37.561706 kernel: smpboot: Max logical packages: 1
Jul 2 11:44:37.561711 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Jul 2 11:44:37.561716 kernel: devtmpfs: initialized
Jul 2 11:44:37.561721 kernel: x86/mm: Memory block size: 128MB
Jul 2 11:44:37.561726 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x5ff2f000-0x5ff2ffff] (4096 bytes)
Jul 2 11:44:37.561731 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Jul 2 11:44:37.561736 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 11:44:37.561741 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 2 11:44:37.561747 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 11:44:37.561752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 11:44:37.561757 kernel: audit: initializing netlink subsys (disabled)
Jul 2 11:44:37.561762 kernel: audit: type=2000 audit(1719920672.123:1): state=initialized audit_enabled=0 res=1
Jul 2 11:44:37.561767 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 11:44:37.561772 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 11:44:37.561778 kernel: cpuidle: using governor menu
Jul 2 11:44:37.561783 kernel: ACPI: bus type PCI registered
Jul 2 11:44:37.561788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 11:44:37.561793 kernel: dca service started, version 1.12.1
Jul 2 11:44:37.561798 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jul 2 11:44:37.561804 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Jul 2 11:44:37.561809 kernel: PCI: Using configuration type 1 for base access
Jul 2 11:44:37.561814 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Jul 2 11:44:37.561819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 11:44:37.561824 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 11:44:37.561829 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 11:44:37.561834 kernel: ACPI: Added _OSI(Module Device)
Jul 2 11:44:37.561840 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 11:44:37.561845 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 11:44:37.561850 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 11:44:37.561855 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 11:44:37.561860 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 11:44:37.561865 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 11:44:37.561870 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Jul 2 11:44:37.561875 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:44:37.561880 kernel: ACPI: SSDT 0xFFFF88B580223100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Jul 2 11:44:37.561886 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Jul 2 11:44:37.561891 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:44:37.561896 kernel: ACPI: SSDT 0xFFFF88B581C67800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Jul 2 11:44:37.561901 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:44:37.561906 kernel: ACPI: SSDT 0xFFFF88B581D54000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Jul 2 11:44:37.561911 kernel: ACPI: Dynamic OEM Table Load:
Jul 2 11:44:37.561916 kernel: ACPI: SSDT 0xFFFF88B580156000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Jul 2 11:44:37.561921 kernel: ACPI: Interpreter enabled
Jul 2 11:44:37.561926 kernel: ACPI: PM: (supports S0 S5)
Jul 2 11:44:37.561932 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 11:44:37.561937 kernel: HEST: Enabling Firmware First mode for corrected errors.
Jul 2 11:44:37.561942 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Jul 2 11:44:37.561947 kernel: HEST: Table parsing has been initialized.
Jul 2 11:44:37.561952 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Jul 2 11:44:37.561958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 11:44:37.561963 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Jul 2 11:44:37.561968 kernel: ACPI: PM: Power Resource [USBC]
Jul 2 11:44:37.561973 kernel: ACPI: PM: Power Resource [V0PR]
Jul 2 11:44:37.561978 kernel: ACPI: PM: Power Resource [V1PR]
Jul 2 11:44:37.561983 kernel: ACPI: PM: Power Resource [V2PR]
Jul 2 11:44:37.561988 kernel: ACPI: PM: Power Resource [WRST]
Jul 2 11:44:37.561993 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 2 11:44:37.561998 kernel: ACPI: PM: Power Resource [FN00]
Jul 2 11:44:37.562003 kernel: ACPI: PM: Power Resource [FN01]
Jul 2 11:44:37.562008 kernel: ACPI: PM: Power Resource [FN02]
Jul 2 11:44:37.562013 kernel: ACPI: PM: Power Resource [FN03]
Jul 2 11:44:37.562018 kernel: ACPI: PM: Power Resource [FN04]
Jul 2 11:44:37.562023 kernel: ACPI: PM: Power Resource [PIN]
Jul 2 11:44:37.562029 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Jul 2 11:44:37.562092 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 11:44:37.562138 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Jul 2 11:44:37.562179 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Jul 2 11:44:37.562186 kernel: PCI host bridge to bus 0000:00
Jul 2 11:44:37.562228 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 11:44:37.562269 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 11:44:37.562305 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 11:44:37.562342 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Jul 2 11:44:37.562378 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Jul 2 11:44:37.562415 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Jul 2 11:44:37.562483 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Jul 2 11:44:37.562535 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Jul 2 11:44:37.562580 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.562627 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Jul 2 11:44:37.562671 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.562718 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Jul 2 11:44:37.562762 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Jul 2 11:44:37.562804 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Jul 2 11:44:37.562848 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Jul 2 11:44:37.562895 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Jul 2 11:44:37.562938 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Jul 2 11:44:37.562985 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Jul 2 11:44:37.563027 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Jul 2 11:44:37.563069 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Jul 2 11:44:37.563115 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Jul 2 11:44:37.563160 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Jul 2 11:44:37.563202 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Jul 2 11:44:37.563250 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Jul 2 11:44:37.563292 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:44:37.563337 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Jul 2 11:44:37.563380 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:44:37.563427 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Jul 2 11:44:37.563473 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Jul 2 11:44:37.563516 kernel: pci 0000:00:16.0: PME# supported from D3hot
Jul 2 11:44:37.563570 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Jul 2 11:44:37.563613 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Jul 2 11:44:37.563655 kernel: pci 0000:00:16.1: PME# supported from D3hot
Jul 2 11:44:37.563703 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Jul 2 11:44:37.563745 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Jul 2 11:44:37.563787 kernel: pci 0000:00:16.4: PME# supported from D3hot
Jul 2 11:44:37.563832 kernel: pci 0000:00:17.0: [8086:2826] type 00 class 0x010400
Jul 2 11:44:37.563875 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Jul 2 11:44:37.563916 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Jul 2 11:44:37.563959 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Jul 2 11:44:37.564002 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Jul 2 11:44:37.564044 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Jul 2 11:44:37.564086 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Jul 2 11:44:37.564127 kernel: pci 0000:00:17.0: PME# supported from D3hot
Jul 2 11:44:37.564173 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Jul 2 11:44:37.564217 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.564267 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Jul 2 11:44:37.564310 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.564356 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Jul 2 11:44:37.564400 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.564447 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Jul 2 11:44:37.564493 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.564540 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Jul 2 11:44:37.564583 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Jul 2 11:44:37.564629 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Jul 2 11:44:37.564672 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Jul 2 11:44:37.564719 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Jul 2 11:44:37.564767 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Jul 2 11:44:37.564810 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Jul 2 11:44:37.564852 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Jul 2 11:44:37.564898 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Jul 2 11:44:37.564941 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Jul 2 11:44:37.564985 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 2 11:44:37.565035 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Jul 2 11:44:37.565080 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Jul 2 11:44:37.565124 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Jul 2 11:44:37.565167 kernel: pci 0000:02:00.0: PME# supported from D3cold
Jul 2 11:44:37.565211 kernel: pci 0000:02:00.0:
reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 2 11:44:37.565255 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 2 11:44:37.565303 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Jul 2 11:44:37.565349 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Jul 2 11:44:37.565394 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref] Jul 2 11:44:37.565437 kernel: pci 0000:02:00.1: PME# supported from D3cold Jul 2 11:44:37.565484 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Jul 2 11:44:37.565528 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Jul 2 11:44:37.565572 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 2 11:44:37.565615 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jul 2 11:44:37.565659 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:44:37.565722 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 2 11:44:37.565768 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 2 11:44:37.565813 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Jul 2 11:44:37.565856 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff] Jul 2 11:44:37.565898 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Jul 2 11:44:37.565941 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff] Jul 2 11:44:37.565984 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:44:37.566028 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 2 11:44:37.566071 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:44:37.566112 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jul 2 11:44:37.566160 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Jul 2 11:44:37.566203 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 
0x020000 Jul 2 11:44:37.566246 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff] Jul 2 11:44:37.566343 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Jul 2 11:44:37.566388 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff] Jul 2 11:44:37.566434 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Jul 2 11:44:37.566499 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 2 11:44:37.566542 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:44:37.566584 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jul 2 11:44:37.566626 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 2 11:44:37.566676 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Jul 2 11:44:37.566722 kernel: pci 0000:07:00.0: enabling Extended Tags Jul 2 11:44:37.566765 kernel: pci 0000:07:00.0: supports D1 D2 Jul 2 11:44:37.566809 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:44:37.566851 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 2 11:44:37.566893 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 2 11:44:37.566935 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jul 2 11:44:37.566982 kernel: pci_bus 0000:08: extended config space not accessible Jul 2 11:44:37.567033 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Jul 2 11:44:37.567080 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff] Jul 2 11:44:37.567127 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff] Jul 2 11:44:37.567172 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Jul 2 11:44:37.567218 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 11:44:37.567262 kernel: pci 0000:08:00.0: supports D1 D2 Jul 2 11:44:37.567307 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 11:44:37.567350 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Jul 2 11:44:37.567395 kernel: pci 
0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 2 11:44:37.567438 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jul 2 11:44:37.567445 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 2 11:44:37.567475 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 2 11:44:37.567480 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 2 11:44:37.567486 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 2 11:44:37.567492 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 2 11:44:37.567518 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 2 11:44:37.567524 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 2 11:44:37.567530 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 2 11:44:37.567535 kernel: iommu: Default domain type: Translated Jul 2 11:44:37.567541 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 11:44:37.567586 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Jul 2 11:44:37.567630 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 11:44:37.567676 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Jul 2 11:44:37.567684 kernel: vgaarb: loaded Jul 2 11:44:37.567689 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 11:44:37.567696 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 11:44:37.567701 kernel: PTP clock support registered Jul 2 11:44:37.567707 kernel: PCI: Using ACPI for IRQ routing Jul 2 11:44:37.567712 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 11:44:37.567718 kernel: e820: reserve RAM buffer [mem 0x0008f800-0x0008ffff] Jul 2 11:44:37.567723 kernel: e820: reserve RAM buffer [mem 0x5ff2f000-0x5fffffff] Jul 2 11:44:37.567728 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff] Jul 2 11:44:37.567734 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff] Jul 2 11:44:37.567739 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff] Jul 2 11:44:37.567745 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff] Jul 2 11:44:37.567750 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 2 11:44:37.567755 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Jul 2 11:44:37.567761 kernel: clocksource: Switched to clocksource tsc-early Jul 2 11:44:37.567766 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 11:44:37.567772 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 11:44:37.567777 kernel: pnp: PnP ACPI init Jul 2 11:44:37.567820 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 2 11:44:37.567863 kernel: pnp 00:02: [dma 0 disabled] Jul 2 11:44:37.567906 kernel: pnp 00:03: [dma 0 disabled] Jul 2 11:44:37.567947 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 2 11:44:37.567986 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 2 11:44:37.568026 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Jul 2 11:44:37.568068 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Jul 2 11:44:37.568107 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Jul 2 11:44:37.568145 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Jul 2 11:44:37.568183 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Jul 2 
11:44:37.568221 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 2 11:44:37.568258 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 2 11:44:37.568295 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 2 11:44:37.568333 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 2 11:44:37.568374 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Jul 2 11:44:37.568414 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 2 11:44:37.568475 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 2 11:44:37.568532 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 2 11:44:37.568569 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 2 11:44:37.568606 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 2 11:44:37.568645 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Jul 2 11:44:37.568688 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Jul 2 11:44:37.568696 kernel: pnp: PnP ACPI: found 10 devices Jul 2 11:44:37.568701 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 11:44:37.568707 kernel: NET: Registered PF_INET protocol family Jul 2 11:44:37.568712 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:44:37.568718 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 11:44:37.568723 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 11:44:37.568729 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 11:44:37.568734 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 2 11:44:37.568741 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 2 11:44:37.568746 kernel: UDP hash table entries: 16384 
(order: 7, 524288 bytes, linear) Jul 2 11:44:37.568752 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 11:44:37.568757 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 11:44:37.568763 kernel: NET: Registered PF_XDP protocol family Jul 2 11:44:37.568805 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit] Jul 2 11:44:37.568848 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit] Jul 2 11:44:37.568891 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit] Jul 2 11:44:37.568935 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 11:44:37.568980 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:44:37.569024 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:44:37.569070 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Jul 2 11:44:37.569113 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Jul 2 11:44:37.569158 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jul 2 11:44:37.569202 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff] Jul 2 11:44:37.569244 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:44:37.569286 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Jul 2 11:44:37.569328 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Jul 2 11:44:37.569372 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 2 11:44:37.569414 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff] Jul 2 11:44:37.569480 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Jul 2 11:44:37.569524 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 2 11:44:37.569567 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff] Jul 2 11:44:37.569610 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Jul 2 11:44:37.569654 kernel: pci 0000:07:00.0: PCI 
bridge to [bus 08] Jul 2 11:44:37.569698 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Jul 2 11:44:37.569742 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff] Jul 2 11:44:37.569784 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Jul 2 11:44:37.569828 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Jul 2 11:44:37.569870 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff] Jul 2 11:44:37.569911 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 2 11:44:37.569949 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 11:44:37.569986 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 11:44:37.570024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 11:44:37.570062 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window] Jul 2 11:44:37.570099 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 2 11:44:37.570145 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff] Jul 2 11:44:37.570187 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 2 11:44:37.570232 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Jul 2 11:44:37.570273 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff] Jul 2 11:44:37.570316 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 2 11:44:37.570356 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff] Jul 2 11:44:37.570399 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 2 11:44:37.570439 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff] Jul 2 11:44:37.570485 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Jul 2 11:44:37.570527 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff] Jul 2 11:44:37.570534 kernel: PCI: CLS 64 bytes, default 64 Jul 2 11:44:37.570540 kernel: DMAR: No ATSR found Jul 2 11:44:37.570546 kernel: DMAR: No SATC found Jul 
2 11:44:37.570552 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Jul 2 11:44:37.570557 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Jul 2 11:44:37.570563 kernel: DMAR: IOMMU feature nwfs inconsistent Jul 2 11:44:37.570570 kernel: DMAR: IOMMU feature pasid inconsistent Jul 2 11:44:37.570575 kernel: DMAR: IOMMU feature eafs inconsistent Jul 2 11:44:37.570581 kernel: DMAR: IOMMU feature prs inconsistent Jul 2 11:44:37.570586 kernel: DMAR: IOMMU feature nest inconsistent Jul 2 11:44:37.570592 kernel: DMAR: IOMMU feature mts inconsistent Jul 2 11:44:37.570597 kernel: DMAR: IOMMU feature sc_support inconsistent Jul 2 11:44:37.570603 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Jul 2 11:44:37.570609 kernel: DMAR: dmar0: Using Queued invalidation Jul 2 11:44:37.570614 kernel: DMAR: dmar1: Using Queued invalidation Jul 2 11:44:37.570658 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 2 11:44:37.570721 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 2 11:44:37.570764 kernel: pci 0000:00:01.1: Adding to iommu group 1 Jul 2 11:44:37.570805 kernel: pci 0000:00:02.0: Adding to iommu group 2 Jul 2 11:44:37.570847 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 2 11:44:37.570888 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 2 11:44:37.570929 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 2 11:44:37.570971 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 2 11:44:37.571013 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 2 11:44:37.571055 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 2 11:44:37.571095 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 2 11:44:37.571136 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 2 11:44:37.571177 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 2 11:44:37.571219 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 2 11:44:37.571260 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 2 11:44:37.571302 kernel: pci 0000:00:1b.5: Adding to iommu group 10 
Jul 2 11:44:37.571346 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 2 11:44:37.571388 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Jul 2 11:44:37.571429 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 2 11:44:37.571515 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 2 11:44:37.571557 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 2 11:44:37.571599 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 2 11:44:37.571641 kernel: pci 0000:02:00.0: Adding to iommu group 1 Jul 2 11:44:37.571685 kernel: pci 0000:02:00.1: Adding to iommu group 1 Jul 2 11:44:37.571728 kernel: pci 0000:04:00.0: Adding to iommu group 15 Jul 2 11:44:37.571774 kernel: pci 0000:05:00.0: Adding to iommu group 16 Jul 2 11:44:37.571816 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 2 11:44:37.571864 kernel: pci 0000:08:00.0: Adding to iommu group 17 Jul 2 11:44:37.571872 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 2 11:44:37.571877 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 11:44:37.571883 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB) Jul 2 11:44:37.571888 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Jul 2 11:44:37.571894 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 2 11:44:37.571900 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 2 11:44:37.571906 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 2 11:44:37.571911 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Jul 2 11:44:37.571955 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 2 11:44:37.571964 kernel: Initialise system trusted keyrings Jul 2 11:44:37.571969 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 2 11:44:37.571975 kernel: Key type asymmetric registered Jul 2 11:44:37.571980 kernel: Asymmetric key parser 'x509' registered Jul 2 11:44:37.571986 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 11:44:37.571992 kernel: io scheduler mq-deadline registered Jul 2 11:44:37.571997 kernel: io scheduler kyber registered Jul 2 11:44:37.572003 kernel: io scheduler bfq registered Jul 2 11:44:37.572046 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Jul 2 11:44:37.572087 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Jul 2 11:44:37.572129 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Jul 2 11:44:37.572170 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Jul 2 11:44:37.572214 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Jul 2 11:44:37.572256 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Jul 2 11:44:37.572297 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Jul 2 11:44:37.572346 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 2 11:44:37.572354 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 2 11:44:37.572359 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 2 11:44:37.572365 kernel: pstore: Registered erst as persistent store backend Jul 2 11:44:37.572370 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 11:44:37.572377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 11:44:37.572382 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 11:44:37.572388 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 11:44:37.572430 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 2 11:44:37.572437 kernel: i8042: PNP: No PS/2 controller found. 
Jul 2 11:44:37.572520 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 2 11:44:37.572559 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 2 11:44:37.572597 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-07-02T11:44:36 UTC (1719920676) Jul 2 11:44:37.572636 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 2 11:44:37.572644 kernel: fail to initialize ptp_kvm Jul 2 11:44:37.572649 kernel: intel_pstate: Intel P-state driver initializing Jul 2 11:44:37.572655 kernel: intel_pstate: Disabling energy efficiency optimization Jul 2 11:44:37.572660 kernel: intel_pstate: HWP enabled Jul 2 11:44:37.572666 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Jul 2 11:44:37.572671 kernel: vesafb: scrolling: redraw Jul 2 11:44:37.572677 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Jul 2 11:44:37.572683 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x000000004bb7bfc6, using 768k, total 768k Jul 2 11:44:37.572689 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 11:44:37.572694 kernel: fb0: VESA VGA frame buffer device Jul 2 11:44:37.572700 kernel: NET: Registered PF_INET6 protocol family Jul 2 11:44:37.572705 kernel: Segment Routing with IPv6 Jul 2 11:44:37.572710 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 11:44:37.572716 kernel: NET: Registered PF_PACKET protocol family Jul 2 11:44:37.572721 kernel: Key type dns_resolver registered Jul 2 11:44:37.572726 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Jul 2 11:44:37.572732 kernel: microcode: Microcode Update Driver: v2.2. 
Jul 2 11:44:37.572738 kernel: IPI shorthand broadcast: enabled Jul 2 11:44:37.572743 kernel: sched_clock: Marking stable (1841688128, 1380415559)->(4636277996, -1414174309) Jul 2 11:44:37.572749 kernel: registered taskstats version 1 Jul 2 11:44:37.572754 kernel: Loading compiled-in X.509 certificates Jul 2 11:44:37.572760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 11:44:37.572765 kernel: Key type .fscrypt registered Jul 2 11:44:37.572771 kernel: Key type fscrypt-provisioning registered Jul 2 11:44:37.572776 kernel: pstore: Using crash dump compression: deflate Jul 2 11:44:37.572782 kernel: ima: Allocated hash algorithm: sha1 Jul 2 11:44:37.572787 kernel: ima: No architecture policies found Jul 2 11:44:37.572793 kernel: clk: Disabling unused clocks Jul 2 11:44:37.572798 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 11:44:37.572804 kernel: Write protecting the kernel read-only data: 28672k Jul 2 11:44:37.572809 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 11:44:37.572815 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 11:44:37.572820 kernel: Run /init as init process Jul 2 11:44:37.572825 kernel: with arguments: Jul 2 11:44:37.572832 kernel: /init Jul 2 11:44:37.572837 kernel: with environment: Jul 2 11:44:37.572842 kernel: HOME=/ Jul 2 11:44:37.572847 kernel: TERM=linux Jul 2 11:44:37.572853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 11:44:37.572859 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:44:37.572866 systemd[1]: Detected architecture x86-64. Jul 2 11:44:37.572872 systemd[1]: Running in initrd. 
Jul 2 11:44:37.572878 systemd[1]: No hostname configured, using default hostname. Jul 2 11:44:37.572884 systemd[1]: Hostname set to . Jul 2 11:44:37.572889 systemd[1]: Initializing machine ID from random generator. Jul 2 11:44:37.572895 systemd[1]: Queued start job for default target initrd.target. Jul 2 11:44:37.572901 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:44:37.572906 systemd[1]: Reached target cryptsetup.target. Jul 2 11:44:37.572912 systemd[1]: Reached target paths.target. Jul 2 11:44:37.572917 systemd[1]: Reached target slices.target. Jul 2 11:44:37.572923 systemd[1]: Reached target swap.target. Jul 2 11:44:37.572929 systemd[1]: Reached target timers.target. Jul 2 11:44:37.572934 systemd[1]: Listening on iscsid.socket. Jul 2 11:44:37.572940 systemd[1]: Listening on iscsiuio.socket. Jul 2 11:44:37.572946 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 11:44:37.572952 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 11:44:37.572957 systemd[1]: Listening on systemd-journald.socket. Jul 2 11:44:37.572963 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:44:37.572969 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 2 11:44:37.572975 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:44:37.572981 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 2 11:44:37.572986 kernel: clocksource: Switched to clocksource tsc Jul 2 11:44:37.572992 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:44:37.572998 systemd[1]: Reached target sockets.target. Jul 2 11:44:37.573003 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:44:37.573009 systemd[1]: Finished network-cleanup.service. Jul 2 11:44:37.573014 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 11:44:37.573021 systemd[1]: Starting systemd-journald.service... Jul 2 11:44:37.573027 systemd[1]: Starting systemd-modules-load.service... 
Jul 2 11:44:37.573035 systemd-journald[268]: Journal started Jul 2 11:44:37.573059 systemd-journald[268]: Runtime Journal (/run/log/journal/c9d8021bd0614c549a22b302627f4926) is 8.0M, max 636.7M, 628.7M free. Jul 2 11:44:37.575043 systemd-modules-load[269]: Inserted module 'overlay' Jul 2 11:44:37.633553 kernel: audit: type=1334 audit(1719920677.580:2): prog-id=6 op=LOAD Jul 2 11:44:37.633564 systemd[1]: Starting systemd-resolved.service... Jul 2 11:44:37.633572 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 11:44:37.580000 audit: BPF prog-id=6 op=LOAD Jul 2 11:44:37.667517 kernel: Bridge firewalling registered Jul 2 11:44:37.667533 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 11:44:37.682874 systemd-modules-load[269]: Inserted module 'br_netfilter' Jul 2 11:44:37.718551 systemd[1]: Started systemd-journald.service. Jul 2 11:44:37.718564 kernel: SCSI subsystem initialized Jul 2 11:44:37.685109 systemd-resolved[271]: Positive Trust Anchors: Jul 2 11:44:37.835669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 11:44:37.835682 kernel: audit: type=1130 audit(1719920677.738:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.835690 kernel: device-mapper: uevent: version 1.0.3 Jul 2 11:44:37.835696 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 11:44:37.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.685115 systemd-resolved[271]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 11:44:37.886811 kernel: audit: type=1130 audit(1719920677.843:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.685134 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 11:44:37.959626 kernel: audit: type=1130 audit(1719920677.894:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.686722 systemd-resolved[271]: Defaulting to hostname 'linux'. Jul 2 11:44:37.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.739763 systemd[1]: Started systemd-resolved.service. 
Jul 2 11:44:38.063037 kernel: audit: type=1130 audit(1719920677.967:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.063047 kernel: audit: type=1130 audit(1719920678.018:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.837785 systemd-modules-load[269]: Inserted module 'dm_multipath' Jul 2 11:44:38.117389 kernel: audit: type=1130 audit(1719920678.071:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:37.845019 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:44:37.895693 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 11:44:37.967710 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:44:38.018711 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 11:44:38.071711 systemd[1]: Reached target nss-lookup.target. Jul 2 11:44:38.126033 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 11:44:38.145990 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:44:38.146281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 11:44:38.149102 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 11:44:38.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.149752 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:44:38.198632 kernel: audit: type=1130 audit(1719920678.147:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.210774 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 11:44:38.275497 kernel: audit: type=1130 audit(1719920678.210:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.267045 systemd[1]: Starting dracut-cmdline.service... 
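The audit records repeated throughout this log (type=1130 SERVICE_START, etc.) all share one flat key=value layout, with a nested msg='...' payload. A minimal Python sketch (helper name and field set are illustrative, taken from the lines above) that splits such a record into a dict:

```python
import re

def parse_audit(line: str) -> dict:
    """Split an audit record like "type=1130 audit(...): pid=1 ..." into a dict.
    The quoted msg='...' payload is extracted first and parsed recursively so
    its internal spaces don't break the outer key=value split."""
    fields = {}
    m = re.search(r"msg='([^']*)'", line)
    if m:
        fields.update(parse_audit(m.group(1)))
        line = line[:m.start()] + line[m.end():]
    for key, value in re.findall(r'(\w+)=("[^"]*"|\S+)', line):
        fields[key] = value.strip('"')
    return fields

sample = ('type=1130 audit(1719920678.147:9): pid=1 uid=0 auid=4294967295 '
          'ses=4294967295 subj=kernel msg=\'unit=systemd-tmpfiles-setup-dev '
          'comm="systemd" exe="/usr/lib/systemd/systemd" res=success\'')
rec = parse_audit(sample)
print(rec["unit"], rec["res"])
```

This is only a sketch for eyeballing records like the ones above; real audit logs are normally consumed with ausearch/auparse rather than ad-hoc regexes.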
Jul 2 11:44:38.289493 dracut-cmdline[293]: dracut-dracut-053 Jul 2 11:44:38.289493 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 11:44:38.289493 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 11:44:38.420548 kernel: Loading iSCSI transport class v2.0-870. Jul 2 11:44:38.420563 kernel: iscsi: registered transport (tcp) Jul 2 11:44:38.420571 kernel: iscsi: registered transport (qla4xxx) Jul 2 11:44:38.420577 kernel: QLogic iSCSI HBA Driver Jul 2 11:44:38.433200 systemd[1]: Finished dracut-cmdline.service. Jul 2 11:44:38.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:38.442191 systemd[1]: Starting dracut-pre-udev.service... 
Jul 2 11:44:38.497530 kernel: raid6: avx2x4 gen() 48914 MB/s Jul 2 11:44:38.532515 kernel: raid6: avx2x4 xor() 22250 MB/s Jul 2 11:44:38.567487 kernel: raid6: avx2x2 gen() 54918 MB/s Jul 2 11:44:38.602488 kernel: raid6: avx2x2 xor() 32742 MB/s Jul 2 11:44:38.637522 kernel: raid6: avx2x1 gen() 46051 MB/s Jul 2 11:44:38.672486 kernel: raid6: avx2x1 xor() 28449 MB/s Jul 2 11:44:38.706504 kernel: raid6: sse2x4 gen() 21810 MB/s Jul 2 11:44:38.740519 kernel: raid6: sse2x4 xor() 11989 MB/s Jul 2 11:44:38.774519 kernel: raid6: sse2x2 gen() 22110 MB/s Jul 2 11:44:38.808487 kernel: raid6: sse2x2 xor() 13718 MB/s Jul 2 11:44:38.842518 kernel: raid6: sse2x1 gen() 18665 MB/s Jul 2 11:44:38.893932 kernel: raid6: sse2x1 xor() 9114 MB/s Jul 2 11:44:38.893949 kernel: raid6: using algorithm avx2x2 gen() 54918 MB/s Jul 2 11:44:38.893957 kernel: raid6: .... xor() 32742 MB/s, rmw enabled Jul 2 11:44:38.911903 kernel: raid6: using avx2x2 recovery algorithm Jul 2 11:44:38.957482 kernel: xor: automatically using best checksumming function avx Jul 2 11:44:39.056517 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 11:44:39.060916 systemd[1]: Finished dracut-pre-udev.service. Jul 2 11:44:39.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:39.069000 audit: BPF prog-id=7 op=LOAD Jul 2 11:44:39.069000 audit: BPF prog-id=8 op=LOAD Jul 2 11:44:39.070352 systemd[1]: Starting systemd-udevd.service... Jul 2 11:44:39.078295 systemd-udevd[471]: Using default interface naming scheme 'v252'. Jul 2 11:44:39.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:39.083592 systemd[1]: Started systemd-udevd.service. 
Jul 2 11:44:39.123584 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Jul 2 11:44:39.099082 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 11:44:39.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:39.127992 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 11:44:39.140635 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:44:39.193354 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 11:44:39.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:39.220459 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 11:44:39.256699 kernel: ACPI: bus type USB registered Jul 2 11:44:39.256740 kernel: usbcore: registered new interface driver usbfs Jul 2 11:44:39.256751 kernel: usbcore: registered new interface driver hub Jul 2 11:44:39.291710 kernel: usbcore: registered new device driver usb Jul 2 11:44:39.296459 kernel: libata version 3.00 loaded. Jul 2 11:44:39.330610 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 11:44:39.330664 kernel: AES CTR mode by8 optimization enabled Jul 2 11:44:39.367417 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 2 11:44:39.367440 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Jul 2 11:44:39.367525 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jul 2 11:44:39.367533 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:44:39.405457 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:44:39.440190 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 2 11:44:39.440275 kernel: pps pps0: new PPS source ptp0 Jul 2 11:44:39.474864 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 2 11:44:39.474935 kernel: igb 0000:04:00.0: added PHC on eth0 Jul 2 11:44:39.489865 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 2 11:44:39.489930 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:44:39.522245 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 2 11:44:39.522316 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:c8:96 Jul 2 11:44:39.539735 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 2 11:44:39.539829 kernel: ahci 0000:00:17.0: version 3.0 Jul 2 11:44:39.540462 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl RAID mode Jul 2 11:44:39.540568 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 2 11:44:39.573201 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Jul 2 11:44:39.574454 kernel: hub 1-0:1.0: USB hub found Jul 2 11:44:39.574548 kernel: scsi host0: ahci Jul 2 11:44:39.574612 kernel: scsi host1: ahci Jul 2 11:44:39.574679 kernel: scsi host2: ahci Jul 2 11:44:39.574749 kernel: scsi host3: ahci Jul 2 11:44:39.574810 kernel: scsi host4: ahci Jul 2 11:44:39.574865 kernel: scsi host5: ahci Jul 2 11:44:39.574920 kernel: scsi host6: ahci Jul 2 11:44:39.574970 kernel: scsi host7: ahci Jul 2 11:44:39.575021 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 135 Jul 2 11:44:39.575029 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 135 Jul 2 
11:44:39.575035 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 135 Jul 2 11:44:39.575041 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 135 Jul 2 11:44:39.575048 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 135 Jul 2 11:44:39.575055 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 135 Jul 2 11:44:39.575062 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 135 Jul 2 11:44:39.575068 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 135 Jul 2 11:44:39.590929 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 2 11:44:39.623507 kernel: hub 1-0:1.0: 16 ports detected Jul 2 11:44:39.667519 kernel: pps pps1: new PPS source ptp1 Jul 2 11:44:39.671516 kernel: hub 2-0:1.0: USB hub found Jul 2 11:44:39.671598 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 11:44:39.686494 kernel: igb 0000:05:00.0: added PHC on eth1 Jul 2 11:44:39.686569 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:44:39.697983 kernel: hub 2-0:1.0: 10 ports detected Jul 2 11:44:39.698058 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 2 11:44:39.886454 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:44:39.886567 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:c8:97 Jul 2 11:44:39.886648 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Jul 2 11:44:39.886701 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 2 11:44:39.886709 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 2 11:44:39.886715 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:44:39.886939 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 2 11:44:39.886954 kernel: ata8: SATA link down (SStatus 0 
SControl 300) Jul 2 11:44:39.897479 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Jul 2 11:44:39.897555 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 2 11:44:39.904500 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 2 11:44:39.932493 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 2 11:44:39.932522 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 2 11:44:40.075513 kernel: hub 1-14:1.0: USB hub found Jul 2 11:44:40.075613 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 2 11:44:40.105675 kernel: hub 1-14:1.0: 4 ports detected Jul 2 11:44:40.105753 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 2 11:44:40.185454 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 2 11:44:40.185533 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 2 11:44:40.217491 kernel: port_module: 9 callbacks suppressed Jul 2 11:44:40.217508 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Jul 2 11:44:40.217575 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Jul 2 11:44:40.258502 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 11:44:40.273491 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:44:40.317284 kernel: ata2.00: Features: NCQ-prio Jul 2 11:44:40.346663 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 2 11:44:40.346679 kernel: ata1.00: Features: NCQ-prio Jul 2 11:44:40.365501 kernel: ata2.00: configured for UDMA/133 Jul 2 11:44:40.365534 kernel: ata1.00: configured for UDMA/133 Jul 2 11:44:40.378493 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 2 11:44:40.429329 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Jul 2 11:44:40.429469 kernel: usb 1-14.1: 
new low-speed USB device number 3 using xhci_hcd Jul 2 11:44:40.445456 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Jul 2 11:44:40.466519 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:44:40.466536 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:40.481018 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Jul 2 11:44:40.481102 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:44:40.481163 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 2 11:44:40.481222 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 11:44:40.481275 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 11:44:40.481364 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 2 11:44:40.481420 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 11:44:40.481485 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:40.482497 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Jul 2 11:44:40.483493 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 11:44:40.483538 kernel: GPT:9289727 != 937703087 Jul 2 11:44:40.483545 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 11:44:40.483552 kernel: GPT:9289727 != 937703087 Jul 2 11:44:40.483557 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jul 2 11:44:40.483566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:44:40.483573 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:40.483579 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 11:44:40.615477 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 11:44:40.615493 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Jul 2 11:44:40.766534 kernel: sd 1:0:0:0: [sdb] Write Protect is off Jul 2 11:44:40.798824 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 2 11:44:40.798902 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 11:44:40.813540 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:44:40.827817 kernel: ata2.00: Enabling discard_zeroes_data Jul 2 11:44:40.827833 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Jul 2 11:44:40.859455 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Jul 2 11:44:40.873344 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 11:44:40.966622 kernel: usbcore: registered new interface driver usbhid Jul 2 11:44:40.966657 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (528) Jul 2 11:44:40.966686 kernel: usbhid: USB HID core driver Jul 2 11:44:40.966705 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 2 11:44:40.966725 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Jul 2 11:44:40.931203 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 11:44:40.990576 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 11:44:41.013537 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
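The GPT warnings above (9289727 != 937703087) are likely benign on a first boot: the disk image carries a backup GPT header placed for the ~4.4 GiB image, not for the 480 GB disk it was written to, and flatcar.first_boot=detected suggests the OS will relocate it itself. The arithmetic, checked against the "[sda] 937703088 512-byte logical blocks" line:

```python
SECTOR = 512
disk_sectors = 937703088          # from the sd 0:0:0:0 capacity line above
image_backup_lba = 9289727        # where the image's backup GPT header sits
disk_backup_lba = disk_sectors - 1  # where GPT expects it: the last LBA

# Size of the original image implied by its backup-header position.
image_bytes = (image_backup_lba + 1) * SECTOR
print(disk_backup_lba, round(image_bytes / 2**30, 2))  # GiB
```

When a mismatch like this does need manual repair, moving the backup structures to the end of the disk (e.g. with sgdisk or, as the kernel suggests, GNU Parted) is the usual fix.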
Jul 2 11:44:41.093387 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 2 11:44:41.093484 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 2 11:44:41.093493 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 2 11:44:41.076615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 11:44:41.111150 systemd[1]: Starting disk-uuid.service... Jul 2 11:44:41.162576 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:41.162587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:44:41.162594 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:41.162647 disk-uuid[691]: Primary Header is updated. Jul 2 11:44:41.162647 disk-uuid[691]: Secondary Entries is updated. Jul 2 11:44:41.162647 disk-uuid[691]: Secondary Header is updated. Jul 2 11:44:41.217504 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:44:41.217528 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:41.217545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:44:42.203153 kernel: ata1.00: Enabling discard_zeroes_data Jul 2 11:44:42.221415 disk-uuid[692]: The operation has completed successfully. Jul 2 11:44:42.229661 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 11:44:42.257026 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 11:44:42.350824 kernel: audit: type=1130 audit(1719920682.263:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.350839 kernel: audit: type=1131 audit(1719920682.263:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:42.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.257068 systemd[1]: Finished disk-uuid.service. Jul 2 11:44:42.379543 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 11:44:42.271667 systemd[1]: Starting verity-setup.service... Jul 2 11:44:42.454459 systemd[1]: Found device dev-mapper-usr.device. Jul 2 11:44:42.465673 systemd[1]: Mounting sysusr-usr.mount... Jul 2 11:44:42.477083 systemd[1]: Finished verity-setup.service. Jul 2 11:44:42.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.543459 kernel: audit: type=1130 audit(1719920682.490:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.620527 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 11:44:42.620820 systemd[1]: Mounted sysusr-usr.mount. Jul 2 11:44:42.620926 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Jul 2 11:44:42.734531 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:44:42.734570 kernel: BTRFS info (device sda6): using free space tree Jul 2 11:44:42.734578 kernel: BTRFS info (device sda6): has skinny extents Jul 2 11:44:42.734588 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 11:44:42.621324 systemd[1]: Starting ignition-setup.service... Jul 2 11:44:42.721520 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 11:44:42.815689 kernel: audit: type=1130 audit(1719920682.758:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.743871 systemd[1]: Finished ignition-setup.service. Jul 2 11:44:42.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.759842 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 11:44:42.908086 kernel: audit: type=1130 audit(1719920682.824:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.908102 kernel: audit: type=1334 audit(1719920682.884:24): prog-id=9 op=LOAD Jul 2 11:44:42.884000 audit: BPF prog-id=9 op=LOAD Jul 2 11:44:42.825580 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 11:44:42.886652 systemd[1]: Starting systemd-networkd.service... 
Jul 2 11:44:42.922385 systemd-networkd[880]: lo: Link UP Jul 2 11:44:42.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.949002 ignition[870]: Ignition 2.14.0 Jul 2 11:44:43.001704 kernel: audit: type=1130 audit(1719920682.937:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.922388 systemd-networkd[880]: lo: Gained carrier Jul 2 11:44:42.949015 ignition[870]: Stage: fetch-offline Jul 2 11:44:43.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.922687 systemd-networkd[880]: Enumeration completed Jul 2 11:44:43.149284 kernel: audit: type=1130 audit(1719920683.015:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:43.149299 kernel: audit: type=1130 audit(1719920683.075:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:43.149306 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 2 11:44:43.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:42.949085 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:43.184559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Jul 2 11:44:42.922755 systemd[1]: Started systemd-networkd.service. Jul 2 11:44:42.949119 ignition[870]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:42.923423 systemd-networkd[880]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:44:43.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.959405 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:43.232547 iscsid[900]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:44:43.232547 iscsid[900]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 11:44:43.232547 iscsid[900]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 11:44:43.232547 iscsid[900]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 11:44:43.232547 iscsid[900]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 11:44:43.232547 iscsid[900]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 11:44:43.232547 iscsid[900]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 11:44:43.405600 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jul 2 11:44:43.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:43.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:42.938657 systemd[1]: Reached target network.target. Jul 2 11:44:42.959561 ignition[870]: parsed url from cmdline: "" Jul 2 11:44:42.965905 unknown[870]: fetched base config from "system" Jul 2 11:44:42.959573 ignition[870]: no config URL provided Jul 2 11:44:42.965918 unknown[870]: fetched user config from "system" Jul 2 11:44:42.959585 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 11:44:42.997093 systemd[1]: Starting iscsiuio.service... Jul 2 11:44:42.959637 ignition[870]: parsing config with SHA512: 758f8f501b09c91616bddc1a35d42fe00ecbce4bf1dd28f24a752540d1210be706813c98ed4cc8a51f2796f85a4f76b97c2d0f7a9db97612da3538c5d2fe557d Jul 2 11:44:43.008789 systemd[1]: Started iscsiuio.service. Jul 2 11:44:42.966458 ignition[870]: fetch-offline: fetch-offline passed Jul 2 11:44:43.015971 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 11:44:42.966472 ignition[870]: POST message to Packet Timeline Jul 2 11:44:43.075684 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 11:44:42.966491 ignition[870]: POST Status error: resource requires networking Jul 2 11:44:43.076138 systemd[1]: Starting ignition-kargs.service... 
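The iscsid warnings above just mean /etc/iscsi/initiatorname.iscsi is absent in the initramfs, which is harmless unless software iSCSI is actually in use. When it is, the file is a single InitiatorName= line; a hedged sketch that writes one (the IQN is a made-up example, and a /tmp path is used so the sketch runs without root; the real path is /etc/iscsi/initiatorname.iscsi):

```python
from pathlib import Path

# Hypothetical IQN; real deployments generate a unique one, e.g. with
# iscsi-iname(8), instead of hard-coding a value like this.
path = Path("/tmp/initiatorname.iscsi")
path.write_text("InitiatorName=iqn.2024-07.net.example:demo-host\n")
print(path.read_text(), end="")
```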
Jul 2 11:44:42.966590 ignition[870]: Ignition finished successfully Jul 2 11:44:43.150629 systemd-networkd[880]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:44:43.153977 ignition[889]: Ignition 2.14.0 Jul 2 11:44:43.163155 systemd[1]: Starting iscsid.service... Jul 2 11:44:43.153980 ignition[889]: Stage: kargs Jul 2 11:44:43.191700 systemd[1]: Started iscsid.service. Jul 2 11:44:43.154037 ignition[889]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:43.205974 systemd[1]: Starting dracut-initqueue.service... Jul 2 11:44:43.154046 ignition[889]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:43.217700 systemd[1]: Finished dracut-initqueue.service. Jul 2 11:44:43.156945 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:43.240622 systemd[1]: Reached target remote-fs-pre.target. Jul 2 11:44:43.157580 ignition[889]: kargs: kargs passed Jul 2 11:44:43.284632 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:44:43.157583 ignition[889]: POST message to Packet Timeline Jul 2 11:44:43.310644 systemd[1]: Reached target remote-fs.target. Jul 2 11:44:43.157592 ignition[889]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:44:43.326052 systemd[1]: Starting dracut-pre-mount.service... Jul 2 11:44:43.160651 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44151->[::1]:53: read: connection refused Jul 2 11:44:43.337216 systemd[1]: Finished dracut-pre-mount.service. Jul 2 11:44:43.361133 ignition[889]: GET https://metadata.packet.net/metadata: attempt #2 Jul 2 11:44:43.397903 systemd-networkd[880]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 11:44:43.362398 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34593->[::1]:53: read: connection refused Jul 2 11:44:43.426937 systemd-networkd[880]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:44:43.455822 systemd-networkd[880]: enp2s0f1np1: Link UP Jul 2 11:44:43.455975 systemd-networkd[880]: enp2s0f1np1: Gained carrier Jul 2 11:44:43.472713 systemd-networkd[880]: enp2s0f0np0: Link UP Jul 2 11:44:43.472879 systemd-networkd[880]: eno2: Link UP Jul 2 11:44:43.473034 systemd-networkd[880]: eno1: Link UP Jul 2 11:44:43.763056 ignition[889]: GET https://metadata.packet.net/metadata: attempt #3 Jul 2 11:44:43.764099 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52584->[::1]:53: read: connection refused Jul 2 11:44:44.215823 systemd-networkd[880]: enp2s0f0np0: Gained carrier Jul 2 11:44:44.224664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Jul 2 11:44:44.239622 systemd-networkd[880]: enp2s0f0np0: DHCPv4 address 147.75.203.15/31, gateway 147.75.203.14 acquired from 145.40.83.140 Jul 2 11:44:44.564310 ignition[889]: GET https://metadata.packet.net/metadata: attempt #4 Jul 2 11:44:44.565605 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59123->[::1]:53: read: connection refused Jul 2 11:44:44.704671 systemd-networkd[880]: enp2s0f1np1: Gained IPv6LL Jul 2 11:44:45.280671 systemd-networkd[880]: enp2s0f0np0: Gained IPv6LL Jul 2 11:44:46.166493 ignition[889]: GET https://metadata.packet.net/metadata: attempt #5 Jul 2 11:44:46.167660 ignition[889]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39036->[::1]:53: read: connection refused Jul 2 11:44:49.370095 ignition[889]: GET 
https://metadata.packet.net/metadata: attempt #6 Jul 2 11:44:49.410065 ignition[889]: GET result: OK Jul 2 11:44:49.597950 ignition[889]: Ignition finished successfully Jul 2 11:44:49.602036 systemd[1]: Finished ignition-kargs.service. Jul 2 11:44:49.696698 kernel: kauditd_printk_skb: 3 callbacks suppressed Jul 2 11:44:49.696718 kernel: audit: type=1130 audit(1719920689.612:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:49.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:49.622740 ignition[918]: Ignition 2.14.0 Jul 2 11:44:49.615485 systemd[1]: Starting ignition-disks.service... Jul 2 11:44:49.622744 ignition[918]: Stage: disks Jul 2 11:44:49.622817 ignition[918]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:49.622827 ignition[918]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:49.624210 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:49.625788 ignition[918]: disks: disks passed Jul 2 11:44:49.625791 ignition[918]: POST message to Packet Timeline Jul 2 11:44:49.625803 ignition[918]: GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:44:49.650046 ignition[918]: GET result: OK Jul 2 11:44:49.820376 ignition[918]: Ignition finished successfully Jul 2 11:44:49.823069 systemd[1]: Finished ignition-disks.service. Jul 2 11:44:49.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:49.836948 systemd[1]: Reached target initrd-root-device.target. Jul 2 11:44:49.915648 kernel: audit: type=1130 audit(1719920689.835:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:49.901645 systemd[1]: Reached target local-fs-pre.target. Jul 2 11:44:49.901681 systemd[1]: Reached target local-fs.target. Jul 2 11:44:49.915690 systemd[1]: Reached target sysinit.target. Jul 2 11:44:49.929666 systemd[1]: Reached target basic.target. Jul 2 11:44:49.943151 systemd[1]: Starting systemd-fsck-root.service... Jul 2 11:44:49.985155 systemd-fsck[935]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 11:44:49.997981 systemd[1]: Finished systemd-fsck-root.service. Jul 2 11:44:50.088719 kernel: audit: type=1130 audit(1719920690.005:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.088735 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 11:44:50.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.012833 systemd[1]: Mounting sysroot.mount... Jul 2 11:44:50.096071 systemd[1]: Mounted sysroot.mount. Jul 2 11:44:50.109695 systemd[1]: Reached target initrd-root-fs.target. Jul 2 11:44:50.117278 systemd[1]: Mounting sysroot-usr.mount... Jul 2 11:44:50.142218 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 11:44:50.150995 systemd[1]: Starting flatcar-static-network.service... Jul 2 11:44:50.167566 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jul 2 11:44:50.167600 systemd[1]: Reached target ignition-diskful.target. Jul 2 11:44:50.185630 systemd[1]: Mounted sysroot-usr.mount. Jul 2 11:44:50.210605 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 11:44:50.322082 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (948) Jul 2 11:44:50.322101 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:44:50.322108 kernel: BTRFS info (device sda6): using free space tree Jul 2 11:44:50.322115 kernel: BTRFS info (device sda6): has skinny extents Jul 2 11:44:50.221835 systemd[1]: Starting initrd-setup-root.service... Jul 2 11:44:50.407669 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 11:44:50.407681 kernel: audit: type=1130 audit(1719920690.336:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.407717 coreos-metadata[943]: Jul 02 11:44:50.260 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:44:50.407717 coreos-metadata[943]: Jul 02 11:44:50.283 INFO Fetch successful Jul 2 11:44:50.591230 kernel: audit: type=1130 audit(1719920690.415:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.591320 kernel: audit: type=1130 audit(1719920690.478:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:50.591328 kernel: audit: type=1131 audit(1719920690.478:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.591390 coreos-metadata[942]: Jul 02 11:44:50.260 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 2 11:44:50.591390 coreos-metadata[942]: Jul 02 11:44:50.284 INFO Fetch successful Jul 2 11:44:50.591390 coreos-metadata[942]: Jul 02 11:44:50.304 INFO wrote hostname ci-3510.3.5-a-3cadf325ae to /sysroot/etc/hostname Jul 2 11:44:50.637691 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 11:44:50.255806 systemd[1]: Finished initrd-setup-root.service. Jul 2 11:44:50.662642 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Jul 2 11:44:50.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.337727 systemd[1]: Finished flatcar-metadata-hostname.service. 
Jul 2 11:44:50.738645 kernel: audit: type=1130 audit(1719920690.669:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.738706 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 11:44:50.416723 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 2 11:44:50.759683 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 11:44:50.416763 systemd[1]: Finished flatcar-static-network.service. Jul 2 11:44:50.778597 ignition[1016]: INFO : Ignition 2.14.0 Jul 2 11:44:50.778597 ignition[1016]: INFO : Stage: mount Jul 2 11:44:50.778597 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:50.778597 ignition[1016]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:50.778597 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:50.778597 ignition[1016]: INFO : mount: mount passed Jul 2 11:44:50.778597 ignition[1016]: INFO : POST message to Packet Timeline Jul 2 11:44:50.778597 ignition[1016]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:44:50.778597 ignition[1016]: INFO : GET result: OK Jul 2 11:44:50.478698 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 11:44:50.600016 systemd[1]: Starting ignition-mount.service... Jul 2 11:44:50.625995 systemd[1]: Starting sysroot-boot.service... Jul 2 11:44:50.646854 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 11:44:50.647086 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 11:44:50.660375 systemd[1]: Finished sysroot-boot.service. 
Jul 2 11:44:50.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.974191 ignition[1016]: INFO : Ignition finished successfully Jul 2 11:44:50.989651 kernel: audit: type=1130 audit(1719920690.915:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:50.904794 systemd[1]: Finished ignition-mount.service. Jul 2 11:44:50.918382 systemd[1]: Starting ignition-files.service... Jul 2 11:44:51.082543 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1035) Jul 2 11:44:51.082554 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 11:44:51.082562 kernel: BTRFS info (device sda6): using free space tree Jul 2 11:44:51.082569 kernel: BTRFS info (device sda6): has skinny extents Jul 2 11:44:51.082575 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 11:44:50.983299 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 11:44:51.116969 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 11:44:51.139586 ignition[1054]: INFO : Ignition 2.14.0 Jul 2 11:44:51.139586 ignition[1054]: INFO : Stage: files Jul 2 11:44:51.139586 ignition[1054]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:51.139586 ignition[1054]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:51.139586 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:51.139586 ignition[1054]: DEBUG : files: compiled without relabeling support, skipping Jul 2 11:44:51.142426 unknown[1054]: wrote ssh authorized keys file for user: core Jul 2 11:44:51.214647 ignition[1054]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 11:44:51.214647 ignition[1054]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 11:44:51.214647 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 11:44:51.339607 ignition[1054]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 11:44:51.385628 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 11:44:51.402677 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 11:44:51.402677 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 11:44:51.960764 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jul 2 11:44:51.993534 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 11:44:51.993534 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jul 2 11:44:52.041629 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1061) Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] 
writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 11:44:52.041643 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2775941872" Jul 2 11:44:52.041643 ignition[1054]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2775941872": device or resource busy Jul 2 11:44:52.288666 ignition[1054]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2775941872", trying btrfs: device or resource busy Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem2775941872" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2775941872" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem2775941872" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem2775941872" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 11:44:52.288666 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 11:44:52.506825 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Jul 2 11:44:52.616861 ignition[1054]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 11:44:52.616861 ignition[1054]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 11:44:52.616861 ignition[1054]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 11:44:52.616861 ignition[1054]: INFO : files: op(12): [started] processing unit "packet-phone-home.service" Jul 2 11:44:52.616861 ignition[1054]: INFO : files: op(12): [finished] processing unit "packet-phone-home.service" Jul 2 11:44:52.616861 ignition[1054]: INFO : files: op(13): [started] processing unit "containerd.service" Jul 2 11:44:52.616861 
ignition[1054]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(13): [finished] processing unit "containerd.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 11:44:52.714644 
ignition[1054]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 11:44:52.714644 ignition[1054]: INFO : files: files passed Jul 2 11:44:52.714644 ignition[1054]: INFO : POST message to Packet Timeline Jul 2 11:44:52.714644 ignition[1054]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:44:52.714644 ignition[1054]: INFO : GET result: OK Jul 2 11:44:53.024626 kernel: audit: type=1130 audit(1719920692.841:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.838325 systemd[1]: Finished ignition-files.service. Jul 2 11:44:53.038605 ignition[1054]: INFO : Ignition finished successfully Jul 2 11:44:53.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:53.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.848919 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 11:44:53.060750 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 11:44:52.909696 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 11:44:52.910015 systemd[1]: Starting ignition-quench.service... Jul 2 11:44:53.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:52.940726 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 11:44:52.977811 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 11:44:52.977907 systemd[1]: Finished ignition-quench.service. Jul 2 11:44:52.984893 systemd[1]: Reached target ignition-complete.target. Jul 2 11:44:53.008351 systemd[1]: Starting initrd-parse-etc.service... Jul 2 11:44:53.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.038094 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 11:44:53.038151 systemd[1]: Finished initrd-parse-etc.service. Jul 2 11:44:53.038731 systemd[1]: Reached target initrd-fs.target. Jul 2 11:44:53.060636 systemd[1]: Reached target initrd.target. Jul 2 11:44:53.075772 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 11:44:53.077468 systemd[1]: Starting dracut-pre-pivot.service... 
Jul 2 11:44:53.107928 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 11:44:53.124599 systemd[1]: Starting initrd-cleanup.service... Jul 2 11:44:53.139593 systemd[1]: Stopped target nss-lookup.target. Jul 2 11:44:53.152651 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 11:44:53.169686 systemd[1]: Stopped target timers.target. Jul 2 11:44:53.184743 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 11:44:53.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.184928 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 11:44:53.202185 systemd[1]: Stopped target initrd.target. Jul 2 11:44:53.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.216924 systemd[1]: Stopped target basic.target. Jul 2 11:44:53.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.231931 systemd[1]: Stopped target ignition-complete.target. Jul 2 11:44:53.246930 systemd[1]: Stopped target ignition-diskful.target. Jul 2 11:44:53.262915 systemd[1]: Stopped target initrd-root-device.target. Jul 2 11:44:53.277929 systemd[1]: Stopped target remote-fs.target. Jul 2 11:44:53.296919 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 11:44:53.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.312943 systemd[1]: Stopped target sysinit.target. 
Jul 2 11:44:53.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.327939 systemd[1]: Stopped target local-fs.target. Jul 2 11:44:53.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.342929 systemd[1]: Stopped target local-fs-pre.target. Jul 2 11:44:53.568568 ignition[1102]: INFO : Ignition 2.14.0 Jul 2 11:44:53.568568 ignition[1102]: INFO : Stage: umount Jul 2 11:44:53.568568 ignition[1102]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 11:44:53.568568 ignition[1102]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Jul 2 11:44:53.568568 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 2 11:44:53.568568 ignition[1102]: INFO : umount: umount passed Jul 2 11:44:53.568568 ignition[1102]: INFO : POST message to Packet Timeline Jul 2 11:44:53.568568 ignition[1102]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 2 11:44:53.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:53.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.691113 iscsid[900]: iscsid shutting down. Jul 2 11:44:53.358921 systemd[1]: Stopped target swap.target. Jul 2 11:44:53.720695 ignition[1102]: INFO : GET result: OK Jul 2 11:44:53.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.373810 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 11:44:53.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.374144 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 11:44:53.390119 systemd[1]: Stopped target cryptsetup.target. Jul 2 11:44:53.779691 ignition[1102]: INFO : Ignition finished successfully Jul 2 11:44:53.405818 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 11:44:53.406155 systemd[1]: Stopped dracut-initqueue.service. Jul 2 11:44:53.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.421038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 11:44:53.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:53.421375 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 11:44:53.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.437102 systemd[1]: Stopped target paths.target. Jul 2 11:44:53.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.868000 audit: BPF prog-id=6 op=UNLOAD Jul 2 11:44:53.450798 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 11:44:53.454632 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 11:44:53.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.467922 systemd[1]: Stopped target slices.target. Jul 2 11:44:53.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.482914 systemd[1]: Stopped target sockets.target. Jul 2 11:44:53.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.499910 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 11:44:53.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.500264 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 2 11:44:53.516996 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 11:44:53.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.517321 systemd[1]: Stopped ignition-files.service. Jul 2 11:44:54.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.532993 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 11:44:54.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.533329 systemd[1]: Stopped flatcar-metadata-hostname.service. Jul 2 11:44:53.549902 systemd[1]: Stopping ignition-mount.service... Jul 2 11:44:54.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.561668 systemd[1]: Stopping iscsid.service... Jul 2 11:44:53.575599 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 11:44:53.575691 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 11:44:54.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.596390 systemd[1]: Stopping sysroot-boot.service... Jul 2 11:44:54.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:53.606652 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 11:44:54.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.606840 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 11:44:53.638040 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 11:44:54.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.638385 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 11:44:54.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:54.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.665428 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 11:44:53.667211 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 11:44:53.667460 systemd[1]: Stopped iscsid.service. Jul 2 11:44:53.681938 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 11:44:53.682113 systemd[1]: Closed iscsid.socket. Jul 2 11:44:53.697867 systemd[1]: Stopping iscsiuio.service... Jul 2 11:44:53.713149 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 11:44:53.713357 systemd[1]: Stopped iscsiuio.service. Jul 2 11:44:53.728196 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 11:44:53.728387 systemd[1]: Finished initrd-cleanup.service. Jul 2 11:44:53.744431 systemd[1]: Stopped target network.target. 
Jul 2 11:44:53.758759 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 11:44:53.758850 systemd[1]: Closed iscsiuio.socket. Jul 2 11:44:53.772941 systemd[1]: Stopping systemd-networkd.service... Jul 2 11:44:53.783617 systemd-networkd[880]: enp2s0f0np0: DHCPv6 lease lost Jul 2 11:44:53.786846 systemd[1]: Stopping systemd-resolved.service... Jul 2 11:44:54.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:53.791629 systemd-networkd[880]: enp2s0f1np1: DHCPv6 lease lost Jul 2 11:44:54.322000 audit: BPF prog-id=9 op=UNLOAD Jul 2 11:44:53.802290 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 11:44:54.341000 audit: BPF prog-id=8 op=UNLOAD Jul 2 11:44:54.341000 audit: BPF prog-id=7 op=UNLOAD Jul 2 11:44:54.341000 audit: BPF prog-id=5 op=UNLOAD Jul 2 11:44:54.341000 audit: BPF prog-id=4 op=UNLOAD Jul 2 11:44:54.341000 audit: BPF prog-id=3 op=UNLOAD Jul 2 11:44:53.802547 systemd[1]: Stopped systemd-resolved.service. Jul 2 11:44:53.820240 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 11:44:53.820512 systemd[1]: Stopped systemd-networkd.service. Jul 2 11:44:53.836203 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 11:44:53.836399 systemd[1]: Stopped ignition-mount.service. Jul 2 11:44:54.404467 systemd-journald[268]: Received SIGTERM from PID 1 (systemd). Jul 2 11:44:53.855127 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 11:44:53.855323 systemd[1]: Stopped sysroot-boot.service. Jul 2 11:44:53.870097 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 11:44:53.870184 systemd[1]: Closed systemd-networkd.socket. Jul 2 11:44:53.886740 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 11:44:53.886854 systemd[1]: Stopped ignition-disks.service. 
Jul 2 11:44:53.904731 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 11:44:53.904843 systemd[1]: Stopped ignition-kargs.service. Jul 2 11:44:53.920733 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 11:44:53.920852 systemd[1]: Stopped ignition-setup.service. Jul 2 11:44:53.935707 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 11:44:53.935823 systemd[1]: Stopped initrd-setup-root.service. Jul 2 11:44:53.952070 systemd[1]: Stopping network-cleanup.service... Jul 2 11:44:53.968627 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 11:44:53.968761 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 11:44:53.985770 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 11:44:53.985886 systemd[1]: Stopped systemd-sysctl.service. Jul 2 11:44:54.002030 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 11:44:54.002158 systemd[1]: Stopped systemd-modules-load.service. Jul 2 11:44:54.016899 systemd[1]: Stopping systemd-udevd.service... Jul 2 11:44:54.036955 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 11:44:54.038203 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 11:44:54.038499 systemd[1]: Stopped systemd-udevd.service. Jul 2 11:44:54.053997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 11:44:54.054126 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 11:44:54.068726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 11:44:54.068813 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 11:44:54.085676 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 11:44:54.085791 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 11:44:54.103765 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 11:44:54.103878 systemd[1]: Stopped dracut-cmdline.service. 
Jul 2 11:44:54.118753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 11:44:54.118880 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 11:44:54.135301 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 11:44:54.151639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 11:44:54.151785 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 11:44:54.168563 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 11:44:54.168770 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 11:44:54.303954 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 11:44:54.304172 systemd[1]: Stopped network-cleanup.service. Jul 2 11:44:54.315894 systemd[1]: Reached target initrd-switch-root.target. Jul 2 11:44:54.333185 systemd[1]: Starting initrd-switch-root.service... Jul 2 11:44:54.341150 systemd[1]: Switching root. Jul 2 11:44:54.406348 systemd-journald[268]: Journal stopped Jul 2 11:44:58.266033 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 11:44:58.266046 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 11:44:58.266054 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 11:44:58.266060 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 11:44:58.266065 kernel: SELinux: policy capability open_perms=1 Jul 2 11:44:58.266070 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 11:44:58.266076 kernel: SELinux: policy capability always_check_network=0 Jul 2 11:44:58.266082 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 11:44:58.266087 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 11:44:58.266093 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 11:44:58.266099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 11:44:58.266104 kernel: kauditd_printk_skb: 47 callbacks suppressed Jul 2 11:44:58.266109 kernel: audit: type=1403 audit(1719920694.892:88): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 11:44:58.266116 systemd[1]: Successfully loaded SELinux policy in 297.889ms. Jul 2 11:44:58.266124 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.408ms. Jul 2 11:44:58.266131 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 11:44:58.266138 systemd[1]: Detected architecture x86-64. Jul 2 11:44:58.266144 systemd[1]: Detected first boot. Jul 2 11:44:58.266150 systemd[1]: Hostname set to . Jul 2 11:44:58.266156 systemd[1]: Initializing machine ID from random generator. 
Jul 2 11:44:58.266162 kernel: audit: type=1400 audit(1719920695.236:89): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:44:58.266169 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 11:44:58.266175 kernel: audit: type=1400 audit(1719920695.356:90): avc: denied { associate } for pid=1161 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 11:44:58.266182 kernel: audit: type=1300 audit(1719920695.356:90): arch=c000003e syscall=188 success=yes exit=0 a0=c000257672 a1=c00015aaf8 a2=c000162a00 a3=32 items=0 ppid=1144 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:44:58.266188 kernel: audit: type=1327 audit(1719920695.356:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 11:44:58.266194 kernel: audit: type=1400 audit(1719920695.381:91): avc: denied { associate } for pid=1161 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 11:44:58.266201 kernel: audit: type=1300 audit(1719920695.381:91): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000257749 a2=1ed a3=0 items=2 ppid=1144 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:44:58.266207 kernel: audit: type=1307 audit(1719920695.381:91): cwd="/" Jul 2 11:44:58.266213 kernel: audit: type=1302 audit(1719920695.381:91): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.266219 kernel: audit: type=1302 audit(1719920695.381:91): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.266225 systemd[1]: Populated /etc with preset unit settings. Jul 2 11:44:58.266231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:44:58.266238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:44:58.266245 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:44:58.266251 systemd[1]: Queued start job for default target multi-user.target. Jul 2 11:44:58.266258 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 11:44:58.266264 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 11:44:58.266270 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 11:44:58.266277 systemd[1]: Created slice system-getty.slice. Jul 2 11:44:58.266283 systemd[1]: Created slice system-modprobe.slice. Jul 2 11:44:58.266291 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 2 11:44:58.266298 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 11:44:58.266304 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 11:44:58.266311 systemd[1]: Created slice user.slice. Jul 2 11:44:58.266317 systemd[1]: Started systemd-ask-password-console.path. Jul 2 11:44:58.266323 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 11:44:58.266330 systemd[1]: Set up automount boot.automount. Jul 2 11:44:58.266336 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 11:44:58.266342 systemd[1]: Reached target integritysetup.target. Jul 2 11:44:58.266350 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 11:44:58.266356 systemd[1]: Reached target remote-fs.target. Jul 2 11:44:58.266362 systemd[1]: Reached target slices.target. Jul 2 11:44:58.266369 systemd[1]: Reached target swap.target. Jul 2 11:44:58.266375 systemd[1]: Reached target torcx.target. Jul 2 11:44:58.266381 systemd[1]: Reached target veritysetup.target. Jul 2 11:44:58.266387 systemd[1]: Listening on systemd-coredump.socket. Jul 2 11:44:58.266394 systemd[1]: Listening on systemd-initctl.socket. Jul 2 11:44:58.266401 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 11:44:58.266407 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 11:44:58.266414 systemd[1]: Listening on systemd-journald.socket. Jul 2 11:44:58.266420 systemd[1]: Listening on systemd-networkd.socket. Jul 2 11:44:58.266426 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 11:44:58.266433 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 11:44:58.266440 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 11:44:58.266447 systemd[1]: Mounting dev-hugepages.mount... Jul 2 11:44:58.266476 systemd[1]: Mounting dev-mqueue.mount... Jul 2 11:44:58.266499 systemd[1]: Mounting media.mount... Jul 2 11:44:58.266525 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 11:44:58.266555 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 11:44:58.266562 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 11:44:58.266570 systemd[1]: Mounting tmp.mount... Jul 2 11:44:58.266598 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 11:44:58.266622 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:44:58.266628 systemd[1]: Starting kmod-static-nodes.service... Jul 2 11:44:58.266635 systemd[1]: Starting modprobe@configfs.service... Jul 2 11:44:58.266641 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:44:58.266648 systemd[1]: Starting modprobe@drm.service... Jul 2 11:44:58.266654 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:44:58.266661 systemd[1]: Starting modprobe@fuse.service... Jul 2 11:44:58.266668 kernel: fuse: init (API version 7.34) Jul 2 11:44:58.266674 systemd[1]: Starting modprobe@loop.service... Jul 2 11:44:58.266681 kernel: loop: module loaded Jul 2 11:44:58.266687 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 11:44:58.266694 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 11:44:58.266700 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 11:44:58.266706 systemd[1]: Starting systemd-journald.service... Jul 2 11:44:58.266713 systemd[1]: Starting systemd-modules-load.service... Jul 2 11:44:58.266722 systemd-journald[1295]: Journal started Jul 2 11:44:58.266748 systemd-journald[1295]: Runtime Journal (/run/log/journal/4f77406107dd49ba85d0695b76ffc675) is 8.0M, max 636.7M, 628.7M free. 
Jul 2 11:44:57.630000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 11:44:57.630000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 11:44:58.262000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 11:44:58.262000 audit[1295]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd3a07e100 a2=4000 a3=7ffd3a07e19c items=0 ppid=1 pid=1295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:44:58.262000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 11:44:58.298629 systemd[1]: Starting systemd-network-generator.service... Jul 2 11:44:58.321524 systemd[1]: Starting systemd-remount-fs.service... Jul 2 11:44:58.343505 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 11:44:58.378497 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:44:58.393607 systemd[1]: Started systemd-journald.service. Jul 2 11:44:58.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.402192 systemd[1]: Mounted dev-hugepages.mount. Jul 2 11:44:58.409683 systemd[1]: Mounted dev-mqueue.mount. Jul 2 11:44:58.416680 systemd[1]: Mounted media.mount. Jul 2 11:44:58.423678 systemd[1]: Mounted sys-kernel-debug.mount. 
Jul 2 11:44:58.431654 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 11:44:58.439644 systemd[1]: Mounted tmp.mount. Jul 2 11:44:58.446783 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 11:44:58.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.454836 systemd[1]: Finished kmod-static-nodes.service. Jul 2 11:44:58.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.462809 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 11:44:58.462979 systemd[1]: Finished modprobe@configfs.service. Jul 2 11:44:58.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.471883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:44:58.472058 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:44:58.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:44:58.482013 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 11:44:58.482257 systemd[1]: Finished modprobe@drm.service. Jul 2 11:44:58.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.492260 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:44:58.492650 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:44:58.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.501269 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 11:44:58.501689 systemd[1]: Finished modprobe@fuse.service. Jul 2 11:44:58.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.510310 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 2 11:44:58.510741 systemd[1]: Finished modprobe@loop.service. Jul 2 11:44:58.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.518810 systemd[1]: Finished systemd-modules-load.service. Jul 2 11:44:58.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.527773 systemd[1]: Finished systemd-network-generator.service. Jul 2 11:44:58.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.536788 systemd[1]: Finished systemd-remount-fs.service. Jul 2 11:44:58.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.544798 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 11:44:58.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.552866 systemd[1]: Reached target network-pre.target. Jul 2 11:44:58.562802 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Jul 2 11:44:58.572103 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 11:44:58.578711 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 11:44:58.582423 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 11:44:58.593198 systemd[1]: Starting systemd-journal-flush.service... Jul 2 11:44:58.601729 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:44:58.604354 systemd[1]: Starting systemd-random-seed.service... Jul 2 11:44:58.605949 systemd-journald[1295]: Time spent on flushing to /var/log/journal/4f77406107dd49ba85d0695b76ffc675 is 14.323ms for 1552 entries. Jul 2 11:44:58.605949 systemd-journald[1295]: System Journal (/var/log/journal/4f77406107dd49ba85d0695b76ffc675) is 8.0M, max 195.6M, 187.6M free. Jul 2 11:44:58.648112 systemd-journald[1295]: Received client request to flush runtime journal. Jul 2 11:44:58.619613 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:44:58.620149 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:44:58.632087 systemd[1]: Starting systemd-sysusers.service... Jul 2 11:44:58.639122 systemd[1]: Starting systemd-udev-settle.service... Jul 2 11:44:58.647708 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 11:44:58.655707 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 11:44:58.664755 systemd[1]: Finished systemd-journal-flush.service. Jul 2 11:44:58.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.673739 systemd[1]: Finished systemd-random-seed.service. 
Jul 2 11:44:58.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.682706 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:44:58.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.690694 systemd[1]: Finished systemd-sysusers.service. Jul 2 11:44:58.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.699643 systemd[1]: Reached target first-boot-complete.target. Jul 2 11:44:58.708256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 11:44:58.716914 udevadm[1321]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 11:44:58.726974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 11:44:58.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.905919 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 11:44:58.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.916354 systemd[1]: Starting systemd-udevd.service... Jul 2 11:44:58.928758 systemd-udevd[1330]: Using default interface naming scheme 'v252'. 
Jul 2 11:44:58.944062 systemd[1]: Started systemd-udevd.service. Jul 2 11:44:58.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:58.956065 systemd[1]: Found device dev-ttyS1.device. Jul 2 11:44:58.990893 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 2 11:44:58.990962 kernel: ACPI: button: Sleep Button [SLPB] Jul 2 11:44:58.990979 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1398) Jul 2 11:44:58.995089 systemd[1]: Starting systemd-networkd.service... Jul 2 11:44:59.011492 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 11:44:59.029454 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 11:44:59.046496 kernel: ACPI: button: Power Button [PWRF] Jul 2 11:44:59.060439 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Jul 2 11:44:59.060554 kernel: IPMI message handler: version 39.2 Jul 2 11:44:59.065321 systemd[1]: Starting systemd-userdbd.service... 
Jul 2 11:44:58.990000 audit[1345]: AVC avc: denied { confidentiality } for pid=1345 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 11:44:58.990000 audit[1345]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7efda43a3010 a1=4d8bc a2=7efda604dbc5 a3=5 items=42 ppid=1330 pid=1345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:44:58.990000 audit: CWD cwd="/" Jul 2 11:44:58.990000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=1 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=2 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=3 name=(null) inode=24937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=4 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=5 name=(null) inode=24938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=6 name=(null) inode=24936 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=7 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=8 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=9 name=(null) inode=24940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=10 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=11 name=(null) inode=24941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=12 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=13 name=(null) inode=24942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=14 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=15 name=(null) inode=24943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=16 name=(null) inode=24939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=17 name=(null) inode=24944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=18 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=19 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=20 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=21 name=(null) inode=24946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=22 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=23 name=(null) inode=24947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=24 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=25 name=(null) inode=24948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=26 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=27 name=(null) inode=24949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=28 name=(null) inode=24945 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=29 name=(null) inode=24950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=30 name=(null) inode=24936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=31 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=32 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=33 name=(null) inode=24952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 11:44:58.990000 audit: PATH item=34 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=35 name=(null) inode=24953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=36 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=37 name=(null) inode=24954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=38 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=39 name=(null) inode=24955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=40 name=(null) inode=24951 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PATH item=41 name=(null) inode=24956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 11:44:58.990000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 11:44:59.106254 systemd[1]: Started systemd-userdbd.service. 
Jul 2 11:44:59.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.138465 kernel: ipmi device interface Jul 2 11:44:59.138525 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 2 11:44:59.155460 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 2 11:44:59.155594 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 2 11:44:59.188292 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 2 11:44:59.205455 kernel: ipmi_si: IPMI System Interface driver Jul 2 11:44:59.236788 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Jul 2 11:44:59.236889 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 2 11:44:59.272420 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 2 11:44:59.272455 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 2 11:44:59.325779 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 2 11:44:59.325893 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 2 11:44:59.343458 kernel: iTCO_vendor_support: vendor-support=0 Jul 2 11:44:59.387339 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 2 11:44:59.387466 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 2 11:44:59.387489 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 2 11:44:59.387501 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Jul 2 11:44:59.452458 kernel: intel_rapl_common: Found RAPL domain package Jul 2 11:44:59.452511 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Jul 2 11:44:59.452603 kernel: intel_rapl_common: Found RAPL domain core Jul 2 11:44:59.452618 kernel: intel_rapl_common: Found RAPL domain uncore Jul 2 11:44:59.452628 kernel: intel_rapl_common: Found RAPL domain dram Jul 2 11:44:59.468442 systemd-networkd[1408]: bond0: netdev ready Jul 2 11:44:59.470954 systemd-networkd[1408]: lo: Link UP Jul 2 11:44:59.470956 systemd-networkd[1408]: lo: Gained carrier Jul 2 11:44:59.471478 systemd-networkd[1408]: Enumeration completed Jul 2 11:44:59.471553 systemd[1]: Started systemd-networkd.service. Jul 2 11:44:59.471788 systemd-networkd[1408]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 2 11:44:59.478038 systemd-networkd[1408]: enp2s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:65:fd:df.network. Jul 2 11:44:59.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.584492 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Jul 2 11:44:59.755508 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 2 11:44:59.778456 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 2 11:44:59.781795 systemd[1]: Finished systemd-udev-settle.service. Jul 2 11:44:59.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.791354 systemd[1]: Starting lvm2-activation-early.service... Jul 2 11:44:59.807589 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 11:44:59.838003 systemd[1]: Finished lvm2-activation-early.service. 
Jul 2 11:44:59.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.847787 systemd[1]: Reached target cryptsetup.target. Jul 2 11:44:59.857930 systemd[1]: Starting lvm2-activation.service... Jul 2 11:44:59.863363 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 11:44:59.900594 systemd[1]: Finished lvm2-activation.service. Jul 2 11:44:59.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.909850 systemd[1]: Reached target local-fs-pre.target. Jul 2 11:44:59.929090 kernel: kauditd_printk_skb: 82 callbacks suppressed Jul 2 11:44:59.929178 kernel: audit: type=1130 audit(1719920699.908:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:44:59.985515 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 11:44:59.985531 systemd[1]: Reached target local-fs.target. Jul 2 11:44:59.994499 systemd[1]: Reached target machines.target. Jul 2 11:45:00.003211 systemd[1]: Starting ldconfig.service... Jul 2 11:45:00.010324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:45:00.010355 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:45:00.010990 systemd[1]: Starting systemd-boot-update.service... 
Jul 2 11:45:00.018018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 11:45:00.026865 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 11:45:00.027545 systemd[1]: Starting systemd-sysext.service... Jul 2 11:45:00.027768 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1440 (bootctl) Jul 2 11:45:00.028495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 11:45:00.046347 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 11:45:00.064455 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 2 11:45:00.065536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 11:45:00.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.065739 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 11:45:00.065877 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 11:45:00.129515 kernel: audit: type=1130 audit(1719920700.064:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.129540 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Jul 2 11:45:00.129556 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:45:00.176816 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 11:45:00.177174 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 11:45:00.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 11:45:00.198209 systemd-networkd[1408]: enp2s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:65:fd:de.network. Jul 2 11:45:00.198456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Jul 2 11:45:00.198514 kernel: audit: type=1130 audit(1719920700.196:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.233457 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 11:45:00.233500 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:45:00.263116 systemd-fsck[1450]: fsck.fat 4.2 (2021-01-31) Jul 2 11:45:00.263116 systemd-fsck[1450]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 11:45:00.269064 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 11:45:00.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.316492 systemd[1]: Mounting boot.mount... Jul 2 11:45:00.381509 kernel: audit: type=1130 audit(1719920700.313:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.381556 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Jul 2 11:45:00.400454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 11:45:00.413550 systemd[1]: Mounted boot.mount. 
Jul 2 11:45:00.421484 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Jul 2 11:45:00.426766 ldconfig[1439]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 11:45:00.444627 systemd-networkd[1408]: bond0: Link UP Jul 2 11:45:00.444821 systemd-networkd[1408]: enp2s0f1np1: Link UP Jul 2 11:45:00.444956 systemd-networkd[1408]: enp2s0f1np1: Gained carrier Jul 2 11:45:00.445914 systemd-networkd[1408]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:65:fd:de.network. Jul 2 11:45:00.456463 systemd[1]: Finished ldconfig.service. Jul 2 11:45:00.470491 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 11:45:00.470524 kernel: bond0: active interface up! Jul 2 11:45:00.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.504677 systemd[1]: Finished systemd-boot-update.service. Jul 2 11:45:00.511458 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Jul 2 11:45:00.511509 kernel: audit: type=1130 audit(1719920700.502:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.561456 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 2 11:45:00.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:45:00.608455 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 11:45:00.608487 kernel: audit: type=1130 audit(1719920700.597:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.661454 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.663486 (sd-sysext)[1463]: Using extensions 'kubernetes'. Jul 2 11:45:00.663668 (sd-sysext)[1463]: Merged extensions into '/usr'. Jul 2 11:45:00.694090 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:00.694845 systemd[1]: Mounting usr-share-oem.mount... Jul 2 11:45:00.712499 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.728806 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:45:00.730374 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:45:00.740497 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.757064 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:45:00.766453 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.783051 systemd[1]: Starting modprobe@loop.service... Jul 2 11:45:00.793513 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.808582 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:45:00.808653 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 11:45:00.808718 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:00.810489 systemd[1]: Mounted usr-share-oem.mount. Jul 2 11:45:00.820528 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.835665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:45:00.835741 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:45:00.846547 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.863205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:45:00.863406 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:45:00.872529 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.872552 kernel: audit: type=1130 audit(1719920700.861:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:00.922516 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:00.922539 kernel: audit: type=1131 audit(1719920700.861:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:45:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.013730 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:45:01.013808 systemd[1]: Finished modprobe@loop.service. Jul 2 11:45:01.024454 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.024483 kernel: audit: type=1130 audit(1719920701.012:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.103075 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.103114 kernel: audit: type=1131 audit(1719920701.012:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.158531 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 11:45:01.199760 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:45:01.199818 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 11:45:01.200288 systemd[1]: Finished systemd-sysext.service. Jul 2 11:45:01.210455 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.227228 systemd[1]: Starting ensure-sysext.service... Jul 2 11:45:01.237464 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.253053 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 11:45:01.264542 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.270972 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 11:45:01.272067 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 11:45:01.273992 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 11:45:01.282817 systemd[1]: Reloading. 
Jul 2 11:45:01.290455 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.290641 systemd-networkd[1408]: bond0: Gained carrier Jul 2 11:45:01.290853 systemd-networkd[1408]: enp2s0f0np0: Link UP Jul 2 11:45:01.291048 systemd-networkd[1408]: enp2s0f0np0: Gained carrier Jul 2 11:45:01.302957 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-07-02T11:45:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:45:01.302978 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-07-02T11:45:01Z" level=info msg="torcx already run" Jul 2 11:45:01.334829 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Jul 2 11:45:01.334884 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Jul 2 11:45:01.340930 systemd-networkd[1408]: enp2s0f1np1: Link DOWN Jul 2 11:45:01.340935 systemd-networkd[1408]: enp2s0f1np1: Lost carrier Jul 2 11:45:01.358913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:45:01.358920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:45:01.369980 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:45:01.414172 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Jul 2 11:45:01.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 11:45:01.424178 systemd[1]: Starting audit-rules.service... Jul 2 11:45:01.431088 systemd[1]: Starting clean-ca-certificates.service... Jul 2 11:45:01.440177 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 11:45:01.438000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 11:45:01.438000 audit[1584]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd0cd42870 a2=420 a3=0 items=0 ppid=1567 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 11:45:01.438000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 11:45:01.440439 augenrules[1584]: No rules Jul 2 11:45:01.450380 systemd[1]: Starting systemd-resolved.service... Jul 2 11:45:01.458453 systemd[1]: Starting systemd-timesyncd.service... Jul 2 11:45:01.466146 systemd[1]: Starting systemd-update-utmp.service... Jul 2 11:45:01.472975 systemd[1]: Finished audit-rules.service. Jul 2 11:45:01.479765 systemd[1]: Finished clean-ca-certificates.service. Jul 2 11:45:01.487786 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 11:45:01.511067 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:01.511473 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:45:01.512918 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 11:45:01.517465 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Jul 2 11:45:01.532285 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:45:01.542496 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Jul 2 11:45:01.543678 systemd-networkd[1408]: enp2s0f1np1: Link UP Jul 2 11:45:01.543683 systemd-networkd[1408]: enp2s0f1np1: Gained carrier Jul 2 11:45:01.556477 systemd[1]: Starting modprobe@loop.service... Jul 2 11:45:01.565471 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Jul 2 11:45:01.580579 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 11:45:01.580681 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:45:01.581709 systemd[1]: Starting systemd-update-done.service... Jul 2 11:45:01.588465 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Jul 2 11:45:01.594497 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 11:45:01.594573 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:01.595366 systemd[1]: Finished systemd-update-utmp.service. Jul 2 11:45:01.603740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:45:01.603822 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:45:01.605405 systemd-resolved[1591]: Positive Trust Anchors: Jul 2 11:45:01.605412 systemd-resolved[1591]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 11:45:01.605432 systemd-resolved[1591]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 11:45:01.609529 systemd-resolved[1591]: Using system hostname 'ci-3510.3.5-a-3cadf325ae'. Jul 2 11:45:01.611666 systemd[1]: Started systemd-resolved.service. Jul 2 11:45:01.619707 systemd[1]: Started systemd-timesyncd.service. Jul 2 11:45:01.627833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:45:01.627916 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:45:01.636740 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:45:01.636824 systemd[1]: Finished modprobe@loop.service. Jul 2 11:45:01.644902 systemd[1]: Finished systemd-update-done.service. Jul 2 11:45:01.655019 systemd[1]: Reached target network.target. Jul 2 11:45:01.663623 systemd[1]: Reached target nss-lookup.target. Jul 2 11:45:01.671653 systemd[1]: Reached target time-set.target. Jul 2 11:45:01.679646 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:01.679972 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 11:45:01.681267 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 11:45:01.689111 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 11:45:01.697559 systemd[1]: Starting modprobe@loop.service... Jul 2 11:45:01.704679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 11:45:01.704913 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 11:45:01.705122 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 11:45:01.705281 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 11:45:01.707441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 11:45:01.707727 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 11:45:01.716337 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 11:45:01.716617 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 11:45:01.725583 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 11:45:01.725954 systemd[1]: Finished modprobe@loop.service. Jul 2 11:45:01.735647 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 11:45:01.736019 systemd[1]: Reached target sysinit.target. Jul 2 11:45:01.746119 systemd[1]: Started motdgen.path. Jul 2 11:45:01.753075 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 11:45:01.763324 systemd[1]: Started logrotate.timer. Jul 2 11:45:01.771271 systemd[1]: Started mdadm.timer. Jul 2 11:45:01.778983 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 11:45:01.787847 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 11:45:01.788176 systemd[1]: Reached target paths.target. Jul 2 11:45:01.795937 systemd[1]: Reached target timers.target. Jul 2 11:45:01.803547 systemd[1]: Listening on dbus.socket. Jul 2 11:45:01.813733 systemd[1]: Starting docker.socket... 
Jul 2 11:45:01.823407 systemd[1]: Listening on sshd.socket.
Jul 2 11:45:01.831062 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 11:45:01.831395 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 11:45:01.835029 systemd[1]: Listening on docker.socket.
Jul 2 11:45:01.846133 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 11:45:01.846466 systemd[1]: Reached target sockets.target.
Jul 2 11:45:01.854928 systemd[1]: Reached target basic.target.
Jul 2 11:45:01.862099 systemd[1]: System is tainted: cgroupsv1
Jul 2 11:45:01.862227 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 11:45:01.862537 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 11:45:01.865572 systemd[1]: Starting containerd.service...
Jul 2 11:45:01.875315 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 2 11:45:01.886625 systemd[1]: Starting coreos-metadata.service...
Jul 2 11:45:01.894147 systemd[1]: Starting dbus.service...
Jul 2 11:45:01.900273 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 11:45:01.904956 jq[1624]: false
Jul 2 11:45:01.908201 systemd[1]: Starting extend-filesystems.service...
Jul 2 11:45:01.910834 dbus-daemon[1623]: [system] SELinux support is enabled
Jul 2 11:45:01.914582 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 11:45:01.915340 systemd[1]: Starting modprobe@drm.service...
Jul 2 11:45:01.915959 extend-filesystems[1626]: Found loop1
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda1
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda2
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda3
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found usr
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda4
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda6
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda7
Jul 2 11:45:01.937540 extend-filesystems[1626]: Found sda9
Jul 2 11:45:01.937540 extend-filesystems[1626]: Checking size of /dev/sda9
Jul 2 11:45:01.937540 extend-filesystems[1626]: Resized partition /dev/sda9
Jul 2 11:45:02.057547 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Jul 2 11:45:02.057582 coreos-metadata[1617]: Jul 02 11:45:01.918 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 11:45:02.057730 coreos-metadata[1620]: Jul 02 11:45:01.921 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 2 11:45:01.923400 systemd[1]: Starting motdgen.service...
Jul 2 11:45:02.057900 extend-filesystems[1636]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 11:45:01.957308 systemd[1]: Starting prepare-helm.service...
Jul 2 11:45:01.976175 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 11:45:01.996352 systemd[1]: Starting sshd-keygen.service...
Jul 2 11:45:02.011936 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 11:45:02.026568 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 11:45:02.028058 systemd[1]: Starting tcsd.service...
Jul 2 11:45:02.035035 systemd[1]: Starting update-engine.service...
Jul 2 11:45:02.080957 jq[1668]: true
Jul 2 11:45:02.045568 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 11:45:02.072625 systemd[1]: Started dbus.service.
Jul 2 11:45:02.089686 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 11:45:02.089815 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 11:45:02.090068 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 11:45:02.090155 systemd[1]: Finished modprobe@drm.service.
Jul 2 11:45:02.090934 update_engine[1667]: I0702 11:45:02.090348 1667 main.cc:92] Flatcar Update Engine starting
Jul 2 11:45:02.093999 update_engine[1667]: I0702 11:45:02.093961 1667 update_check_scheduler.cc:74] Next update check in 7m45s
Jul 2 11:45:02.098776 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 11:45:02.098894 systemd[1]: Finished motdgen.service.
Jul 2 11:45:02.106108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 11:45:02.106225 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 11:45:02.117280 jq[1675]: true
Jul 2 11:45:02.118021 systemd[1]: Finished ensure-sysext.service.
Jul 2 11:45:02.126822 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Jul 2 11:45:02.126952 systemd[1]: Condition check resulted in tcsd.service being skipped.
Jul 2 11:45:02.127320 env[1676]: time="2024-07-02T11:45:02.127298512Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 11:45:02.127725 tar[1673]: linux-amd64/helm
Jul 2 11:45:02.135518 env[1676]: time="2024-07-02T11:45:02.135496454Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 11:45:02.135583 env[1676]: time="2024-07-02T11:45:02.135574214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.135727 systemd[1]: Started update-engine.service.
Jul 2 11:45:02.136149 env[1676]: time="2024-07-02T11:45:02.136107386Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136149 env[1676]: time="2024-07-02T11:45:02.136120821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136267 env[1676]: time="2024-07-02T11:45:02.136257247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136290 env[1676]: time="2024-07-02T11:45:02.136268145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136290 env[1676]: time="2024-07-02T11:45:02.136275643Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 11:45:02.136290 env[1676]: time="2024-07-02T11:45:02.136281044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136335 env[1676]: time="2024-07-02T11:45:02.136325139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136516 env[1676]: time="2024-07-02T11:45:02.136466740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136638 env[1676]: time="2024-07-02T11:45:02.136603420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 11:45:02.136638 env[1676]: time="2024-07-02T11:45:02.136614008Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 11:45:02.136683 env[1676]: time="2024-07-02T11:45:02.136640948Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 11:45:02.136683 env[1676]: time="2024-07-02T11:45:02.136649391Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 11:45:02.147376 systemd[1]: Started locksmithd.service.
Jul 2 11:45:02.152044 env[1676]: time="2024-07-02T11:45:02.152016324Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 11:45:02.152044 env[1676]: time="2024-07-02T11:45:02.152037133Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 11:45:02.152101 env[1676]: time="2024-07-02T11:45:02.152047239Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 11:45:02.152101 env[1676]: time="2024-07-02T11:45:02.152066972Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.152101 env[1676]: time="2024-07-02T11:45:02.152077054Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.152101 env[1676]: time="2024-07-02T11:45:02.152092068Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152100228Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152113737Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152128911Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152138702Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152145795Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.153806 env[1676]: time="2024-07-02T11:45:02.152152355Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 11:45:02.154441 env[1676]: time="2024-07-02T11:45:02.154428754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 11:45:02.154526 env[1676]: time="2024-07-02T11:45:02.154487500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 11:45:02.154531 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 11:45:02.154547 systemd[1]: Reached target system-config.target.
Jul 2 11:45:02.154703 env[1676]: time="2024-07-02T11:45:02.154665471Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 11:45:02.154703 env[1676]: time="2024-07-02T11:45:02.154681357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154703 env[1676]: time="2024-07-02T11:45:02.154690215Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154715780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154723363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154730158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154736375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154742620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154749368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154755590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154761729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154770 env[1676]: time="2024-07-02T11:45:02.154769125Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154833030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154842167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154848571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154855456Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154863196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154869899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154879581Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 11:45:02.154905 env[1676]: time="2024-07-02T11:45:02.154899989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 11:45:02.155066 env[1676]: time="2024-07-02T11:45:02.155013318Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 11:45:02.155066 env[1676]: time="2024-07-02T11:45:02.155046150Z" level=info msg="Connect containerd service"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155066260Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155327098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155421514Z" level=info msg="Start subscribing containerd event"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155460279Z" level=info msg="Start recovering state"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155598933Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155636735Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155676587Z" level=info msg="containerd successfully booted in 0.028764s"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155918062Z" level=info msg="Start event monitor"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155932010Z" level=info msg="Start snapshots syncer"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155938398Z" level=info msg="Start cni network conf syncer for default"
Jul 2 11:45:02.157634 env[1676]: time="2024-07-02T11:45:02.155942871Z" level=info msg="Start streaming server"
Jul 2 11:45:02.159746 bash[1711]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 11:45:02.163661 systemd[1]: Starting systemd-logind.service...
Jul 2 11:45:02.170557 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 11:45:02.170574 systemd[1]: Reached target user-config.target.
Jul 2 11:45:02.178656 systemd[1]: Started containerd.service.
Jul 2 11:45:02.185725 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 11:45:02.190648 systemd-logind[1717]: Watching system buttons on /dev/input/event3 (Power Button)
Jul 2 11:45:02.190659 systemd-logind[1717]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jul 2 11:45:02.190669 systemd-logind[1717]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Jul 2 11:45:02.190774 systemd-logind[1717]: New seat seat0.
Jul 2 11:45:02.195771 systemd[1]: Started systemd-logind.service.
Jul 2 11:45:02.206934 locksmithd[1713]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 11:45:02.304533 systemd-networkd[1408]: bond0: Gained IPv6LL
Jul 2 11:45:02.377884 tar[1673]: linux-amd64/LICENSE
Jul 2 11:45:02.377974 tar[1673]: linux-amd64/README.md
Jul 2 11:45:02.380533 systemd[1]: Finished prepare-helm.service.
Jul 2 11:45:02.450455 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Jul 2 11:45:02.479564 extend-filesystems[1636]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 2 11:45:02.479564 extend-filesystems[1636]: old_desc_blocks = 1, new_desc_blocks = 56
Jul 2 11:45:02.479564 extend-filesystems[1636]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Jul 2 11:45:02.518526 extend-filesystems[1626]: Resized filesystem in /dev/sda9
Jul 2 11:45:02.518526 extend-filesystems[1626]: Found sdb
Jul 2 11:45:02.479995 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 11:45:02.480115 systemd[1]: Finished extend-filesystems.service.
Jul 2 11:45:02.753558 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 11:45:02.763756 systemd[1]: Reached target network-online.target.
Jul 2 11:45:02.772575 systemd[1]: Starting kubelet.service...
Jul 2 11:45:03.045822 sshd_keygen[1664]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 11:45:03.057724 systemd[1]: Finished sshd-keygen.service.
Jul 2 11:45:03.065651 systemd[1]: Starting issuegen.service...
Jul 2 11:45:03.072777 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 11:45:03.072886 systemd[1]: Finished issuegen.service.
Jul 2 11:45:03.080383 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 11:45:03.088802 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 11:45:03.097300 systemd[1]: Started getty@tty1.service.
Jul 2 11:45:03.105261 systemd[1]: Started serial-getty@ttyS1.service.
Jul 2 11:45:03.113655 systemd[1]: Reached target getty.target.
Jul 2 11:45:03.411444 systemd[1]: Started kubelet.service.
Jul 2 11:45:03.636636 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Jul 2 11:45:03.722456 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:1
Jul 2 11:45:03.787490 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Jul 2 11:45:03.980652 kubelet[1756]: E0702 11:45:03.980560 1756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 11:45:03.981769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 11:45:03.981853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 11:45:07.745389 coreos-metadata[1620]: Jul 02 11:45:07.745 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Jul 2 11:45:07.746211 coreos-metadata[1617]: Jul 02 11:45:07.745 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Jul 2 11:45:07.746062 systemd-resolved[1591]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 147.75.207.208.
Jul 2 11:45:08.126010 login[1750]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 11:45:08.133303 login[1749]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 11:45:08.160071 systemd-logind[1717]: New session 1 of user core.
Jul 2 11:45:08.160698 systemd[1]: Created slice user-500.slice.
Jul 2 11:45:08.161172 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 11:45:08.162548 systemd-logind[1717]: New session 2 of user core.
Jul 2 11:45:08.166901 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 11:45:08.167612 systemd[1]: Starting user@500.service...
Jul 2 11:45:08.169918 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:08.236227 systemd[1778]: Queued start job for default target default.target.
Jul 2 11:45:08.236330 systemd[1778]: Reached target paths.target.
Jul 2 11:45:08.236341 systemd[1778]: Reached target sockets.target.
Jul 2 11:45:08.236349 systemd[1778]: Reached target timers.target.
Jul 2 11:45:08.236356 systemd[1778]: Reached target basic.target.
Jul 2 11:45:08.236375 systemd[1778]: Reached target default.target.
Jul 2 11:45:08.236388 systemd[1778]: Startup finished in 63ms.
Jul 2 11:45:08.236453 systemd[1]: Started user@500.service.
Jul 2 11:45:08.237034 systemd[1]: Started session-1.scope.
Jul 2 11:45:08.237373 systemd[1]: Started session-2.scope.
Jul 2 11:45:08.559487 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2
Jul 2 11:45:08.559625 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Jul 2 11:45:08.745561 coreos-metadata[1620]: Jul 02 11:45:08.745 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Jul 2 11:45:08.746385 coreos-metadata[1617]: Jul 02 11:45:08.745 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Jul 2 11:45:08.794319 coreos-metadata[1617]: Jul 02 11:45:08.794 INFO Fetch successful
Jul 2 11:45:08.795682 coreos-metadata[1620]: Jul 02 11:45:08.795 INFO Fetch successful
Jul 2 11:45:08.823649 systemd[1]: Finished coreos-metadata.service.
Jul 2 11:45:08.823868 unknown[1617]: wrote ssh authorized keys file for user: core
Jul 2 11:45:08.824849 systemd[1]: Started packet-phone-home.service.
Jul 2 11:45:08.830345 curl[1805]: % Total % Received % Xferd Average Speed Time Time Time Current
Jul 2 11:45:08.830345 curl[1805]: Dload Upload Total Spent Left Speed
Jul 2 11:45:08.836327 update-ssh-keys[1807]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 11:45:08.836612 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 11:45:08.836819 systemd[1]: Reached target multi-user.target.
Jul 2 11:45:08.837606 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 11:45:08.841513 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 11:45:08.841631 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 11:45:08.841783 systemd[1]: Startup finished in 19.754s (kernel) + 14.266s (userspace) = 34.021s.
Jul 2 11:45:08.920403 systemd-timesyncd[1593]: Contacted time server 104.131.139.195:123 (1.flatcar.pool.ntp.org).
Jul 2 11:45:08.920600 systemd-timesyncd[1593]: Initial clock synchronization to Tue 2024-07-02 11:45:09.215320 UTC.
Jul 2 11:45:08.945571 systemd[1]: Created slice system-sshd.slice.
Jul 2 11:45:08.946148 systemd[1]: Started sshd@0-147.75.203.15:22-139.178.68.195:34990.service.
Jul 2 11:45:08.987620 sshd[1812]: Accepted publickey for core from 139.178.68.195 port 34990 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:45:08.988774 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:08.992981 systemd-logind[1717]: New session 3 of user core.
Jul 2 11:45:08.994121 systemd[1]: Started session-3.scope.
Jul 2 11:45:09.010813 curl[1805]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Jul 2 11:45:09.012740 systemd[1]: packet-phone-home.service: Deactivated successfully.
Jul 2 11:45:09.046853 systemd[1]: Started sshd@1-147.75.203.15:22-139.178.68.195:34996.service.
Jul 2 11:45:09.078867 sshd[1818]: Accepted publickey for core from 139.178.68.195 port 34996 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:45:09.079580 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:09.081975 systemd-logind[1717]: New session 4 of user core.
Jul 2 11:45:09.082487 systemd[1]: Started session-4.scope.
Jul 2 11:45:09.134725 sshd[1818]: pam_unix(sshd:session): session closed for user core
Jul 2 11:45:09.136186 systemd[1]: Started sshd@2-147.75.203.15:22-139.178.68.195:35012.service.
Jul 2 11:45:09.136548 systemd[1]: sshd@1-147.75.203.15:22-139.178.68.195:34996.service: Deactivated successfully.
Jul 2 11:45:09.137008 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit.
Jul 2 11:45:09.137071 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 11:45:09.137494 systemd-logind[1717]: Removed session 4.
Jul 2 11:45:09.167817 sshd[1824]: Accepted publickey for core from 139.178.68.195 port 35012 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:45:09.168914 sshd[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:09.172542 systemd-logind[1717]: New session 5 of user core.
Jul 2 11:45:09.173662 systemd[1]: Started session-5.scope.
Jul 2 11:45:09.231373 sshd[1824]: pam_unix(sshd:session): session closed for user core
Jul 2 11:45:09.238099 systemd[1]: Started sshd@3-147.75.203.15:22-139.178.68.195:35024.service.
Jul 2 11:45:09.240003 systemd[1]: sshd@2-147.75.203.15:22-139.178.68.195:35012.service: Deactivated successfully.
Jul 2 11:45:09.242652 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit.
Jul 2 11:45:09.242862 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 11:45:09.245699 systemd-logind[1717]: Removed session 5.
Jul 2 11:45:09.302095 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 35024 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:45:09.304430 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:09.312320 systemd-logind[1717]: New session 6 of user core.
Jul 2 11:45:09.314413 systemd[1]: Started session-6.scope.
Jul 2 11:45:09.379621 sshd[1830]: pam_unix(sshd:session): session closed for user core
Jul 2 11:45:09.381124 systemd[1]: Started sshd@4-147.75.203.15:22-139.178.68.195:35030.service.
Jul 2 11:45:09.381479 systemd[1]: sshd@3-147.75.203.15:22-139.178.68.195:35024.service: Deactivated successfully.
Jul 2 11:45:09.382006 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 11:45:09.382016 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit.
Jul 2 11:45:09.382470 systemd-logind[1717]: Removed session 6.
Jul 2 11:45:09.412884 sshd[1838]: Accepted publickey for core from 139.178.68.195 port 35030 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA
Jul 2 11:45:09.413975 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 11:45:09.417657 systemd-logind[1717]: New session 7 of user core.
Jul 2 11:45:09.418785 systemd[1]: Started session-7.scope.
Jul 2 11:45:09.522692 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 11:45:09.523315 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 11:45:09.575583 systemd[1]: Starting docker.service...
Jul 2 11:45:09.593337 env[1858]: time="2024-07-02T11:45:09.593276967Z" level=info msg="Starting up"
Jul 2 11:45:09.593891 env[1858]: time="2024-07-02T11:45:09.593879921Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 11:45:09.593891 env[1858]: time="2024-07-02T11:45:09.593889523Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 11:45:09.593931 env[1858]: time="2024-07-02T11:45:09.593901657Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 11:45:09.593931 env[1858]: time="2024-07-02T11:45:09.593907976Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 11:45:09.595119 env[1858]: time="2024-07-02T11:45:09.595108279Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 11:45:09.595119 env[1858]: time="2024-07-02T11:45:09.595116595Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 11:45:09.595170 env[1858]: time="2024-07-02T11:45:09.595124246Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 11:45:09.595170 env[1858]: time="2024-07-02T11:45:09.595132178Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 11:45:09.954921 env[1858]: time="2024-07-02T11:45:09.954864126Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 2 11:45:09.954921 env[1858]: time="2024-07-02T11:45:09.954874958Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 2 11:45:09.955065 env[1858]: time="2024-07-02T11:45:09.955002853Z" level=info msg="Loading containers: start."
Jul 2 11:45:10.059534 kernel: Initializing XFRM netlink socket Jul 2 11:45:10.106290 env[1858]: time="2024-07-02T11:45:10.106237854Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 11:45:10.190781 systemd-networkd[1408]: docker0: Link UP Jul 2 11:45:10.205605 env[1858]: time="2024-07-02T11:45:10.205449899Z" level=info msg="Loading containers: done." Jul 2 11:45:10.220675 env[1858]: time="2024-07-02T11:45:10.220620517Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 11:45:10.220980 env[1858]: time="2024-07-02T11:45:10.220941678Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 11:45:10.221177 env[1858]: time="2024-07-02T11:45:10.221137893Z" level=info msg="Daemon has completed initialization" Jul 2 11:45:10.242288 systemd[1]: Started docker.service. Jul 2 11:45:10.258832 env[1858]: time="2024-07-02T11:45:10.258699255Z" level=info msg="API listen on /run/docker.sock" Jul 2 11:45:11.444943 env[1676]: time="2024-07-02T11:45:11.444835195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 11:45:12.213753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707385492.mount: Deactivated successfully. Jul 2 11:45:14.059692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 11:45:14.059818 systemd[1]: Stopped kubelet.service. Jul 2 11:45:14.060690 systemd[1]: Starting kubelet.service... Jul 2 11:45:14.235978 systemd[1]: Started kubelet.service. 
Jul 2 11:45:14.239071 env[1676]: time="2024-07-02T11:45:14.239051280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:14.250527 env[1676]: time="2024-07-02T11:45:14.250418051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:14.255362 env[1676]: time="2024-07-02T11:45:14.255263332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:14.257526 env[1676]: time="2024-07-02T11:45:14.257459136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:14.257988 env[1676]: time="2024-07-02T11:45:14.257945507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 11:45:14.263887 env[1676]: time="2024-07-02T11:45:14.263865770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 11:45:14.309822 kubelet[2028]: E0702 11:45:14.309706 2028 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:45:14.313281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:45:14.313406 systemd[1]: 
kubelet.service: Failed with result 'exit-code'. Jul 2 11:45:16.520523 env[1676]: time="2024-07-02T11:45:16.520490162Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:16.521273 env[1676]: time="2024-07-02T11:45:16.521232449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:16.522574 env[1676]: time="2024-07-02T11:45:16.522554369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:16.524368 env[1676]: time="2024-07-02T11:45:16.524323781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:16.524769 env[1676]: time="2024-07-02T11:45:16.524718333Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 11:45:16.530355 env[1676]: time="2024-07-02T11:45:16.530341340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 11:45:17.925777 env[1676]: time="2024-07-02T11:45:17.925744645Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:17.927351 env[1676]: time="2024-07-02T11:45:17.927329340Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:17.928425 env[1676]: time="2024-07-02T11:45:17.928411483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:17.929610 env[1676]: time="2024-07-02T11:45:17.929548721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:17.930042 env[1676]: time="2024-07-02T11:45:17.930015343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 11:45:17.937730 env[1676]: time="2024-07-02T11:45:17.937710331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 11:45:19.248279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407547981.mount: Deactivated successfully. 
Jul 2 11:45:19.587220 env[1676]: time="2024-07-02T11:45:19.587174295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:19.587758 env[1676]: time="2024-07-02T11:45:19.587725231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:19.588391 env[1676]: time="2024-07-02T11:45:19.588352360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:19.589034 env[1676]: time="2024-07-02T11:45:19.588994276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:19.589341 env[1676]: time="2024-07-02T11:45:19.589297105Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 11:45:19.595352 env[1676]: time="2024-07-02T11:45:19.595320231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 11:45:20.107722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866020947.mount: Deactivated successfully. 
Jul 2 11:45:20.109384 env[1676]: time="2024-07-02T11:45:20.109339067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:20.110317 env[1676]: time="2024-07-02T11:45:20.110281376Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:20.111136 env[1676]: time="2024-07-02T11:45:20.111092551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:20.111902 env[1676]: time="2024-07-02T11:45:20.111850814Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:20.112622 env[1676]: time="2024-07-02T11:45:20.112566802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 11:45:20.117895 env[1676]: time="2024-07-02T11:45:20.117838158Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 11:45:20.656098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2832754038.mount: Deactivated successfully. 
Jul 2 11:45:23.295355 env[1676]: time="2024-07-02T11:45:23.295305824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:23.296106 env[1676]: time="2024-07-02T11:45:23.296059216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:23.297205 env[1676]: time="2024-07-02T11:45:23.297162368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:23.298272 env[1676]: time="2024-07-02T11:45:23.298199208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:23.298832 env[1676]: time="2024-07-02T11:45:23.298774117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 11:45:23.304508 env[1676]: time="2024-07-02T11:45:23.304447670Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 11:45:23.837581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187723104.mount: Deactivated successfully. 
Jul 2 11:45:24.268318 env[1676]: time="2024-07-02T11:45:24.268230659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:24.268919 env[1676]: time="2024-07-02T11:45:24.268878488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:24.269582 env[1676]: time="2024-07-02T11:45:24.269543557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:24.270245 env[1676]: time="2024-07-02T11:45:24.270206873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:24.270594 env[1676]: time="2024-07-02T11:45:24.270551492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 11:45:24.553230 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 11:45:24.553369 systemd[1]: Stopped kubelet.service. Jul 2 11:45:24.554314 systemd[1]: Starting kubelet.service... Jul 2 11:45:24.756339 systemd[1]: Started kubelet.service. 
Jul 2 11:45:24.778924 kubelet[2146]: E0702 11:45:24.778899 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 11:45:24.779869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 11:45:24.779977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 11:45:25.805063 systemd[1]: Stopped kubelet.service. Jul 2 11:45:25.806396 systemd[1]: Starting kubelet.service... Jul 2 11:45:25.818818 systemd[1]: Reloading. Jul 2 11:45:25.851524 /usr/lib/systemd/system-generators/torcx-generator[2250]: time="2024-07-02T11:45:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:45:25.851541 /usr/lib/systemd/system-generators/torcx-generator[2250]: time="2024-07-02T11:45:25Z" level=info msg="torcx already run" Jul 2 11:45:25.912722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:45:25.912732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:45:25.925230 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:45:25.980221 systemd[1]: Started kubelet.service. Jul 2 11:45:25.982162 systemd[1]: Stopping kubelet.service... 
Jul 2 11:45:25.982443 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 11:45:25.982575 systemd[1]: Stopped kubelet.service. Jul 2 11:45:25.983591 systemd[1]: Starting kubelet.service... Jul 2 11:45:26.134078 systemd[1]: Started kubelet.service. Jul 2 11:45:26.225209 kubelet[2335]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:45:26.225209 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 11:45:26.225209 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 11:45:26.226005 kubelet[2335]: I0702 11:45:26.225542 2335 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:45:26.476073 kubelet[2335]: I0702 11:45:26.476031 2335 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 11:45:26.476073 kubelet[2335]: I0702 11:45:26.476043 2335 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:45:26.476170 kubelet[2335]: I0702 11:45:26.476161 2335 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 11:45:26.487706 kubelet[2335]: E0702 11:45:26.487657 2335 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.203.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.488245 kubelet[2335]: I0702 11:45:26.488204 2335 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:45:26.530445 kubelet[2335]: I0702 11:45:26.530358 2335 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 11:45:26.537289 kubelet[2335]: I0702 11:45:26.537215 2335 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:45:26.537570 kubelet[2335]: I0702 11:45:26.537504 2335 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:45:26.538122 kubelet[2335]: I0702 11:45:26.538078 2335 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 11:45:26.538122 kubelet[2335]: I0702 11:45:26.538100 2335 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:45:26.539322 kubelet[2335]: I0702 
11:45:26.539276 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:45:26.541634 kubelet[2335]: I0702 11:45:26.541588 2335 kubelet.go:393] "Attempting to sync node with API server" Jul 2 11:45:26.541634 kubelet[2335]: I0702 11:45:26.541611 2335 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:45:26.541772 kubelet[2335]: I0702 11:45:26.541643 2335 kubelet.go:309] "Adding apiserver pod source" Jul 2 11:45:26.541772 kubelet[2335]: I0702 11:45:26.541662 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:45:26.560091 kubelet[2335]: W0702 11:45:26.559838 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.203.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.560091 kubelet[2335]: I0702 11:45:26.560065 2335 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:45:26.560242 kubelet[2335]: W0702 11:45:26.560092 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://147.75.203.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-3cadf325ae&limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.560242 kubelet[2335]: E0702 11:45:26.560132 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.203.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.560242 kubelet[2335]: E0702 11:45:26.560203 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://147.75.203.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-3cadf325ae&limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.568737 kubelet[2335]: W0702 11:45:26.568693 2335 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 11:45:26.569236 kubelet[2335]: I0702 11:45:26.569194 2335 server.go:1232] "Started kubelet" Jul 2 11:45:26.569332 kubelet[2335]: I0702 11:45:26.569265 2335 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 11:45:26.569332 kubelet[2335]: I0702 11:45:26.569277 2335 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:45:26.569539 kubelet[2335]: I0702 11:45:26.569522 2335 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:45:26.569629 kubelet[2335]: E0702 11:45:26.569612 2335 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 11:45:26.569703 kubelet[2335]: E0702 11:45:26.569637 2335 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:45:26.576285 kubelet[2335]: E0702 11:45:26.576240 2335 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-3cadf325ae.17de62c9ccfcb94e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-3cadf325ae", UID:"ci-3510.3.5-a-3cadf325ae", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-3cadf325ae"}, FirstTimestamp:time.Date(2024, time.July, 2, 11, 45, 26, 569171278, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 11, 45, 26, 569171278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-3cadf325ae"}': 'Post "https://147.75.203.15:6443/api/v1/namespaces/default/events": dial tcp 147.75.203.15:6443: connect: connection refused'(may retry after sleeping) Jul 2 11:45:26.579721 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 11:45:26.579790 kubelet[2335]: I0702 11:45:26.579781 2335 server.go:462] "Adding debug handlers to kubelet server" Jul 2 11:45:26.579790 kubelet[2335]: I0702 11:45:26.579782 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:45:26.579900 kubelet[2335]: I0702 11:45:26.579889 2335 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:45:26.579934 kubelet[2335]: I0702 11:45:26.579913 2335 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 11:45:26.579967 kubelet[2335]: I0702 11:45:26.579960 2335 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 11:45:26.580076 kubelet[2335]: E0702 11:45:26.580065 2335 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3cadf325ae?timeout=10s\": dial tcp 147.75.203.15:6443: connect: connection refused" interval="200ms" Jul 2 11:45:26.580113 kubelet[2335]: W0702 11:45:26.580061 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.203.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.580113 kubelet[2335]: E0702 11:45:26.580097 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.203.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.588491 kubelet[2335]: I0702 11:45:26.588476 2335 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 11:45:26.588996 kubelet[2335]: I0702 11:45:26.588988 2335 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 11:45:26.589023 kubelet[2335]: I0702 11:45:26.589001 2335 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:45:26.589023 kubelet[2335]: I0702 11:45:26.589014 2335 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 11:45:26.589057 kubelet[2335]: E0702 11:45:26.589050 2335 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:45:26.589302 kubelet[2335]: W0702 11:45:26.589289 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.203.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.589353 kubelet[2335]: E0702 11:45:26.589314 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.203.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:26.664780 kubelet[2335]: I0702 11:45:26.664728 2335 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:45:26.664780 kubelet[2335]: I0702 11:45:26.664754 2335 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:45:26.664780 kubelet[2335]: I0702 11:45:26.664777 2335 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:45:26.666221 kubelet[2335]: I0702 11:45:26.666199 2335 policy_none.go:49] "None policy: Start" Jul 2 11:45:26.666988 kubelet[2335]: I0702 11:45:26.666963 2335 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 11:45:26.667086 kubelet[2335]: I0702 11:45:26.667007 2335 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:45:26.675612 kubelet[2335]: I0702 11:45:26.675566 2335 manager.go:471] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 11:45:26.676057 kubelet[2335]: I0702 11:45:26.676028 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:45:26.676895 kubelet[2335]: E0702 11:45:26.676839 2335 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-3cadf325ae\" not found" Jul 2 11:45:26.683781 kubelet[2335]: I0702 11:45:26.683736 2335 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.684447 kubelet[2335]: E0702 11:45:26.684388 2335 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.203.15:6443/api/v1/nodes\": dial tcp 147.75.203.15:6443: connect: connection refused" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.689615 kubelet[2335]: I0702 11:45:26.689562 2335 topology_manager.go:215] "Topology Admit Handler" podUID="7ac5336ba321b8053f6d9b6ed292b3c2" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.693059 kubelet[2335]: I0702 11:45:26.692981 2335 topology_manager.go:215] "Topology Admit Handler" podUID="cdb812b018017269d00d2ddc9983b6dd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.696519 kubelet[2335]: I0702 11:45:26.696414 2335 topology_manager.go:215] "Topology Admit Handler" podUID="d3d95c75e2fb732bcbd43d1c184d1b00" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.781510 kubelet[2335]: E0702 11:45:26.781323 2335 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3cadf325ae?timeout=10s\": dial tcp 147.75.203.15:6443: connect: connection refused" interval="400ms" Jul 2 11:45:26.881080 kubelet[2335]: I0702 11:45:26.880995 2335 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881298 kubelet[2335]: I0702 11:45:26.881090 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881298 kubelet[2335]: I0702 11:45:26.881201 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881298 kubelet[2335]: I0702 11:45:26.881263 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881649 kubelet[2335]: I0702 11:45:26.881402 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881649 kubelet[2335]: I0702 11:45:26.881525 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881649 kubelet[2335]: I0702 11:45:26.881592 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3d95c75e2fb732bcbd43d1c184d1b00-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-3cadf325ae\" (UID: \"d3d95c75e2fb732bcbd43d1c184d1b00\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.881649 kubelet[2335]: I0702 11:45:26.881650 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.882015 kubelet[2335]: I0702 11:45:26.881727 2335 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.888141 kubelet[2335]: I0702 11:45:26.888069 2335 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.888670 kubelet[2335]: E0702 
11:45:26.888597 2335 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.203.15:6443/api/v1/nodes\": dial tcp 147.75.203.15:6443: connect: connection refused" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:26.905425 kubelet[2335]: E0702 11:45:26.905222 2335 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-3cadf325ae.17de62c9ccfcb94e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-3cadf325ae", UID:"ci-3510.3.5-a-3cadf325ae", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-3cadf325ae"}, FirstTimestamp:time.Date(2024, time.July, 2, 11, 45, 26, 569171278, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 11, 45, 26, 569171278, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-3cadf325ae"}': 'Post "https://147.75.203.15:6443/api/v1/namespaces/default/events": dial tcp 147.75.203.15:6443: connect: connection refused'(may retry after sleeping) Jul 2 11:45:27.005981 env[1676]: time="2024-07-02T11:45:27.005858808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-3cadf325ae,Uid:7ac5336ba321b8053f6d9b6ed292b3c2,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:27.011234 
env[1676]: time="2024-07-02T11:45:27.011100781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-3cadf325ae,Uid:cdb812b018017269d00d2ddc9983b6dd,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:27.015271 env[1676]: time="2024-07-02T11:45:27.015166204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-3cadf325ae,Uid:d3d95c75e2fb732bcbd43d1c184d1b00,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:27.182358 kubelet[2335]: E0702 11:45:27.182286 2335 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-3cadf325ae?timeout=10s\": dial tcp 147.75.203.15:6443: connect: connection refused" interval="800ms" Jul 2 11:45:27.292123 kubelet[2335]: I0702 11:45:27.292046 2335 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:27.292882 kubelet[2335]: E0702 11:45:27.292681 2335 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://147.75.203.15:6443/api/v1/nodes\": dial tcp 147.75.203.15:6443: connect: connection refused" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:27.490558 kubelet[2335]: W0702 11:45:27.490293 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://147.75.203.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.490558 kubelet[2335]: E0702 11:45:27.490423 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.203.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.554473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4114238954.mount: Deactivated successfully. 
Jul 2 11:45:27.555718 env[1676]: time="2024-07-02T11:45:27.555649682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.557555 env[1676]: time="2024-07-02T11:45:27.557541797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.558242 env[1676]: time="2024-07-02T11:45:27.558231181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.558613 env[1676]: time="2024-07-02T11:45:27.558604213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.558973 env[1676]: time="2024-07-02T11:45:27.558962866Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.560252 env[1676]: time="2024-07-02T11:45:27.560236945Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.560974 env[1676]: time="2024-07-02T11:45:27.560927815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.561479 env[1676]: time="2024-07-02T11:45:27.561468679Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.562356 env[1676]: time="2024-07-02T11:45:27.562342621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.563037 env[1676]: time="2024-07-02T11:45:27.563025933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.563396 env[1676]: time="2024-07-02T11:45:27.563382863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.566355 env[1676]: time="2024-07-02T11:45:27.566313559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:27.568174 kubelet[2335]: W0702 11:45:27.568144 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://147.75.203.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.568174 kubelet[2335]: E0702 11:45:27.568180 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.203.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.569366 env[1676]: time="2024-07-02T11:45:27.569332817Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:27.569407 env[1676]: time="2024-07-02T11:45:27.569375169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:27.569407 env[1676]: time="2024-07-02T11:45:27.569394384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:27.569561 env[1676]: time="2024-07-02T11:45:27.569523131Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cee7f939e75a58d72e8a1b2146c412cabd3d84b68042e8218968976e09109cf6 pid=2384 runtime=io.containerd.runc.v2 Jul 2 11:45:27.572487 env[1676]: time="2024-07-02T11:45:27.572446657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:27.572487 env[1676]: time="2024-07-02T11:45:27.572477796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:27.572487 env[1676]: time="2024-07-02T11:45:27.572484750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:27.572601 env[1676]: time="2024-07-02T11:45:27.572569457Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e50881db2c82aaa1935b22ff362d5fa415fe9c92cff27383169def131ad46a24 pid=2407 runtime=io.containerd.runc.v2 Jul 2 11:45:27.573790 env[1676]: time="2024-07-02T11:45:27.573749284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:27.573790 env[1676]: time="2024-07-02T11:45:27.573778169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:27.573900 env[1676]: time="2024-07-02T11:45:27.573788782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:27.573957 env[1676]: time="2024-07-02T11:45:27.573933431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6c8b5842208093bb1025e4077e11fd5120a504b60f3afc5700b474883554bb4 pid=2422 runtime=io.containerd.runc.v2 Jul 2 11:45:27.599248 env[1676]: time="2024-07-02T11:45:27.599225209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-3cadf325ae,Uid:7ac5336ba321b8053f6d9b6ed292b3c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee7f939e75a58d72e8a1b2146c412cabd3d84b68042e8218968976e09109cf6\"" Jul 2 11:45:27.600980 env[1676]: time="2024-07-02T11:45:27.600953838Z" level=info msg="CreateContainer within sandbox \"cee7f939e75a58d72e8a1b2146c412cabd3d84b68042e8218968976e09109cf6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 11:45:27.601902 env[1676]: time="2024-07-02T11:45:27.601886668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-3cadf325ae,Uid:d3d95c75e2fb732bcbd43d1c184d1b00,Namespace:kube-system,Attempt:0,} returns sandbox id \"e50881db2c82aaa1935b22ff362d5fa415fe9c92cff27383169def131ad46a24\"" Jul 2 11:45:27.602287 env[1676]: time="2024-07-02T11:45:27.602274450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-3cadf325ae,Uid:cdb812b018017269d00d2ddc9983b6dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6c8b5842208093bb1025e4077e11fd5120a504b60f3afc5700b474883554bb4\"" 
Jul 2 11:45:27.603013 env[1676]: time="2024-07-02T11:45:27.602995748Z" level=info msg="CreateContainer within sandbox \"e50881db2c82aaa1935b22ff362d5fa415fe9c92cff27383169def131ad46a24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 11:45:27.604061 env[1676]: time="2024-07-02T11:45:27.604046866Z" level=info msg="CreateContainer within sandbox \"f6c8b5842208093bb1025e4077e11fd5120a504b60f3afc5700b474883554bb4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 11:45:27.606276 env[1676]: time="2024-07-02T11:45:27.606232290Z" level=info msg="CreateContainer within sandbox \"cee7f939e75a58d72e8a1b2146c412cabd3d84b68042e8218968976e09109cf6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92112ea135310b9179d0468d3ca84fdb16dd63b8df02f3882523c28efc5c445b\"" Jul 2 11:45:27.606503 env[1676]: time="2024-07-02T11:45:27.606473190Z" level=info msg="StartContainer for \"92112ea135310b9179d0468d3ca84fdb16dd63b8df02f3882523c28efc5c445b\"" Jul 2 11:45:27.608442 env[1676]: time="2024-07-02T11:45:27.608424055Z" level=info msg="CreateContainer within sandbox \"e50881db2c82aaa1935b22ff362d5fa415fe9c92cff27383169def131ad46a24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59e69871a8e042e17f02c975439948853234a1acca9fef1904c519483dcba2ae\"" Jul 2 11:45:27.608622 env[1676]: time="2024-07-02T11:45:27.608608263Z" level=info msg="StartContainer for \"59e69871a8e042e17f02c975439948853234a1acca9fef1904c519483dcba2ae\"" Jul 2 11:45:27.610195 env[1676]: time="2024-07-02T11:45:27.610173204Z" level=info msg="CreateContainer within sandbox \"f6c8b5842208093bb1025e4077e11fd5120a504b60f3afc5700b474883554bb4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55603732ee2a78fedf427b911eddec5ec211e87ef706e3c4438f60bb6ec96939\"" Jul 2 11:45:27.610392 env[1676]: time="2024-07-02T11:45:27.610381120Z" level=info msg="StartContainer for 
\"55603732ee2a78fedf427b911eddec5ec211e87ef706e3c4438f60bb6ec96939\"" Jul 2 11:45:27.635571 kubelet[2335]: W0702 11:45:27.635530 2335 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://147.75.203.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.635667 kubelet[2335]: E0702 11:45:27.635580 2335 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.203.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.15:6443: connect: connection refused Jul 2 11:45:27.640224 env[1676]: time="2024-07-02T11:45:27.640196138Z" level=info msg="StartContainer for \"92112ea135310b9179d0468d3ca84fdb16dd63b8df02f3882523c28efc5c445b\" returns successfully" Jul 2 11:45:27.640745 env[1676]: time="2024-07-02T11:45:27.640729276Z" level=info msg="StartContainer for \"59e69871a8e042e17f02c975439948853234a1acca9fef1904c519483dcba2ae\" returns successfully" Jul 2 11:45:27.642203 env[1676]: time="2024-07-02T11:45:27.642184266Z" level=info msg="StartContainer for \"55603732ee2a78fedf427b911eddec5ec211e87ef706e3c4438f60bb6ec96939\" returns successfully" Jul 2 11:45:28.094437 kubelet[2335]: I0702 11:45:28.094395 2335 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:28.258891 kubelet[2335]: E0702 11:45:28.258875 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-3cadf325ae\" not found" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:28.261590 kubelet[2335]: I0702 11:45:28.261576 2335 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:28.543328 kubelet[2335]: I0702 11:45:28.543256 2335 apiserver.go:52] "Watching apiserver" Jul 2 11:45:28.580349 kubelet[2335]: 
I0702 11:45:28.580338 2335 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 11:45:28.596998 kubelet[2335]: E0702 11:45:28.596967 2335 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:28.596998 kubelet[2335]: E0702 11:45:28.596971 2335 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:28.597096 kubelet[2335]: E0702 11:45:28.597034 2335 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-3cadf325ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:29.603104 kubelet[2335]: W0702 11:45:29.603036 2335 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:29.604422 kubelet[2335]: W0702 11:45:29.604382 2335 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:31.339682 systemd[1]: Reloading. 
Jul 2 11:45:31.374313 /usr/lib/systemd/system-generators/torcx-generator[2666]: time="2024-07-02T11:45:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 11:45:31.374331 /usr/lib/systemd/system-generators/torcx-generator[2666]: time="2024-07-02T11:45:31Z" level=info msg="torcx already run" Jul 2 11:45:31.444045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 11:45:31.444058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 11:45:31.459336 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 11:45:31.519721 systemd[1]: Stopping kubelet.service... Jul 2 11:45:31.519835 kubelet[2335]: I0702 11:45:31.519747 2335 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:45:31.538706 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 11:45:31.538864 systemd[1]: Stopped kubelet.service. Jul 2 11:45:31.539833 systemd[1]: Starting kubelet.service... Jul 2 11:45:31.707124 systemd[1]: Started kubelet.service. Jul 2 11:45:31.730808 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 11:45:31.730808 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 11:45:31.730808 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 11:45:31.731050 kubelet[2742]: I0702 11:45:31.730835 2742 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 11:45:31.733430 kubelet[2742]: I0702 11:45:31.733422 2742 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 11:45:31.733462 kubelet[2742]: I0702 11:45:31.733432 2742 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 11:45:31.733564 kubelet[2742]: I0702 11:45:31.733557 2742 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 11:45:31.734359 kubelet[2742]: I0702 11:45:31.734351 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 11:45:31.734919 kubelet[2742]: I0702 11:45:31.734899 2742 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 11:45:31.751280 sudo[2767]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 11:45:31.751414 sudo[2767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 11:45:31.755995 kubelet[2742]: I0702 11:45:31.755967 2742 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 11:45:31.756719 kubelet[2742]: I0702 11:45:31.756680 2742 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 11:45:31.756855 kubelet[2742]: I0702 11:45:31.756817 2742 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 11:45:31.756855 kubelet[2742]: I0702 11:45:31.756833 2742 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 11:45:31.756855 kubelet[2742]: I0702 11:45:31.756841 2742 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 11:45:31.756977 kubelet[2742]: I0702 
11:45:31.756865 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:45:31.756977 kubelet[2742]: I0702 11:45:31.756918 2742 kubelet.go:393] "Attempting to sync node with API server" Jul 2 11:45:31.756977 kubelet[2742]: I0702 11:45:31.756931 2742 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 11:45:31.756977 kubelet[2742]: I0702 11:45:31.756955 2742 kubelet.go:309] "Adding apiserver pod source" Jul 2 11:45:31.756977 kubelet[2742]: I0702 11:45:31.756967 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 11:45:31.757385 kubelet[2742]: I0702 11:45:31.757372 2742 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 11:45:31.757726 kubelet[2742]: I0702 11:45:31.757713 2742 server.go:1232] "Started kubelet" Jul 2 11:45:31.757793 kubelet[2742]: I0702 11:45:31.757750 2742 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 11:45:31.757793 kubelet[2742]: I0702 11:45:31.757762 2742 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 11:45:31.757900 kubelet[2742]: I0702 11:45:31.757890 2742 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 11:45:31.758012 kubelet[2742]: E0702 11:45:31.758000 2742 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 11:45:31.758060 kubelet[2742]: E0702 11:45:31.758020 2742 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 11:45:31.758622 kubelet[2742]: I0702 11:45:31.758615 2742 server.go:462] "Adding debug handlers to kubelet server" Jul 2 11:45:31.758723 kubelet[2742]: I0702 11:45:31.758683 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 11:45:31.758768 kubelet[2742]: I0702 11:45:31.758760 2742 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 11:45:31.758801 kubelet[2742]: E0702 11:45:31.758774 2742 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-3cadf325ae\" not found" Jul 2 11:45:31.758801 kubelet[2742]: I0702 11:45:31.758786 2742 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 11:45:31.758902 kubelet[2742]: I0702 11:45:31.758890 2742 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 11:45:31.764140 kubelet[2742]: I0702 11:45:31.764121 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 11:45:31.764851 kubelet[2742]: I0702 11:45:31.764838 2742 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 11:45:31.764851 kubelet[2742]: I0702 11:45:31.764853 2742 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 11:45:31.764963 kubelet[2742]: I0702 11:45:31.764866 2742 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 11:45:31.764963 kubelet[2742]: E0702 11:45:31.764909 2742 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 11:45:31.793756 kubelet[2742]: I0702 11:45:31.793703 2742 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 11:45:31.793756 kubelet[2742]: I0702 11:45:31.793717 2742 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 11:45:31.793756 kubelet[2742]: I0702 11:45:31.793727 2742 state_mem.go:36] "Initialized new in-memory state store" Jul 2 11:45:31.793885 kubelet[2742]: I0702 11:45:31.793814 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 11:45:31.793885 kubelet[2742]: I0702 11:45:31.793828 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 11:45:31.793885 kubelet[2742]: I0702 11:45:31.793833 2742 policy_none.go:49] "None policy: Start" Jul 2 11:45:31.794121 kubelet[2742]: I0702 11:45:31.794114 2742 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 11:45:31.794149 kubelet[2742]: I0702 11:45:31.794125 2742 state_mem.go:35] "Initializing new in-memory state store" Jul 2 11:45:31.794217 kubelet[2742]: I0702 11:45:31.794212 2742 state_mem.go:75] "Updated machine memory state" Jul 2 11:45:31.794761 kubelet[2742]: I0702 11:45:31.794755 2742 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 11:45:31.794886 kubelet[2742]: I0702 11:45:31.794879 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 11:45:31.860428 kubelet[2742]: I0702 11:45:31.860381 2742 kubelet_node_status.go:70] "Attempting to register node" 
node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.864973 kubelet[2742]: I0702 11:45:31.864963 2742 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.864973 kubelet[2742]: I0702 11:45:31.864971 2742 topology_manager.go:215] "Topology Admit Handler" podUID="7ac5336ba321b8053f6d9b6ed292b3c2" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.865047 kubelet[2742]: I0702 11:45:31.865001 2742 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.865047 kubelet[2742]: I0702 11:45:31.865025 2742 topology_manager.go:215] "Topology Admit Handler" podUID="cdb812b018017269d00d2ddc9983b6dd" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.865047 kubelet[2742]: I0702 11:45:31.865046 2742 topology_manager.go:215] "Topology Admit Handler" podUID="d3d95c75e2fb732bcbd43d1c184d1b00" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.868146 kubelet[2742]: W0702 11:45:31.868101 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:31.868146 kubelet[2742]: W0702 11:45:31.868107 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:31.868146 kubelet[2742]: E0702 11:45:31.868137 2742 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.5-a-3cadf325ae\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.868764 kubelet[2742]: W0702 11:45:31.868729 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:31.868764 
kubelet[2742]: E0702 11:45:31.868754 2742 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960159 kubelet[2742]: I0702 11:45:31.960070 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960159 kubelet[2742]: I0702 11:45:31.960096 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960159 kubelet[2742]: I0702 11:45:31.960111 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3d95c75e2fb732bcbd43d1c184d1b00-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-3cadf325ae\" (UID: \"d3d95c75e2fb732bcbd43d1c184d1b00\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960159 kubelet[2742]: I0702 11:45:31.960123 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960159 kubelet[2742]: I0702 11:45:31.960135 2742 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ac5336ba321b8053f6d9b6ed292b3c2-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" (UID: \"7ac5336ba321b8053f6d9b6ed292b3c2\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960305 kubelet[2742]: I0702 11:45:31.960146 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960305 kubelet[2742]: I0702 11:45:31.960158 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960305 kubelet[2742]: I0702 11:45:31.960168 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: \"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:31.960305 kubelet[2742]: I0702 11:45:31.960184 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdb812b018017269d00d2ddc9983b6dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-3cadf325ae\" (UID: 
\"cdb812b018017269d00d2ddc9983b6dd\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:32.097169 sudo[2767]: pam_unix(sudo:session): session closed for user root Jul 2 11:45:32.758012 kubelet[2742]: I0702 11:45:32.757917 2742 apiserver.go:52] "Watching apiserver" Jul 2 11:45:32.777748 kubelet[2742]: W0702 11:45:32.777681 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 11:45:32.777907 kubelet[2742]: E0702 11:45:32.777774 2742 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-3cadf325ae\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" Jul 2 11:45:32.802446 kubelet[2742]: I0702 11:45:32.802418 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-3cadf325ae" podStartSLOduration=3.802336938 podCreationTimestamp="2024-07-02 11:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:45:32.796681323 +0000 UTC m=+1.087092118" watchObservedRunningTime="2024-07-02 11:45:32.802336938 +0000 UTC m=+1.092747737" Jul 2 11:45:32.802575 kubelet[2742]: I0702 11:45:32.802565 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-3cadf325ae" podStartSLOduration=3.802537165 podCreationTimestamp="2024-07-02 11:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:45:32.802218135 +0000 UTC m=+1.092628935" watchObservedRunningTime="2024-07-02 11:45:32.802537165 +0000 UTC m=+1.092947953" Jul 2 11:45:32.808967 kubelet[2742]: I0702 11:45:32.808903 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.5-a-3cadf325ae" podStartSLOduration=1.8088744860000001 podCreationTimestamp="2024-07-02 11:45:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:45:32.808857541 +0000 UTC m=+1.099268334" watchObservedRunningTime="2024-07-02 11:45:32.808874486 +0000 UTC m=+1.099285274" Jul 2 11:45:32.859706 kubelet[2742]: I0702 11:45:32.859663 2742 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 11:45:33.291936 sudo[1843]: pam_unix(sudo:session): session closed for user root Jul 2 11:45:33.292963 sshd[1838]: pam_unix(sshd:session): session closed for user core Jul 2 11:45:33.294899 systemd[1]: sshd@4-147.75.203.15:22-139.178.68.195:35030.service: Deactivated successfully. Jul 2 11:45:33.295800 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 11:45:33.295812 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit. Jul 2 11:45:33.296480 systemd-logind[1717]: Removed session 7. Jul 2 11:45:45.033921 kubelet[2742]: I0702 11:45:45.033862 2742 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 11:45:45.035083 env[1676]: time="2024-07-02T11:45:45.034759290Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 11:45:45.035896 kubelet[2742]: I0702 11:45:45.035231 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 11:45:45.891448 kubelet[2742]: I0702 11:45:45.891361 2742 topology_manager.go:215] "Topology Admit Handler" podUID="4a0d4f76-fb3b-4c16-9ea0-85c8574db191" podNamespace="kube-system" podName="kube-proxy-rmcwx" Jul 2 11:45:45.899234 kubelet[2742]: I0702 11:45:45.899152 2742 topology_manager.go:215] "Topology Admit Handler" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" podNamespace="kube-system" podName="cilium-x9q26" Jul 2 11:45:45.933159 kubelet[2742]: I0702 11:45:45.933128 2742 topology_manager.go:215] "Topology Admit Handler" podUID="ec3fb2ea-afa9-4011-b261-821e8587cee4" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-f7mz7" Jul 2 11:45:45.952962 kubelet[2742]: I0702 11:45:45.952940 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-cgroup\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.952962 kubelet[2742]: I0702 11:45:45.952967 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-lib-modules\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953123 kubelet[2742]: I0702 11:45:45.952984 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-config-path\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953123 kubelet[2742]: I0702 11:45:45.953000 2742 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a0d4f76-fb3b-4c16-9ea0-85c8574db191-kube-proxy\") pod \"kube-proxy-rmcwx\" (UID: \"4a0d4f76-fb3b-4c16-9ea0-85c8574db191\") " pod="kube-system/kube-proxy-rmcwx" Jul 2 11:45:45.953123 kubelet[2742]: I0702 11:45:45.953018 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hubble-tls\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953123 kubelet[2742]: I0702 11:45:45.953037 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtgh\" (UniqueName: \"kubernetes.io/projected/4a0d4f76-fb3b-4c16-9ea0-85c8574db191-kube-api-access-4xtgh\") pod \"kube-proxy-rmcwx\" (UID: \"4a0d4f76-fb3b-4c16-9ea0-85c8574db191\") " pod="kube-system/kube-proxy-rmcwx" Jul 2 11:45:45.953123 kubelet[2742]: I0702 11:45:45.953051 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-xtables-lock\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953270 kubelet[2742]: I0702 11:45:45.953066 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-net\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953270 kubelet[2742]: I0702 11:45:45.953081 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3fb2ea-afa9-4011-b261-821e8587cee4-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-f7mz7\" (UID: \"ec3fb2ea-afa9-4011-b261-821e8587cee4\") " pod="kube-system/cilium-operator-6bc8ccdb58-f7mz7" Jul 2 11:45:45.953270 kubelet[2742]: I0702 11:45:45.953122 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a0d4f76-fb3b-4c16-9ea0-85c8574db191-xtables-lock\") pod \"kube-proxy-rmcwx\" (UID: \"4a0d4f76-fb3b-4c16-9ea0-85c8574db191\") " pod="kube-system/kube-proxy-rmcwx" Jul 2 11:45:45.953270 kubelet[2742]: I0702 11:45:45.953160 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fjkc\" (UniqueName: \"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-kube-api-access-9fjkc\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953270 kubelet[2742]: I0702 11:45:45.953183 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-run\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953198 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-bpf-maps\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953214 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cni-path\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953228 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-kernel\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953243 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a0d4f76-fb3b-4c16-9ea0-85c8574db191-lib-modules\") pod \"kube-proxy-rmcwx\" (UID: \"4a0d4f76-fb3b-4c16-9ea0-85c8574db191\") " pod="kube-system/kube-proxy-rmcwx" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953257 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-clustermesh-secrets\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953438 kubelet[2742]: I0702 11:45:45.953274 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hostproc\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:45.953595 kubelet[2742]: I0702 11:45:45.953290 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtd6h\" (UniqueName: \"kubernetes.io/projected/ec3fb2ea-afa9-4011-b261-821e8587cee4-kube-api-access-wtd6h\") pod 
\"cilium-operator-6bc8ccdb58-f7mz7\" (UID: \"ec3fb2ea-afa9-4011-b261-821e8587cee4\") " pod="kube-system/cilium-operator-6bc8ccdb58-f7mz7" Jul 2 11:45:45.953595 kubelet[2742]: I0702 11:45:45.953335 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-etc-cni-netd\") pod \"cilium-x9q26\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") " pod="kube-system/cilium-x9q26" Jul 2 11:45:46.202620 env[1676]: time="2024-07-02T11:45:46.202392083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmcwx,Uid:4a0d4f76-fb3b-4c16-9ea0-85c8574db191,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:46.210714 env[1676]: time="2024-07-02T11:45:46.210610928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x9q26,Uid:c7d0ce8c-8c9a-43e8-ba9d-f515959674fb,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:46.237004 env[1676]: time="2024-07-02T11:45:46.236900247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-f7mz7,Uid:ec3fb2ea-afa9-4011-b261-821e8587cee4,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:46.650929 env[1676]: time="2024-07-02T11:45:46.650840762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:46.650929 env[1676]: time="2024-07-02T11:45:46.650893703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:46.650929 env[1676]: time="2024-07-02T11:45:46.650900399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:46.651239 env[1676]: time="2024-07-02T11:45:46.651192974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffe0e992c2a83823cdb9f6438f13d386ab20a2831519093e954c10afd7dd35b6 pid=2901 runtime=io.containerd.runc.v2 Jul 2 11:45:46.667782 env[1676]: time="2024-07-02T11:45:46.667750720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmcwx,Uid:4a0d4f76-fb3b-4c16-9ea0-85c8574db191,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffe0e992c2a83823cdb9f6438f13d386ab20a2831519093e954c10afd7dd35b6\"" Jul 2 11:45:46.669187 env[1676]: time="2024-07-02T11:45:46.669172146Z" level=info msg="CreateContainer within sandbox \"ffe0e992c2a83823cdb9f6438f13d386ab20a2831519093e954c10afd7dd35b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 11:45:46.706053 env[1676]: time="2024-07-02T11:45:46.705908734Z" level=info msg="CreateContainer within sandbox \"ffe0e992c2a83823cdb9f6438f13d386ab20a2831519093e954c10afd7dd35b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86f58f2830862513e3efeda2ca968f9edf16c1c52893ac46ad7f6576f04ef861\"" Jul 2 11:45:46.707337 env[1676]: time="2024-07-02T11:45:46.707189630Z" level=info msg="StartContainer for \"86f58f2830862513e3efeda2ca968f9edf16c1c52893ac46ad7f6576f04ef861\"" Jul 2 11:45:46.710302 env[1676]: time="2024-07-02T11:45:46.710165360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:46.710302 env[1676]: time="2024-07-02T11:45:46.710266990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:46.710812 env[1676]: time="2024-07-02T11:45:46.710308459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:46.710812 env[1676]: time="2024-07-02T11:45:46.710691081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05 pid=2943 runtime=io.containerd.runc.v2 Jul 2 11:45:46.713476 env[1676]: time="2024-07-02T11:45:46.713268818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:45:46.713781 env[1676]: time="2024-07-02T11:45:46.713438881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:45:46.713781 env[1676]: time="2024-07-02T11:45:46.713523393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:45:46.714219 env[1676]: time="2024-07-02T11:45:46.714093585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66 pid=2951 runtime=io.containerd.runc.v2 Jul 2 11:45:46.760233 env[1676]: time="2024-07-02T11:45:46.760189655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x9q26,Uid:c7d0ce8c-8c9a-43e8-ba9d-f515959674fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\"" Jul 2 11:45:46.761594 env[1676]: time="2024-07-02T11:45:46.761563480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 11:45:46.773060 env[1676]: time="2024-07-02T11:45:46.773029098Z" level=info msg="StartContainer for \"86f58f2830862513e3efeda2ca968f9edf16c1c52893ac46ad7f6576f04ef861\" returns successfully" Jul 2 11:45:46.782233 env[1676]: 
time="2024-07-02T11:45:46.782206420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-f7mz7,Uid:ec3fb2ea-afa9-4011-b261-821e8587cee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\"" Jul 2 11:45:46.805188 kubelet[2742]: I0702 11:45:46.805170 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rmcwx" podStartSLOduration=1.805147396 podCreationTimestamp="2024-07-02 11:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:45:46.805005479 +0000 UTC m=+15.095416264" watchObservedRunningTime="2024-07-02 11:45:46.805147396 +0000 UTC m=+15.095558181" Jul 2 11:45:47.835672 update_engine[1667]: I0702 11:45:47.835579 1667 update_attempter.cc:509] Updating boot flags... Jul 2 11:45:50.188181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322339713.mount: Deactivated successfully. 
Jul 2 11:45:51.987878 env[1676]: time="2024-07-02T11:45:51.987762608Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:51.990845 env[1676]: time="2024-07-02T11:45:51.990748926Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:51.995132 env[1676]: time="2024-07-02T11:45:51.995033956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:51.996985 env[1676]: time="2024-07-02T11:45:51.996868489Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 11:45:51.998035 env[1676]: time="2024-07-02T11:45:51.997954723Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 11:45:52.000837 env[1676]: time="2024-07-02T11:45:52.000720058Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:45:52.104866 env[1676]: time="2024-07-02T11:45:52.104775413Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\"" Jul 2 11:45:52.105718 
env[1676]: time="2024-07-02T11:45:52.105612940Z" level=info msg="StartContainer for \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\"" Jul 2 11:45:52.170348 env[1676]: time="2024-07-02T11:45:52.170280374Z" level=info msg="StartContainer for \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\" returns successfully" Jul 2 11:45:53.105908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc-rootfs.mount: Deactivated successfully. Jul 2 11:45:53.977271 env[1676]: time="2024-07-02T11:45:53.977234681Z" level=info msg="shim disconnected" id=c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc Jul 2 11:45:53.977271 env[1676]: time="2024-07-02T11:45:53.977269275Z" level=warning msg="cleaning up after shim disconnected" id=c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc namespace=k8s.io Jul 2 11:45:53.977640 env[1676]: time="2024-07-02T11:45:53.977277786Z" level=info msg="cleaning up dead shim" Jul 2 11:45:53.983038 env[1676]: time="2024-07-02T11:45:53.982985772Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:45:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3245 runtime=io.containerd.runc.v2\n" Jul 2 11:45:54.131780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3732382835.mount: Deactivated successfully. 
Jul 2 11:45:54.522018 env[1676]: time="2024-07-02T11:45:54.521965521Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:54.522502 env[1676]: time="2024-07-02T11:45:54.522468302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:54.523230 env[1676]: time="2024-07-02T11:45:54.523180118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 11:45:54.523769 env[1676]: time="2024-07-02T11:45:54.523726407Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 11:45:54.524853 env[1676]: time="2024-07-02T11:45:54.524840073Z" level=info msg="CreateContainer within sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 11:45:54.528830 env[1676]: time="2024-07-02T11:45:54.528790965Z" level=info msg="CreateContainer within sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\"" Jul 2 11:45:54.531673 env[1676]: time="2024-07-02T11:45:54.531582290Z" level=info msg="StartContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\"" Jul 2 11:45:54.551556 
env[1676]: time="2024-07-02T11:45:54.551501737Z" level=info msg="StartContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" returns successfully" Jul 2 11:45:54.820498 env[1676]: time="2024-07-02T11:45:54.820422824Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 11:45:54.826719 env[1676]: time="2024-07-02T11:45:54.826659000Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\"" Jul 2 11:45:54.827092 env[1676]: time="2024-07-02T11:45:54.827038456Z" level=info msg="StartContainer for \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\"" Jul 2 11:45:54.839809 kubelet[2742]: I0702 11:45:54.839784 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-f7mz7" podStartSLOduration=2.09868788 podCreationTimestamp="2024-07-02 11:45:45 +0000 UTC" firstStartedPulling="2024-07-02 11:45:46.782787138 +0000 UTC m=+15.073197923" lastFinishedPulling="2024-07-02 11:45:54.5238522 +0000 UTC m=+22.814262984" observedRunningTime="2024-07-02 11:45:54.839404044 +0000 UTC m=+23.129814831" watchObservedRunningTime="2024-07-02 11:45:54.839752941 +0000 UTC m=+23.130163724" Jul 2 11:45:54.860265 env[1676]: time="2024-07-02T11:45:54.860239257Z" level=info msg="StartContainer for \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\" returns successfully" Jul 2 11:45:54.866869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 11:45:54.867044 systemd[1]: Stopped systemd-sysctl.service. Jul 2 11:45:54.867192 systemd[1]: Stopping systemd-sysctl.service... 
Jul 2 11:45:54.868284 systemd[1]: Starting systemd-sysctl.service... Jul 2 11:45:54.872718 systemd[1]: Finished systemd-sysctl.service. Jul 2 11:45:55.026113 env[1676]: time="2024-07-02T11:45:55.026045074Z" level=info msg="shim disconnected" id=478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d Jul 2 11:45:55.026113 env[1676]: time="2024-07-02T11:45:55.026084422Z" level=warning msg="cleaning up after shim disconnected" id=478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d namespace=k8s.io Jul 2 11:45:55.026113 env[1676]: time="2024-07-02T11:45:55.026095929Z" level=info msg="cleaning up dead shim" Jul 2 11:45:55.031977 env[1676]: time="2024-07-02T11:45:55.031920882Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3359 runtime=io.containerd.runc.v2\n" Jul 2 11:45:55.830814 env[1676]: time="2024-07-02T11:45:55.830713957Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 11:45:55.839093 env[1676]: time="2024-07-02T11:45:55.839034051Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\"" Jul 2 11:45:55.839392 env[1676]: time="2024-07-02T11:45:55.839369826Z" level=info msg="StartContainer for \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\"" Jul 2 11:45:55.840695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049369603.mount: Deactivated successfully. 
Jul 2 11:45:55.863496 env[1676]: time="2024-07-02T11:45:55.863466052Z" level=info msg="StartContainer for \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\" returns successfully" Jul 2 11:45:55.876041 env[1676]: time="2024-07-02T11:45:55.875981216Z" level=info msg="shim disconnected" id=6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1 Jul 2 11:45:55.876041 env[1676]: time="2024-07-02T11:45:55.876013594Z" level=warning msg="cleaning up after shim disconnected" id=6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1 namespace=k8s.io Jul 2 11:45:55.876041 env[1676]: time="2024-07-02T11:45:55.876020895Z" level=info msg="cleaning up dead shim" Jul 2 11:45:55.880047 env[1676]: time="2024-07-02T11:45:55.880030084Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:45:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3415 runtime=io.containerd.runc.v2\n" Jul 2 11:45:56.126101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1-rootfs.mount: Deactivated successfully. 
Jul 2 11:45:56.838969 env[1676]: time="2024-07-02T11:45:56.838887020Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 11:45:56.855129 env[1676]: time="2024-07-02T11:45:56.855035121Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\"" Jul 2 11:45:56.855912 env[1676]: time="2024-07-02T11:45:56.855825378Z" level=info msg="StartContainer for \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\"" Jul 2 11:45:56.914162 env[1676]: time="2024-07-02T11:45:56.914068694Z" level=info msg="StartContainer for \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\" returns successfully" Jul 2 11:45:56.940679 env[1676]: time="2024-07-02T11:45:56.940574713Z" level=info msg="shim disconnected" id=0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452 Jul 2 11:45:56.940679 env[1676]: time="2024-07-02T11:45:56.940641959Z" level=warning msg="cleaning up after shim disconnected" id=0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452 namespace=k8s.io Jul 2 11:45:56.940679 env[1676]: time="2024-07-02T11:45:56.940658444Z" level=info msg="cleaning up dead shim" Jul 2 11:45:56.949216 env[1676]: time="2024-07-02T11:45:56.949169927Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:45:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3471 runtime=io.containerd.runc.v2\n" Jul 2 11:45:57.130771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452-rootfs.mount: Deactivated successfully. 
Jul 2 11:45:57.846747 env[1676]: time="2024-07-02T11:45:57.846655902Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 11:45:57.856907 env[1676]: time="2024-07-02T11:45:57.856887249Z" level=info msg="CreateContainer within sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\"" Jul 2 11:45:57.857294 env[1676]: time="2024-07-02T11:45:57.857280220Z" level=info msg="StartContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\"" Jul 2 11:45:57.859083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2449829401.mount: Deactivated successfully. Jul 2 11:45:57.880179 env[1676]: time="2024-07-02T11:45:57.880151715Z" level=info msg="StartContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" returns successfully" Jul 2 11:45:57.932531 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 11:45:57.997075 kubelet[2742]: I0702 11:45:57.997057 2742 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 11:45:58.009300 kubelet[2742]: I0702 11:45:58.009279 2742 topology_manager.go:215] "Topology Admit Handler" podUID="493f4487-8420-4700-9ab6-043eb61e868b" podNamespace="kube-system" podName="coredns-5dd5756b68-r4hfg" Jul 2 11:45:58.010142 kubelet[2742]: I0702 11:45:58.010128 2742 topology_manager.go:215] "Topology Admit Handler" podUID="5a66f260-5b53-4c33-b086-041ba79716f5" podNamespace="kube-system" podName="coredns-5dd5756b68-wqrcr" Jul 2 11:45:58.040361 kubelet[2742]: I0702 11:45:58.040318 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xltr\" (UniqueName: \"kubernetes.io/projected/493f4487-8420-4700-9ab6-043eb61e868b-kube-api-access-4xltr\") pod \"coredns-5dd5756b68-r4hfg\" (UID: \"493f4487-8420-4700-9ab6-043eb61e868b\") " pod="kube-system/coredns-5dd5756b68-r4hfg" Jul 2 11:45:58.040361 kubelet[2742]: I0702 11:45:58.040343 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/493f4487-8420-4700-9ab6-043eb61e868b-config-volume\") pod \"coredns-5dd5756b68-r4hfg\" (UID: \"493f4487-8420-4700-9ab6-043eb61e868b\") " pod="kube-system/coredns-5dd5756b68-r4hfg" Jul 2 11:45:58.040361 kubelet[2742]: I0702 11:45:58.040356 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a66f260-5b53-4c33-b086-041ba79716f5-config-volume\") pod \"coredns-5dd5756b68-wqrcr\" (UID: \"5a66f260-5b53-4c33-b086-041ba79716f5\") " pod="kube-system/coredns-5dd5756b68-wqrcr" Jul 2 11:45:58.040491 kubelet[2742]: I0702 11:45:58.040375 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zztd\" (UniqueName: 
\"kubernetes.io/projected/5a66f260-5b53-4c33-b086-041ba79716f5-kube-api-access-9zztd\") pod \"coredns-5dd5756b68-wqrcr\" (UID: \"5a66f260-5b53-4c33-b086-041ba79716f5\") " pod="kube-system/coredns-5dd5756b68-wqrcr" Jul 2 11:45:58.092531 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 11:45:58.312877 env[1676]: time="2024-07-02T11:45:58.312760504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r4hfg,Uid:493f4487-8420-4700-9ab6-043eb61e868b,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:58.312877 env[1676]: time="2024-07-02T11:45:58.312774434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wqrcr,Uid:5a66f260-5b53-4c33-b086-041ba79716f5,Namespace:kube-system,Attempt:0,}" Jul 2 11:45:58.856047 kubelet[2742]: I0702 11:45:58.855999 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x9q26" podStartSLOduration=8.619670174 podCreationTimestamp="2024-07-02 11:45:45 +0000 UTC" firstStartedPulling="2024-07-02 11:45:46.761182726 +0000 UTC m=+15.051593517" lastFinishedPulling="2024-07-02 11:45:51.997485454 +0000 UTC m=+20.287896289" observedRunningTime="2024-07-02 11:45:58.855604004 +0000 UTC m=+27.146014789" watchObservedRunningTime="2024-07-02 11:45:58.855972946 +0000 UTC m=+27.146383728" Jul 2 11:45:59.693342 systemd-networkd[1408]: cilium_host: Link UP Jul 2 11:45:59.693446 systemd-networkd[1408]: cilium_net: Link UP Jul 2 11:45:59.693451 systemd-networkd[1408]: cilium_net: Gained carrier Jul 2 11:45:59.693546 systemd-networkd[1408]: cilium_host: Gained carrier Jul 2 11:45:59.701465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 11:45:59.701455 systemd-networkd[1408]: cilium_host: Gained IPv6LL Jul 2 11:45:59.745000 systemd-networkd[1408]: cilium_vxlan: Link UP Jul 2 11:45:59.745003 systemd-networkd[1408]: cilium_vxlan: Gained carrier Jul 2 11:45:59.883463 
kernel: NET: Registered PF_ALG protocol family Jul 2 11:46:00.393593 systemd-networkd[1408]: lxc_health: Link UP Jul 2 11:46:00.422138 systemd-networkd[1408]: lxc_health: Gained carrier Jul 2 11:46:00.422462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 11:46:00.544565 systemd-networkd[1408]: cilium_net: Gained IPv6LL Jul 2 11:46:00.851372 systemd-networkd[1408]: lxc37fdfa304edd: Link UP Jul 2 11:46:00.887460 kernel: eth0: renamed from tmp397d0 Jul 2 11:46:00.906513 kernel: eth0: renamed from tmpe172f Jul 2 11:46:00.931817 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 11:46:00.931865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc37fdfa304edd: link becomes ready Jul 2 11:46:00.931912 systemd-networkd[1408]: tmpe172f: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 11:46:00.931984 systemd-networkd[1408]: tmpe172f: Cannot enable IPv6, ignoring: No such file or directory Jul 2 11:46:00.932006 systemd-networkd[1408]: tmpe172f: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory Jul 2 11:46:00.932017 systemd-networkd[1408]: tmpe172f: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory Jul 2 11:46:00.932024 systemd-networkd[1408]: tmpe172f: Cannot set IPv6 proxy NDP, ignoring: No such file or directory Jul 2 11:46:00.932033 systemd-networkd[1408]: tmpe172f: Cannot enable promote_secondaries for interface, ignoring: No such file or directory Jul 2 11:46:00.932455 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 11:46:00.946111 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3f85a7ad01de: link becomes ready Jul 2 11:46:00.946300 systemd-networkd[1408]: lxc3f85a7ad01de: Link UP Jul 2 11:46:00.946655 systemd-networkd[1408]: lxc37fdfa304edd: Gained carrier Jul 2 11:46:00.946782 systemd-networkd[1408]: lxc3f85a7ad01de: Gained carrier Jul 2 11:46:01.440613 systemd-networkd[1408]: cilium_vxlan: Gained IPv6LL Jul 2 
11:46:02.144552 systemd-networkd[1408]: lxc3f85a7ad01de: Gained IPv6LL Jul 2 11:46:02.336574 systemd-networkd[1408]: lxc37fdfa304edd: Gained IPv6LL Jul 2 11:46:02.465544 systemd-networkd[1408]: lxc_health: Gained IPv6LL Jul 2 11:46:03.260859 env[1676]: time="2024-07-02T11:46:03.260792969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:46:03.260859 env[1676]: time="2024-07-02T11:46:03.260813059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:46:03.260859 env[1676]: time="2024-07-02T11:46:03.260819898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:46:03.261151 env[1676]: time="2024-07-02T11:46:03.260915734Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/397d0bbb630321942740227f76d61a1ac247c8ad4db6df8b46584214af2a7435 pid=4155 runtime=io.containerd.runc.v2 Jul 2 11:46:03.261151 env[1676]: time="2024-07-02T11:46:03.260963052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:46:03.261151 env[1676]: time="2024-07-02T11:46:03.260979544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:46:03.261151 env[1676]: time="2024-07-02T11:46:03.260986514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:46:03.261151 env[1676]: time="2024-07-02T11:46:03.261057408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e172fcfd02bbcf46c22cb40d2b8970e7fee52e2710332f88c9ec763c038b3a78 pid=4156 runtime=io.containerd.runc.v2 Jul 2 11:46:03.289254 env[1676]: time="2024-07-02T11:46:03.289219960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wqrcr,Uid:5a66f260-5b53-4c33-b086-041ba79716f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e172fcfd02bbcf46c22cb40d2b8970e7fee52e2710332f88c9ec763c038b3a78\"" Jul 2 11:46:03.289352 env[1676]: time="2024-07-02T11:46:03.289274159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-r4hfg,Uid:493f4487-8420-4700-9ab6-043eb61e868b,Namespace:kube-system,Attempt:0,} returns sandbox id \"397d0bbb630321942740227f76d61a1ac247c8ad4db6df8b46584214af2a7435\"" Jul 2 11:46:03.290486 env[1676]: time="2024-07-02T11:46:03.290471949Z" level=info msg="CreateContainer within sandbox \"e172fcfd02bbcf46c22cb40d2b8970e7fee52e2710332f88c9ec763c038b3a78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 11:46:03.290579 env[1676]: time="2024-07-02T11:46:03.290534097Z" level=info msg="CreateContainer within sandbox \"397d0bbb630321942740227f76d61a1ac247c8ad4db6df8b46584214af2a7435\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 11:46:03.294967 env[1676]: time="2024-07-02T11:46:03.294949998Z" level=info msg="CreateContainer within sandbox \"397d0bbb630321942740227f76d61a1ac247c8ad4db6df8b46584214af2a7435\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"835abe0ba1ff322641c3ede0f2c44520aafa9310571a8dbe0f8d415e46cc79f4\"" Jul 2 11:46:03.295183 env[1676]: time="2024-07-02T11:46:03.295170925Z" level=info msg="StartContainer for \"835abe0ba1ff322641c3ede0f2c44520aafa9310571a8dbe0f8d415e46cc79f4\"" Jul 2 11:46:03.295910 
env[1676]: time="2024-07-02T11:46:03.295870455Z" level=info msg="CreateContainer within sandbox \"e172fcfd02bbcf46c22cb40d2b8970e7fee52e2710332f88c9ec763c038b3a78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"543afbe2f58d8bfc8cccb23b3b172b95e51c27adc795d0f07ce4d9fc95fac8f8\"" Jul 2 11:46:03.296101 env[1676]: time="2024-07-02T11:46:03.296048611Z" level=info msg="StartContainer for \"543afbe2f58d8bfc8cccb23b3b172b95e51c27adc795d0f07ce4d9fc95fac8f8\"" Jul 2 11:46:03.336861 env[1676]: time="2024-07-02T11:46:03.336832272Z" level=info msg="StartContainer for \"835abe0ba1ff322641c3ede0f2c44520aafa9310571a8dbe0f8d415e46cc79f4\" returns successfully" Jul 2 11:46:03.337526 env[1676]: time="2024-07-02T11:46:03.337507580Z" level=info msg="StartContainer for \"543afbe2f58d8bfc8cccb23b3b172b95e51c27adc795d0f07ce4d9fc95fac8f8\" returns successfully" Jul 2 11:46:03.873010 kubelet[2742]: I0702 11:46:03.872992 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-r4hfg" podStartSLOduration=18.872966843 podCreationTimestamp="2024-07-02 11:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:46:03.872820927 +0000 UTC m=+32.163231714" watchObservedRunningTime="2024-07-02 11:46:03.872966843 +0000 UTC m=+32.163377627" Jul 2 11:46:03.883158 kubelet[2742]: I0702 11:46:03.883137 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wqrcr" podStartSLOduration=18.883111216 podCreationTimestamp="2024-07-02 11:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:46:03.882802938 +0000 UTC m=+32.173213724" watchObservedRunningTime="2024-07-02 11:46:03.883111216 +0000 UTC m=+32.173521999" Jul 2 11:52:27.567202 systemd[1]: Started 
sshd@5-147.75.203.15:22-139.178.68.195:46146.service. Jul 2 11:52:27.599834 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 46146 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:27.600787 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:27.604455 systemd-logind[1717]: New session 8 of user core. Jul 2 11:52:27.605544 systemd[1]: Started session-8.scope. Jul 2 11:52:27.738574 sshd[4364]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:27.740073 systemd[1]: sshd@5-147.75.203.15:22-139.178.68.195:46146.service: Deactivated successfully. Jul 2 11:52:27.740725 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit. Jul 2 11:52:27.740755 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 11:52:27.741233 systemd-logind[1717]: Removed session 8. Jul 2 11:52:32.748063 systemd[1]: Started sshd@6-147.75.203.15:22-139.178.68.195:55196.service. Jul 2 11:52:32.816426 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 55196 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:32.819419 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:32.829481 systemd-logind[1717]: New session 9 of user core. Jul 2 11:52:32.832928 systemd[1]: Started session-9.scope. Jul 2 11:52:33.002154 sshd[4395]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:33.003717 systemd[1]: sshd@6-147.75.203.15:22-139.178.68.195:55196.service: Deactivated successfully. Jul 2 11:52:33.004427 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit. Jul 2 11:52:33.004444 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 11:52:33.005201 systemd-logind[1717]: Removed session 9. Jul 2 11:52:38.008825 systemd[1]: Started sshd@7-147.75.203.15:22-139.178.68.195:55212.service. 
Jul 2 11:52:38.040446 sshd[4424]: Accepted publickey for core from 139.178.68.195 port 55212 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:38.043815 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:38.054738 systemd-logind[1717]: New session 10 of user core. Jul 2 11:52:38.057210 systemd[1]: Started session-10.scope. Jul 2 11:52:38.161349 sshd[4424]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:38.162921 systemd[1]: sshd@7-147.75.203.15:22-139.178.68.195:55212.service: Deactivated successfully. Jul 2 11:52:38.163520 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 11:52:38.163539 systemd-logind[1717]: Session 10 logged out. Waiting for processes to exit. Jul 2 11:52:38.164105 systemd-logind[1717]: Removed session 10. Jul 2 11:52:43.168761 systemd[1]: Started sshd@8-147.75.203.15:22-139.178.68.195:53218.service. Jul 2 11:52:43.203979 sshd[4451]: Accepted publickey for core from 139.178.68.195 port 53218 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:43.207315 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:43.218267 systemd-logind[1717]: New session 11 of user core. Jul 2 11:52:43.220789 systemd[1]: Started session-11.scope. Jul 2 11:52:43.314668 sshd[4451]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:43.316288 systemd[1]: Started sshd@9-147.75.203.15:22-139.178.68.195:53224.service. Jul 2 11:52:43.316612 systemd[1]: sshd@8-147.75.203.15:22-139.178.68.195:53218.service: Deactivated successfully. Jul 2 11:52:43.317203 systemd-logind[1717]: Session 11 logged out. Waiting for processes to exit. Jul 2 11:52:43.317204 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 11:52:43.317650 systemd-logind[1717]: Removed session 11. 
Jul 2 11:52:43.346887 sshd[4477]: Accepted publickey for core from 139.178.68.195 port 53224 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:43.347809 sshd[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:43.351357 systemd-logind[1717]: New session 12 of user core. Jul 2 11:52:43.352052 systemd[1]: Started session-12.scope. Jul 2 11:52:43.777572 sshd[4477]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:43.779321 systemd[1]: Started sshd@10-147.75.203.15:22-139.178.68.195:53226.service. Jul 2 11:52:43.779747 systemd[1]: sshd@9-147.75.203.15:22-139.178.68.195:53224.service: Deactivated successfully. Jul 2 11:52:43.780385 systemd-logind[1717]: Session 12 logged out. Waiting for processes to exit. Jul 2 11:52:43.780424 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 11:52:43.780912 systemd-logind[1717]: Removed session 12. Jul 2 11:52:43.811345 sshd[4504]: Accepted publickey for core from 139.178.68.195 port 53226 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:43.812385 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:43.815712 systemd-logind[1717]: New session 13 of user core. Jul 2 11:52:43.816441 systemd[1]: Started session-13.scope. Jul 2 11:52:43.962130 sshd[4504]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:43.964071 systemd[1]: sshd@10-147.75.203.15:22-139.178.68.195:53226.service: Deactivated successfully. Jul 2 11:52:43.964984 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 11:52:43.964994 systemd-logind[1717]: Session 13 logged out. Waiting for processes to exit. Jul 2 11:52:43.965839 systemd-logind[1717]: Removed session 13. 
Jul 2 11:52:47.824749 update_engine[1667]: I0702 11:52:47.824640 1667 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 11:52:47.824749 update_engine[1667]: I0702 11:52:47.824712 1667 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 11:52:47.830278 update_engine[1667]: I0702 11:52:47.830199 1667 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 11:52:47.831241 update_engine[1667]: I0702 11:52:47.831162 1667 omaha_request_params.cc:62] Current group set to lts Jul 2 11:52:47.831542 update_engine[1667]: I0702 11:52:47.831470 1667 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 11:52:47.831542 update_engine[1667]: I0702 11:52:47.831491 1667 update_attempter.cc:643] Scheduling an action processor start. Jul 2 11:52:47.831542 update_engine[1667]: I0702 11:52:47.831524 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 11:52:47.831947 update_engine[1667]: I0702 11:52:47.831591 1667 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 11:52:47.831947 update_engine[1667]: I0702 11:52:47.831732 1667 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 2 11:52:47.831947 update_engine[1667]: I0702 11:52:47.831748 1667 omaha_request_action.cc:271] Request: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: Jul 2 11:52:47.831947 update_engine[1667]: I0702 11:52:47.831759 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:52:47.833006 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 
NewSize=0 Jul 2 11:52:47.834902 update_engine[1667]: I0702 11:52:47.834859 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:52:47.835112 update_engine[1667]: E0702 11:52:47.835084 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:52:47.835278 update_engine[1667]: I0702 11:52:47.835249 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 11:52:48.969289 systemd[1]: Started sshd@11-147.75.203.15:22-139.178.68.195:53234.service. Jul 2 11:52:49.000779 sshd[4534]: Accepted publickey for core from 139.178.68.195 port 53234 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:49.001772 sshd[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:49.005180 systemd-logind[1717]: New session 14 of user core. Jul 2 11:52:49.005940 systemd[1]: Started session-14.scope. Jul 2 11:52:49.092913 sshd[4534]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:49.094255 systemd[1]: sshd@11-147.75.203.15:22-139.178.68.195:53234.service: Deactivated successfully. Jul 2 11:52:49.094907 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 11:52:49.094921 systemd-logind[1717]: Session 14 logged out. Waiting for processes to exit. Jul 2 11:52:49.095413 systemd-logind[1717]: Removed session 14. Jul 2 11:52:54.099655 systemd[1]: Started sshd@12-147.75.203.15:22-139.178.68.195:44564.service. Jul 2 11:52:54.130623 sshd[4559]: Accepted publickey for core from 139.178.68.195 port 44564 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:54.131503 sshd[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:54.134331 systemd-logind[1717]: New session 15 of user core. Jul 2 11:52:54.135024 systemd[1]: Started session-15.scope. 
Jul 2 11:52:54.223815 sshd[4559]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:54.225679 systemd[1]: Started sshd@13-147.75.203.15:22-139.178.68.195:44570.service. Jul 2 11:52:54.226104 systemd[1]: sshd@12-147.75.203.15:22-139.178.68.195:44564.service: Deactivated successfully. Jul 2 11:52:54.226771 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 11:52:54.226789 systemd-logind[1717]: Session 15 logged out. Waiting for processes to exit. Jul 2 11:52:54.227351 systemd-logind[1717]: Removed session 15. Jul 2 11:52:54.256628 sshd[4584]: Accepted publickey for core from 139.178.68.195 port 44570 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:54.257494 sshd[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:54.260665 systemd-logind[1717]: New session 16 of user core. Jul 2 11:52:54.261364 systemd[1]: Started session-16.scope. Jul 2 11:52:54.544632 sshd[4584]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:54.551177 systemd[1]: Started sshd@14-147.75.203.15:22-139.178.68.195:44572.service. Jul 2 11:52:54.552909 systemd[1]: sshd@13-147.75.203.15:22-139.178.68.195:44570.service: Deactivated successfully. Jul 2 11:52:54.555671 systemd-logind[1717]: Session 16 logged out. Waiting for processes to exit. Jul 2 11:52:54.555810 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 11:52:54.558522 systemd-logind[1717]: Removed session 16. Jul 2 11:52:54.608796 sshd[4608]: Accepted publickey for core from 139.178.68.195 port 44572 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:54.609487 sshd[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:54.611941 systemd-logind[1717]: New session 17 of user core. Jul 2 11:52:54.612331 systemd[1]: Started session-17.scope. 
Jul 2 11:52:55.447429 sshd[4608]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:55.457526 systemd[1]: Started sshd@15-147.75.203.15:22-139.178.68.195:44586.service. Jul 2 11:52:55.459598 systemd[1]: sshd@14-147.75.203.15:22-139.178.68.195:44572.service: Deactivated successfully. Jul 2 11:52:55.463153 systemd-logind[1717]: Session 17 logged out. Waiting for processes to exit. Jul 2 11:52:55.463232 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 11:52:55.466626 systemd-logind[1717]: Removed session 17. Jul 2 11:52:55.511144 sshd[4642]: Accepted publickey for core from 139.178.68.195 port 44586 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:55.512252 sshd[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:55.515737 systemd-logind[1717]: New session 18 of user core. Jul 2 11:52:55.516714 systemd[1]: Started session-18.scope. Jul 2 11:52:55.741168 sshd[4642]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:55.743307 systemd[1]: Started sshd@16-147.75.203.15:22-139.178.68.195:44588.service. Jul 2 11:52:55.743714 systemd[1]: sshd@15-147.75.203.15:22-139.178.68.195:44586.service: Deactivated successfully. Jul 2 11:52:55.744387 systemd-logind[1717]: Session 18 logged out. Waiting for processes to exit. Jul 2 11:52:55.744417 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 11:52:55.745105 systemd-logind[1717]: Removed session 18. Jul 2 11:52:55.776864 sshd[4666]: Accepted publickey for core from 139.178.68.195 port 44588 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:52:55.780245 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:52:55.791261 systemd-logind[1717]: New session 19 of user core. Jul 2 11:52:55.793927 systemd[1]: Started session-19.scope. 
Jul 2 11:52:55.946303 sshd[4666]: pam_unix(sshd:session): session closed for user core Jul 2 11:52:55.947884 systemd[1]: sshd@16-147.75.203.15:22-139.178.68.195:44588.service: Deactivated successfully. Jul 2 11:52:55.948525 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 11:52:55.948528 systemd-logind[1717]: Session 19 logged out. Waiting for processes to exit. Jul 2 11:52:55.949175 systemd-logind[1717]: Removed session 19. Jul 2 11:52:57.822619 update_engine[1667]: I0702 11:52:57.822515 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:52:57.823568 update_engine[1667]: I0702 11:52:57.822944 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:52:57.823568 update_engine[1667]: E0702 11:52:57.823143 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:52:57.823568 update_engine[1667]: I0702 11:52:57.823316 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 11:53:00.952803 systemd[1]: Started sshd@17-147.75.203.15:22-139.178.68.195:44604.service. Jul 2 11:53:00.984241 sshd[4697]: Accepted publickey for core from 139.178.68.195 port 44604 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:00.985337 sshd[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:00.989145 systemd-logind[1717]: New session 20 of user core. Jul 2 11:53:00.990021 systemd[1]: Started session-20.scope. Jul 2 11:53:01.077664 sshd[4697]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:01.079117 systemd[1]: sshd@17-147.75.203.15:22-139.178.68.195:44604.service: Deactivated successfully. Jul 2 11:53:01.079849 systemd-logind[1717]: Session 20 logged out. Waiting for processes to exit. Jul 2 11:53:01.079866 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 11:53:01.080380 systemd-logind[1717]: Removed session 20. 
Jul 2 11:53:06.084269 systemd[1]: Started sshd@18-147.75.203.15:22-139.178.68.195:57552.service. Jul 2 11:53:06.116200 sshd[4723]: Accepted publickey for core from 139.178.68.195 port 57552 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:06.119553 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:06.130766 systemd-logind[1717]: New session 21 of user core. Jul 2 11:53:06.133323 systemd[1]: Started session-21.scope. Jul 2 11:53:06.219436 sshd[4723]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:06.220836 systemd[1]: sshd@18-147.75.203.15:22-139.178.68.195:57552.service: Deactivated successfully. Jul 2 11:53:06.221419 systemd-logind[1717]: Session 21 logged out. Waiting for processes to exit. Jul 2 11:53:06.221435 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 11:53:06.222133 systemd-logind[1717]: Removed session 21. Jul 2 11:53:07.822924 update_engine[1667]: I0702 11:53:07.822808 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:53:07.823778 update_engine[1667]: I0702 11:53:07.823293 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:53:07.823778 update_engine[1667]: E0702 11:53:07.823526 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:53:07.823778 update_engine[1667]: I0702 11:53:07.823706 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 11:53:11.226178 systemd[1]: Started sshd@19-147.75.203.15:22-139.178.68.195:57564.service. Jul 2 11:53:11.258239 sshd[4749]: Accepted publickey for core from 139.178.68.195 port 57564 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:11.261657 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:11.272816 systemd-logind[1717]: New session 22 of user core. Jul 2 11:53:11.275354 systemd[1]: Started session-22.scope. 
Jul 2 11:53:11.377622 sshd[4749]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:11.379206 systemd[1]: sshd@19-147.75.203.15:22-139.178.68.195:57564.service: Deactivated successfully. Jul 2 11:53:11.379948 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 11:53:11.379985 systemd-logind[1717]: Session 22 logged out. Waiting for processes to exit. Jul 2 11:53:11.380441 systemd-logind[1717]: Removed session 22. Jul 2 11:53:16.384051 systemd[1]: Started sshd@20-147.75.203.15:22-139.178.68.195:42644.service. Jul 2 11:53:16.416215 sshd[4776]: Accepted publickey for core from 139.178.68.195 port 42644 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:16.419576 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:16.430616 systemd-logind[1717]: New session 23 of user core. Jul 2 11:53:16.433183 systemd[1]: Started session-23.scope. Jul 2 11:53:16.522275 sshd[4776]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:16.524311 systemd[1]: Started sshd@21-147.75.203.15:22-139.178.68.195:42660.service. Jul 2 11:53:16.524668 systemd[1]: sshd@20-147.75.203.15:22-139.178.68.195:42644.service: Deactivated successfully. Jul 2 11:53:16.525273 systemd-logind[1717]: Session 23 logged out. Waiting for processes to exit. Jul 2 11:53:16.525289 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 11:53:16.525792 systemd-logind[1717]: Removed session 23. Jul 2 11:53:16.555633 sshd[4798]: Accepted publickey for core from 139.178.68.195 port 42660 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:16.556624 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:16.560163 systemd-logind[1717]: New session 24 of user core. Jul 2 11:53:16.561019 systemd[1]: Started session-24.scope. 
Jul 2 11:53:17.822775 update_engine[1667]: I0702 11:53:17.822659 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 11:53:17.823675 update_engine[1667]: I0702 11:53:17.823153 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 11:53:17.823675 update_engine[1667]: E0702 11:53:17.823357 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 11:53:17.823675 update_engine[1667]: I0702 11:53:17.823557 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 11:53:17.823675 update_engine[1667]: I0702 11:53:17.823575 1667 omaha_request_action.cc:621] Omaha request response: Jul 2 11:53:17.824050 update_engine[1667]: E0702 11:53:17.823719 1667 omaha_request_action.cc:640] Omaha request network transfer failed. Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823747 1667 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823757 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823767 1667 update_attempter.cc:306] Processing Done. Jul 2 11:53:17.824050 update_engine[1667]: E0702 11:53:17.823792 1667 update_attempter.cc:619] Update failed. Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823801 1667 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823819 1667 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823830 1667 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.823983 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.824036 1667 omaha_request_action.cc:270] Posting an Omaha request to disabled
Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.824046 1667 omaha_request_action.cc:271] Request:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]:
Jul 2 11:53:17.824050 update_engine[1667]: I0702 11:53:17.824057 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824479 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 2 11:53:17.825870 update_engine[1667]: E0702 11:53:17.824728 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824887 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824903 1667 omaha_request_action.cc:621] Omaha request response:
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824914 1667 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824923 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824930 1667 update_attempter.cc:306] Processing Done.
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824940 1667 update_attempter.cc:310] Error event sent.
Jul 2 11:53:17.825870 update_engine[1667]: I0702 11:53:17.824960 1667 update_check_scheduler.cc:74] Next update check in 48m34s
Jul 2 11:53:17.826648 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 2 11:53:17.826648 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 2 11:53:17.940693 env[1676]: time="2024-07-02T11:53:17.940589801Z" level=info msg="StopContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" with timeout 30 (s)"
Jul 2 11:53:17.941677 env[1676]: time="2024-07-02T11:53:17.941327512Z" level=info msg="Stop container \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" with signal terminated"
Jul 2 11:53:17.967385 env[1676]: time="2024-07-02T11:53:17.967298224Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 11:53:17.971509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb-rootfs.mount: Deactivated successfully.
Jul 2 11:53:17.972976 env[1676]: time="2024-07-02T11:53:17.972952609Z" level=info msg="StopContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" with timeout 2 (s)"
Jul 2 11:53:17.973233 env[1676]: time="2024-07-02T11:53:17.973208667Z" level=info msg="Stop container \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" with signal terminated"
Jul 2 11:53:17.978313 systemd-networkd[1408]: lxc_health: Link DOWN
Jul 2 11:53:17.978318 systemd-networkd[1408]: lxc_health: Lost carrier
Jul 2 11:53:17.983007 env[1676]: time="2024-07-02T11:53:17.982970825Z" level=info msg="shim disconnected" id=6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb
Jul 2 11:53:17.983132 env[1676]: time="2024-07-02T11:53:17.983007864Z" level=warning msg="cleaning up after shim disconnected" id=6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb namespace=k8s.io
Jul 2 11:53:17.983132 env[1676]: time="2024-07-02T11:53:17.983024462Z" level=info msg="cleaning up dead shim"
Jul 2 11:53:17.989235 env[1676]: time="2024-07-02T11:53:17.989178642Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4866 runtime=io.containerd.runc.v2\n"
Jul 2 11:53:17.990246 env[1676]: time="2024-07-02T11:53:17.990194294Z" level=info msg="StopContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" returns successfully"
Jul 2 11:53:17.990811 env[1676]: time="2024-07-02T11:53:17.990757787Z" level=info msg="StopPodSandbox for \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\""
Jul 2 11:53:17.990887 env[1676]: time="2024-07-02T11:53:17.990822618Z" level=info msg="Container to stop \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:17.993331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66-shm.mount: Deactivated successfully.
Jul 2 11:53:18.011816 env[1676]: time="2024-07-02T11:53:18.011767148Z" level=info msg="shim disconnected" id=6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66
Jul 2 11:53:18.011816 env[1676]: time="2024-07-02T11:53:18.011813097Z" level=warning msg="cleaning up after shim disconnected" id=6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66 namespace=k8s.io
Jul 2 11:53:18.012021 env[1676]: time="2024-07-02T11:53:18.011828351Z" level=info msg="cleaning up dead shim"
Jul 2 11:53:18.012061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66-rootfs.mount: Deactivated successfully.
Jul 2 11:53:18.018218 env[1676]: time="2024-07-02T11:53:18.018158537Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4901 runtime=io.containerd.runc.v2\n"
Jul 2 11:53:18.018458 env[1676]: time="2024-07-02T11:53:18.018431040Z" level=info msg="TearDown network for sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" successfully"
Jul 2 11:53:18.018512 env[1676]: time="2024-07-02T11:53:18.018461232Z" level=info msg="StopPodSandbox for \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" returns successfully"
Jul 2 11:53:18.041355 env[1676]: time="2024-07-02T11:53:18.041284984Z" level=info msg="shim disconnected" id=27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634
Jul 2 11:53:18.041355 env[1676]: time="2024-07-02T11:53:18.041332137Z" level=warning msg="cleaning up after shim disconnected" id=27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634 namespace=k8s.io
Jul 2 11:53:18.041355 env[1676]: time="2024-07-02T11:53:18.041344255Z" level=info msg="cleaning up dead shim"
Jul 2 11:53:18.047437 env[1676]: time="2024-07-02T11:53:18.047388906Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4927 runtime=io.containerd.runc.v2\n"
Jul 2 11:53:18.048398 env[1676]: time="2024-07-02T11:53:18.048371250Z" level=info msg="StopContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" returns successfully"
Jul 2 11:53:18.048772 env[1676]: time="2024-07-02T11:53:18.048749949Z" level=info msg="StopPodSandbox for \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\""
Jul 2 11:53:18.048825 env[1676]: time="2024-07-02T11:53:18.048803547Z" level=info msg="Container to stop \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:18.048866 env[1676]: time="2024-07-02T11:53:18.048820149Z" level=info msg="Container to stop \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:18.048866 env[1676]: time="2024-07-02T11:53:18.048833386Z" level=info msg="Container to stop \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:18.048866 env[1676]: time="2024-07-02T11:53:18.048844423Z" level=info msg="Container to stop \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:18.048866 env[1676]: time="2024-07-02T11:53:18.048854758Z" level=info msg="Container to stop \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 11:53:18.065848 env[1676]: time="2024-07-02T11:53:18.065795335Z" level=info msg="shim disconnected" id=46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05
Jul 2 11:53:18.066015 env[1676]: time="2024-07-02T11:53:18.065852737Z" level=warning msg="cleaning up after shim disconnected" id=46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05 namespace=k8s.io
Jul 2 11:53:18.066015 env[1676]: time="2024-07-02T11:53:18.065872041Z" level=info msg="cleaning up dead shim"
Jul 2 11:53:18.072138 env[1676]: time="2024-07-02T11:53:18.072078474Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4960 runtime=io.containerd.runc.v2\n"
Jul 2 11:53:18.072403 env[1676]: time="2024-07-02T11:53:18.072351803Z" level=info msg="TearDown network for sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" successfully"
Jul 2 11:53:18.072403 env[1676]: time="2024-07-02T11:53:18.072374359Z" level=info msg="StopPodSandbox for \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" returns successfully"
Jul 2 11:53:18.088966 kubelet[2742]: I0702 11:53:18.088894 2742 scope.go:117] "RemoveContainer" containerID="27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634"
Jul 2 11:53:18.089888 env[1676]: time="2024-07-02T11:53:18.089852367Z" level=info msg="RemoveContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\""
Jul 2 11:53:18.092266 env[1676]: time="2024-07-02T11:53:18.092236207Z" level=info msg="RemoveContainer for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" returns successfully"
Jul 2 11:53:18.092507 kubelet[2742]: I0702 11:53:18.092468 2742 scope.go:117] "RemoveContainer" containerID="0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452"
Jul 2 11:53:18.093287 env[1676]: time="2024-07-02T11:53:18.093262822Z" level=info msg="RemoveContainer for \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\""
Jul 2 11:53:18.095032 env[1676]: time="2024-07-02T11:53:18.094983424Z" level=info msg="RemoveContainer for \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\" returns successfully"
Jul 2 11:53:18.095130 kubelet[2742]: I0702 11:53:18.095085 2742 scope.go:117] "RemoveContainer" containerID="6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1"
Jul 2 11:53:18.095895 env[1676]: time="2024-07-02T11:53:18.095850315Z" level=info msg="RemoveContainer for \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\""
Jul 2 11:53:18.098361 env[1676]: time="2024-07-02T11:53:18.098301342Z" level=info msg="RemoveContainer for \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\" returns successfully"
Jul 2 11:53:18.098505 kubelet[2742]: I0702 11:53:18.098485 2742 scope.go:117] "RemoveContainer" containerID="478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d"
Jul 2 11:53:18.099426 env[1676]: time="2024-07-02T11:53:18.099395736Z" level=info msg="RemoveContainer for \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\""
Jul 2 11:53:18.101144 env[1676]: time="2024-07-02T11:53:18.101122812Z" level=info msg="RemoveContainer for \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\" returns successfully"
Jul 2 11:53:18.101272 kubelet[2742]: I0702 11:53:18.101260 2742 scope.go:117] "RemoveContainer" containerID="c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc"
Jul 2 11:53:18.102128 env[1676]: time="2024-07-02T11:53:18.102071275Z" level=info msg="RemoveContainer for \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\""
Jul 2 11:53:18.103946 env[1676]: time="2024-07-02T11:53:18.103894173Z" level=info msg="RemoveContainer for \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\" returns successfully"
Jul 2 11:53:18.104021 kubelet[2742]: I0702 11:53:18.104001 2742 scope.go:117] "RemoveContainer" containerID="27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634"
Jul 2 11:53:18.104238 env[1676]: time="2024-07-02T11:53:18.104147737Z" level=error msg="ContainerStatus for \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\": not found"
Jul 2 11:53:18.104323 kubelet[2742]: E0702 11:53:18.104310 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\": not found" containerID="27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634"
Jul 2 11:53:18.104406 kubelet[2742]: I0702 11:53:18.104396 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634"} err="failed to get container status \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\": rpc error: code = NotFound desc = an error occurred when try to find container \"27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634\": not found"
Jul 2 11:53:18.104459 kubelet[2742]: I0702 11:53:18.104411 2742 scope.go:117] "RemoveContainer" containerID="0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452"
Jul 2 11:53:18.104711 env[1676]: time="2024-07-02T11:53:18.104618743Z" level=error msg="ContainerStatus for \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\": not found"
Jul 2 11:53:18.104783 kubelet[2742]: E0702 11:53:18.104760 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\": not found" containerID="0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452"
Jul 2 11:53:18.104835 kubelet[2742]: I0702 11:53:18.104790 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452"} err="failed to get container status \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\": rpc error: code = NotFound desc = an error occurred when try to find container \"0726cf02cb5665efdafe5194cc28657e7cce59be1f85ea3b0cc60480356e9452\": not found"
Jul 2 11:53:18.104835 kubelet[2742]: I0702 11:53:18.104806 2742 scope.go:117] "RemoveContainer" containerID="6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1"
Jul 2 11:53:18.105003 env[1676]: time="2024-07-02T11:53:18.104925227Z" level=error msg="ContainerStatus for \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\": not found"
Jul 2 11:53:18.105074 kubelet[2742]: E0702 11:53:18.105058 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\": not found" containerID="6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1"
Jul 2 11:53:18.105121 kubelet[2742]: I0702 11:53:18.105084 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1"} err="failed to get container status \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\": rpc error: code = NotFound desc = an error occurred when try to find container \"6eb6099c6f553cf0b3a9d21f9457545e571262b68b63fc021c099cdbf639fce1\": not found"
Jul 2 11:53:18.105121 kubelet[2742]: I0702 11:53:18.105095 2742 scope.go:117] "RemoveContainer" containerID="478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d"
Jul 2 11:53:18.105291 env[1676]: time="2024-07-02T11:53:18.105242502Z" level=error msg="ContainerStatus for \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\": not found"
Jul 2 11:53:18.105378 kubelet[2742]: E0702 11:53:18.105368 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\": not found" containerID="478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d"
Jul 2 11:53:18.105420 kubelet[2742]: I0702 11:53:18.105391 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d"} err="failed to get container status \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\": rpc error: code = NotFound desc = an error occurred when try to find container \"478d3c0061b83765dbf5e8925644c40526dde8508cd74c729267b3b82a90c56d\": not found"
Jul 2 11:53:18.105420 kubelet[2742]: I0702 11:53:18.105406 2742 scope.go:117] "RemoveContainer" containerID="c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc"
Jul 2 11:53:18.105601 env[1676]: time="2024-07-02T11:53:18.105526601Z" level=error msg="ContainerStatus for \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\": not found"
Jul 2 11:53:18.105673 kubelet[2742]: E0702 11:53:18.105650 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\": not found" containerID="c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc"
Jul 2 11:53:18.105721 kubelet[2742]: I0702 11:53:18.105678 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc"} err="failed to get container status \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c839604be81b4af067f8b75298ce8cea837ca3361a9d02cf47731ee0492350cc\": not found"
Jul 2 11:53:18.105721 kubelet[2742]: I0702 11:53:18.105688 2742 scope.go:117] "RemoveContainer" containerID="6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb"
Jul 2 11:53:18.106540 env[1676]: time="2024-07-02T11:53:18.106515765Z" level=info msg="RemoveContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\""
Jul 2 11:53:18.108352 env[1676]: time="2024-07-02T11:53:18.108330962Z" level=info msg="RemoveContainer for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" returns successfully"
Jul 2 11:53:18.108440 kubelet[2742]: I0702 11:53:18.108430 2742 scope.go:117] "RemoveContainer" containerID="6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb"
Jul 2 11:53:18.108673 env[1676]: time="2024-07-02T11:53:18.108601566Z" level=error msg="ContainerStatus for \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\": not found"
Jul 2 11:53:18.108748 kubelet[2742]: E0702 11:53:18.108732 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\": not found" containerID="6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb"
Jul 2 11:53:18.108793 kubelet[2742]: I0702 11:53:18.108759 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb"} err="failed to get container status \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c19a2bffc507e6625715aa9494826b10d9cc4cbfc34d086e2bb5ce5bf4ec2eb\": not found"
Jul 2 11:53:18.153599 kubelet[2742]: I0702 11:53:18.153511 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-cgroup\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.153828 kubelet[2742]: I0702 11:53:18.153614 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hubble-tls\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.153828 kubelet[2742]: I0702 11:53:18.153641 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.153828 kubelet[2742]: I0702 11:53:18.153686 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fjkc\" (UniqueName: \"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-kube-api-access-9fjkc\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.153835 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-bpf-maps\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.153940 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtd6h\" (UniqueName: \"kubernetes.io/projected/ec3fb2ea-afa9-4011-b261-821e8587cee4-kube-api-access-wtd6h\") pod \"ec3fb2ea-afa9-4011-b261-821e8587cee4\" (UID: \"ec3fb2ea-afa9-4011-b261-821e8587cee4\") "
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.153945 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.154025 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-xtables-lock\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.154099 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-lib-modules\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.154343 kubelet[2742]: I0702 11:53:18.154112 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154174 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-config-path\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154178 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154261 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3fb2ea-afa9-4011-b261-821e8587cee4-cilium-config-path\") pod \"ec3fb2ea-afa9-4011-b261-821e8587cee4\" (UID: \"ec3fb2ea-afa9-4011-b261-821e8587cee4\") "
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154334 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cni-path\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154408 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-run\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.155279 kubelet[2742]: I0702 11:53:18.154498 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-etc-cni-netd\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.155979 kubelet[2742]: I0702 11:53:18.154487 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cni-path" (OuterVolumeSpecName: "cni-path") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.155979 kubelet[2742]: I0702 11:53:18.154582 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-net\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.155979 kubelet[2742]: I0702 11:53:18.154584 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.155979 kubelet[2742]: I0702 11:53:18.154605 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.155979 kubelet[2742]: I0702 11:53:18.154690 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-clustermesh-secrets\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154765 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hostproc\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154743 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154826 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-kernel\") pod \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\" (UID: \"c7d0ce8c-8c9a-43e8-ba9d-f515959674fb\") "
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154875 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hostproc" (OuterVolumeSpecName: "hostproc") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154940 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-run\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.156514 kubelet[2742]: I0702 11:53:18.154993 2742 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-etc-cni-netd\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155035 2742 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-net\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155069 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-cgroup\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155009 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155101 2742 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-bpf-maps\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155140 2742 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-xtables-lock\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155171 2742 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-lib-modules\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.157175 kubelet[2742]: I0702 11:53:18.155201 2742 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cni-path\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\""
Jul 2 11:53:18.160813 kubelet[2742]: I0702 11:53:18.160733 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 11:53:18.161077 kubelet[2742]: I0702 11:53:18.160839 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-kube-api-access-9fjkc" (OuterVolumeSpecName: "kube-api-access-9fjkc") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "kube-api-access-9fjkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:53:18.161077 kubelet[2742]: I0702 11:53:18.160913 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3fb2ea-afa9-4011-b261-821e8587cee4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec3fb2ea-afa9-4011-b261-821e8587cee4" (UID: "ec3fb2ea-afa9-4011-b261-821e8587cee4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 11:53:18.161428 kubelet[2742]: I0702 11:53:18.161118 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3fb2ea-afa9-4011-b261-821e8587cee4-kube-api-access-wtd6h" (OuterVolumeSpecName: "kube-api-access-wtd6h") pod "ec3fb2ea-afa9-4011-b261-821e8587cee4" (UID: "ec3fb2ea-afa9-4011-b261-821e8587cee4"). InnerVolumeSpecName "kube-api-access-wtd6h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:53:18.161428 kubelet[2742]: I0702 11:53:18.161219 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 11:53:18.161428 kubelet[2742]: I0702 11:53:18.161391 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" (UID: "c7d0ce8c-8c9a-43e8-ba9d-f515959674fb"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:53:18.255418 kubelet[2742]: I0702 11:53:18.255368 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-cilium-config-path\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255430 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec3fb2ea-afa9-4011-b261-821e8587cee4-cilium-config-path\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255484 2742 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-clustermesh-secrets\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255520 2742 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hostproc\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255558 2742 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255592 2742 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-hubble-tls\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.255635 kubelet[2742]: I0702 11:53:18.255625 2742 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9fjkc\" (UniqueName: 
\"kubernetes.io/projected/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb-kube-api-access-9fjkc\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.256271 kubelet[2742]: I0702 11:53:18.255660 2742 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wtd6h\" (UniqueName: \"kubernetes.io/projected/ec3fb2ea-afa9-4011-b261-821e8587cee4-kube-api-access-wtd6h\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:18.957636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27fd2b6b9b957bcc6a83bfa986b67d0bd1ebc3f2d097852aa4cb62449be8e634-rootfs.mount: Deactivated successfully. Jul 2 11:53:18.957714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05-rootfs.mount: Deactivated successfully. Jul 2 11:53:18.957763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05-shm.mount: Deactivated successfully. Jul 2 11:53:18.957811 systemd[1]: var-lib-kubelet-pods-c7d0ce8c\x2d8c9a\x2d43e8\x2dba9d\x2df515959674fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9fjkc.mount: Deactivated successfully. Jul 2 11:53:18.957860 systemd[1]: var-lib-kubelet-pods-ec3fb2ea\x2dafa9\x2d4011\x2db261\x2d821e8587cee4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwtd6h.mount: Deactivated successfully. Jul 2 11:53:18.957913 systemd[1]: var-lib-kubelet-pods-c7d0ce8c\x2d8c9a\x2d43e8\x2dba9d\x2df515959674fb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 11:53:18.957962 systemd[1]: var-lib-kubelet-pods-c7d0ce8c\x2d8c9a\x2d43e8\x2dba9d\x2df515959674fb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 11:53:19.770621 kubelet[2742]: I0702 11:53:19.770527 2742 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" path="/var/lib/kubelet/pods/c7d0ce8c-8c9a-43e8-ba9d-f515959674fb/volumes" Jul 2 11:53:19.772317 kubelet[2742]: I0702 11:53:19.772245 2742 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ec3fb2ea-afa9-4011-b261-821e8587cee4" path="/var/lib/kubelet/pods/ec3fb2ea-afa9-4011-b261-821e8587cee4/volumes" Jul 2 11:53:19.882990 sshd[4798]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:19.888780 systemd[1]: Started sshd@22-147.75.203.15:22-139.178.68.195:42676.service. Jul 2 11:53:19.889078 systemd[1]: sshd@21-147.75.203.15:22-139.178.68.195:42660.service: Deactivated successfully. Jul 2 11:53:19.889708 systemd-logind[1717]: Session 24 logged out. Waiting for processes to exit. Jul 2 11:53:19.889751 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 11:53:19.890265 systemd-logind[1717]: Removed session 24. Jul 2 11:53:19.919512 sshd[4978]: Accepted publickey for core from 139.178.68.195 port 42676 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:19.920506 sshd[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:19.923713 systemd-logind[1717]: New session 25 of user core. Jul 2 11:53:19.924490 systemd[1]: Started session-25.scope. Jul 2 11:53:20.220032 sshd[4978]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:20.221912 systemd[1]: Started sshd@23-147.75.203.15:22-139.178.68.195:42678.service. Jul 2 11:53:20.222291 systemd[1]: sshd@22-147.75.203.15:22-139.178.68.195:42676.service: Deactivated successfully. Jul 2 11:53:20.223186 systemd-logind[1717]: Session 25 logged out. Waiting for processes to exit. Jul 2 11:53:20.223223 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 11:53:20.223895 systemd-logind[1717]: Removed session 25. 
Jul 2 11:53:20.227324 kubelet[2742]: I0702 11:53:20.227306 2742 topology_manager.go:215] "Topology Admit Handler" podUID="452a3fcc-a3c4-4ca0-9027-db1e67339a02" podNamespace="kube-system" podName="cilium-ccnnp" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227341 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="apply-sysctl-overwrites" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227348 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="mount-bpf-fs" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227352 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="clean-cilium-state" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227357 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="cilium-agent" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227361 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec3fb2ea-afa9-4011-b261-821e8587cee4" containerName="cilium-operator" Jul 2 11:53:20.227403 kubelet[2742]: E0702 11:53:20.227366 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="mount-cgroup" Jul 2 11:53:20.227403 kubelet[2742]: I0702 11:53:20.227379 2742 memory_manager.go:346] "RemoveStaleState removing state" podUID="ec3fb2ea-afa9-4011-b261-821e8587cee4" containerName="cilium-operator" Jul 2 11:53:20.227403 kubelet[2742]: I0702 11:53:20.227385 2742 memory_manager.go:346] "RemoveStaleState removing state" podUID="c7d0ce8c-8c9a-43e8-ba9d-f515959674fb" containerName="cilium-agent" Jul 2 11:53:20.255164 sshd[5002]: Accepted publickey for core from 139.178.68.195 port 42678 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:20.258788 
sshd[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:20.268428 kubelet[2742]: I0702 11:53:20.268364 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-xtables-lock\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.268721 kubelet[2742]: I0702 11:53:20.268503 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-bpf-maps\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.268721 kubelet[2742]: I0702 11:53:20.268661 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-cgroup\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.268955 kubelet[2742]: I0702 11:53:20.268830 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-kernel\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.268955 kubelet[2742]: I0702 11:53:20.268932 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hubble-tls\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269170 kubelet[2742]: I0702 11:53:20.269086 
2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cni-path\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269273 kubelet[2742]: I0702 11:53:20.269184 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgwmm\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-kube-api-access-qgwmm\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269386 kubelet[2742]: I0702 11:53:20.269321 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-run\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269515 kubelet[2742]: I0702 11:53:20.269481 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-clustermesh-secrets\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269627 kubelet[2742]: I0702 11:53:20.269587 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-lib-modules\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269734 kubelet[2742]: I0702 11:53:20.269682 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-net\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.269873 kubelet[2742]: I0702 11:53:20.269827 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-etc-cni-netd\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.270040 kubelet[2742]: I0702 11:53:20.269993 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hostproc\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.270193 kubelet[2742]: I0702 11:53:20.270078 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-config-path\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.270193 kubelet[2742]: I0702 11:53:20.270144 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-ipsec-secrets\") pod \"cilium-ccnnp\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " pod="kube-system/cilium-ccnnp" Jul 2 11:53:20.270479 systemd-logind[1717]: New session 26 of user core. Jul 2 11:53:20.272959 systemd[1]: Started session-26.scope. Jul 2 11:53:20.388470 sshd[5002]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:20.390005 systemd[1]: Started sshd@24-147.75.203.15:22-139.178.68.195:42682.service. 
Jul 2 11:53:20.390360 systemd[1]: sshd@23-147.75.203.15:22-139.178.68.195:42678.service: Deactivated successfully. Jul 2 11:53:20.390937 systemd-logind[1717]: Session 26 logged out. Waiting for processes to exit. Jul 2 11:53:20.390970 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 11:53:20.391576 systemd-logind[1717]: Removed session 26. Jul 2 11:53:20.394263 env[1676]: time="2024-07-02T11:53:20.394242159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccnnp,Uid:452a3fcc-a3c4-4ca0-9027-db1e67339a02,Namespace:kube-system,Attempt:0,}" Jul 2 11:53:20.399710 env[1676]: time="2024-07-02T11:53:20.399671877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:53:20.399710 env[1676]: time="2024-07-02T11:53:20.399699552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:53:20.399809 env[1676]: time="2024-07-02T11:53:20.399710012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:53:20.399829 env[1676]: time="2024-07-02T11:53:20.399803482Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a pid=5042 runtime=io.containerd.runc.v2 Jul 2 11:53:20.415963 env[1676]: time="2024-07-02T11:53:20.415938951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccnnp,Uid:452a3fcc-a3c4-4ca0-9027-db1e67339a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\"" Jul 2 11:53:20.417162 env[1676]: time="2024-07-02T11:53:20.417122182Z" level=info msg="CreateContainer within sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:53:20.420606 sshd[5032]: Accepted publickey for core from 139.178.68.195 port 42682 ssh2: RSA SHA256:Sj9QnLcvpWLOP1yrdw8OYb14dJ/sKC+z0D/PNZW8QiA Jul 2 11:53:20.421197 env[1676]: time="2024-07-02T11:53:20.421157274Z" level=info msg="CreateContainer within sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\"" Jul 2 11:53:20.421418 env[1676]: time="2024-07-02T11:53:20.421379933Z" level=info msg="StartContainer for \"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\"" Jul 2 11:53:20.421428 sshd[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 11:53:20.423523 systemd-logind[1717]: New session 27 of user core. Jul 2 11:53:20.424079 systemd[1]: Started session-27.scope. 
Jul 2 11:53:20.441558 env[1676]: time="2024-07-02T11:53:20.441508383Z" level=info msg="StartContainer for \"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\" returns successfully" Jul 2 11:53:20.474776 env[1676]: time="2024-07-02T11:53:20.474711539Z" level=info msg="shim disconnected" id=9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c Jul 2 11:53:20.474776 env[1676]: time="2024-07-02T11:53:20.474742686Z" level=warning msg="cleaning up after shim disconnected" id=9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c namespace=k8s.io Jul 2 11:53:20.474776 env[1676]: time="2024-07-02T11:53:20.474749772Z" level=info msg="cleaning up dead shim" Jul 2 11:53:20.478646 env[1676]: time="2024-07-02T11:53:20.478621750Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5134 runtime=io.containerd.runc.v2\n" Jul 2 11:53:21.094832 env[1676]: time="2024-07-02T11:53:21.094812133Z" level=info msg="StopPodSandbox for \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\"" Jul 2 11:53:21.094918 env[1676]: time="2024-07-02T11:53:21.094846606Z" level=info msg="Container to stop \"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 11:53:21.106627 env[1676]: time="2024-07-02T11:53:21.106584540Z" level=info msg="shim disconnected" id=012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a Jul 2 11:53:21.106774 env[1676]: time="2024-07-02T11:53:21.106629281Z" level=warning msg="cleaning up after shim disconnected" id=012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a namespace=k8s.io Jul 2 11:53:21.106774 env[1676]: time="2024-07-02T11:53:21.106640461Z" level=info msg="cleaning up dead shim" Jul 2 11:53:21.110667 env[1676]: time="2024-07-02T11:53:21.110647774Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:21Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=5176 runtime=io.containerd.runc.v2\n" Jul 2 11:53:21.110870 env[1676]: time="2024-07-02T11:53:21.110834874Z" level=info msg="TearDown network for sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" successfully" Jul 2 11:53:21.110870 env[1676]: time="2024-07-02T11:53:21.110850553Z" level=info msg="StopPodSandbox for \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" returns successfully" Jul 2 11:53:21.175369 kubelet[2742]: I0702 11:53:21.175311 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-xtables-lock\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.176602 kubelet[2742]: I0702 11:53:21.175394 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-bpf-maps\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.176602 kubelet[2742]: I0702 11:53:21.175444 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-run\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.176602 kubelet[2742]: I0702 11:53:21.175485 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.176602 kubelet[2742]: I0702 11:53:21.175524 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-ipsec-secrets\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.176602 kubelet[2742]: I0702 11:53:21.175551 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.177445 kubelet[2742]: I0702 11:53:21.175585 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.177445 kubelet[2742]: I0702 11:53:21.175690 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-lib-modules\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.177445 kubelet[2742]: I0702 11:53:21.175734 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.177445 kubelet[2742]: I0702 11:53:21.175799 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-etc-cni-netd\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.177445 kubelet[2742]: I0702 11:53:21.175879 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.175900 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cni-path\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.175954 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cni-path" (OuterVolumeSpecName: "cni-path") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.176007 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-clustermesh-secrets\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.176070 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hostproc\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.176132 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-cgroup\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178049 kubelet[2742]: I0702 11:53:21.176208 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-kernel\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178721 kubelet[2742]: I0702 11:53:21.176226 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hostproc" (OuterVolumeSpecName: "hostproc") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.178721 kubelet[2742]: I0702 11:53:21.176271 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hubble-tls\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178721 kubelet[2742]: I0702 11:53:21.176272 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.178721 kubelet[2742]: I0702 11:53:21.176336 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgwmm\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-kube-api-access-qgwmm\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.178721 kubelet[2742]: I0702 11:53:21.176392 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-net\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176362 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176490 2742 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-config-path\") pod \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\" (UID: \"452a3fcc-a3c4-4ca0-9027-db1e67339a02\") " Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176519 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176588 2742 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176626 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-cgroup\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179254 kubelet[2742]: I0702 11:53:21.176662 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-run\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176693 2742 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-xtables-lock\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" 
Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176723 2742 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-bpf-maps\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176754 2742 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-lib-modules\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176784 2742 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-etc-cni-netd\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176815 2742 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cni-path\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.179887 kubelet[2742]: I0702 11:53:21.176845 2742 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hostproc\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.181504 kubelet[2742]: I0702 11:53:21.181492 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 11:53:21.181557 kubelet[2742]: I0702 11:53:21.181542 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:53:21.181631 kubelet[2742]: I0702 11:53:21.181620 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 11:53:21.181631 kubelet[2742]: I0702 11:53:21.181624 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:53:21.181683 kubelet[2742]: I0702 11:53:21.181636 2742 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-kube-api-access-qgwmm" (OuterVolumeSpecName: "kube-api-access-qgwmm") pod "452a3fcc-a3c4-4ca0-9027-db1e67339a02" (UID: "452a3fcc-a3c4-4ca0-9027-db1e67339a02"). InnerVolumeSpecName "kube-api-access-qgwmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 11:53:21.277543 kubelet[2742]: I0702 11:53:21.277424 2742 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-hubble-tls\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.277543 kubelet[2742]: I0702 11:53:21.277506 2742 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qgwmm\" (UniqueName: \"kubernetes.io/projected/452a3fcc-a3c4-4ca0-9027-db1e67339a02-kube-api-access-qgwmm\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.277543 kubelet[2742]: I0702 11:53:21.277544 2742 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/452a3fcc-a3c4-4ca0-9027-db1e67339a02-host-proc-sys-net\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.278001 kubelet[2742]: I0702 11:53:21.277579 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-config-path\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.278001 kubelet[2742]: I0702 11:53:21.277614 2742 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.278001 kubelet[2742]: I0702 11:53:21.277649 2742 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/452a3fcc-a3c4-4ca0-9027-db1e67339a02-clustermesh-secrets\") on node \"ci-3510.3.5-a-3cadf325ae\" DevicePath \"\"" Jul 2 11:53:21.378755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a-rootfs.mount: Deactivated successfully. 
Jul 2 11:53:21.379123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a-shm.mount: Deactivated successfully. Jul 2 11:53:21.379403 systemd[1]: var-lib-kubelet-pods-452a3fcc\x2da3c4\x2d4ca0\x2d9027\x2ddb1e67339a02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqgwmm.mount: Deactivated successfully. Jul 2 11:53:21.379700 systemd[1]: var-lib-kubelet-pods-452a3fcc\x2da3c4\x2d4ca0\x2d9027\x2ddb1e67339a02-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 11:53:21.379973 systemd[1]: var-lib-kubelet-pods-452a3fcc\x2da3c4\x2d4ca0\x2d9027\x2ddb1e67339a02-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 11:53:21.380248 systemd[1]: var-lib-kubelet-pods-452a3fcc\x2da3c4\x2d4ca0\x2d9027\x2ddb1e67339a02-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 11:53:21.916053 kubelet[2742]: E0702 11:53:21.916004 2742 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 11:53:21.918667 kubelet[2742]: I0702 11:53:21.918612 2742 setters.go:552] "Node became not ready" node="ci-3510.3.5-a-3cadf325ae" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T11:53:21Z","lastTransitionTime":"2024-07-02T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 11:53:22.101529 kubelet[2742]: I0702 11:53:22.101434 2742 scope.go:117] "RemoveContainer" containerID="9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c" Jul 2 11:53:22.104250 env[1676]: time="2024-07-02T11:53:22.104157142Z" level=info msg="RemoveContainer for 
\"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\"" Jul 2 11:53:22.108761 env[1676]: time="2024-07-02T11:53:22.108693936Z" level=info msg="RemoveContainer for \"9b27eee0190e5ed86b2aa3d7582ae41d7b566cd015b616aa33f3ec959b97ac0c\" returns successfully" Jul 2 11:53:22.152027 kubelet[2742]: I0702 11:53:22.151999 2742 topology_manager.go:215] "Topology Admit Handler" podUID="ee932e67-9a86-4bf6-8936-5f4722d1f44b" podNamespace="kube-system" podName="cilium-mhmsg" Jul 2 11:53:22.152178 kubelet[2742]: E0702 11:53:22.152046 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="452a3fcc-a3c4-4ca0-9027-db1e67339a02" containerName="mount-cgroup" Jul 2 11:53:22.152178 kubelet[2742]: I0702 11:53:22.152074 2742 memory_manager.go:346] "RemoveStaleState removing state" podUID="452a3fcc-a3c4-4ca0-9027-db1e67339a02" containerName="mount-cgroup" Jul 2 11:53:22.283812 kubelet[2742]: I0702 11:53:22.283633 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-cilium-run\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.283812 kubelet[2742]: I0702 11:53:22.283745 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ee932e67-9a86-4bf6-8936-5f4722d1f44b-cilium-ipsec-secrets\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.284984 kubelet[2742]: I0702 11:53:22.283936 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-host-proc-sys-net\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " 
pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.284984 kubelet[2742]: I0702 11:53:22.284079 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ee932e67-9a86-4bf6-8936-5f4722d1f44b-hubble-tls\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.284984 kubelet[2742]: I0702 11:53:22.284207 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-etc-cni-netd\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.284984 kubelet[2742]: I0702 11:53:22.284280 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ee932e67-9a86-4bf6-8936-5f4722d1f44b-clustermesh-secrets\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.284984 kubelet[2742]: I0702 11:53:22.284349 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee932e67-9a86-4bf6-8936-5f4722d1f44b-cilium-config-path\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284499 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-host-proc-sys-kernel\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284619 2742 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxp5\" (UniqueName: \"kubernetes.io/projected/ee932e67-9a86-4bf6-8936-5f4722d1f44b-kube-api-access-8cxp5\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284695 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-hostproc\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284759 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-lib-modules\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284850 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-xtables-lock\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.285643 kubelet[2742]: I0702 11:53:22.284968 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-bpf-maps\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.286297 kubelet[2742]: I0702 11:53:22.285045 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-cilium-cgroup\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.286297 kubelet[2742]: I0702 11:53:22.285155 2742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ee932e67-9a86-4bf6-8936-5f4722d1f44b-cni-path\") pod \"cilium-mhmsg\" (UID: \"ee932e67-9a86-4bf6-8936-5f4722d1f44b\") " pod="kube-system/cilium-mhmsg" Jul 2 11:53:22.455798 env[1676]: time="2024-07-02T11:53:22.455740755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhmsg,Uid:ee932e67-9a86-4bf6-8936-5f4722d1f44b,Namespace:kube-system,Attempt:0,}" Jul 2 11:53:22.461974 env[1676]: time="2024-07-02T11:53:22.461938695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 11:53:22.461974 env[1676]: time="2024-07-02T11:53:22.461964439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 11:53:22.461974 env[1676]: time="2024-07-02T11:53:22.461973116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 11:53:22.462118 env[1676]: time="2024-07-02T11:53:22.462051301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254 pid=5203 runtime=io.containerd.runc.v2 Jul 2 11:53:22.481084 env[1676]: time="2024-07-02T11:53:22.481030423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mhmsg,Uid:ee932e67-9a86-4bf6-8936-5f4722d1f44b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\"" Jul 2 11:53:22.482537 env[1676]: time="2024-07-02T11:53:22.482485244Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 11:53:22.487025 env[1676]: time="2024-07-02T11:53:22.486983790Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f0ec02ff539c6c2d61666a247606728248f60bf3cb8cdff1890de49a2ba7b44\"" Jul 2 11:53:22.487305 env[1676]: time="2024-07-02T11:53:22.487262381Z" level=info msg="StartContainer for \"4f0ec02ff539c6c2d61666a247606728248f60bf3cb8cdff1890de49a2ba7b44\"" Jul 2 11:53:22.512666 env[1676]: time="2024-07-02T11:53:22.512630783Z" level=info msg="StartContainer for \"4f0ec02ff539c6c2d61666a247606728248f60bf3cb8cdff1890de49a2ba7b44\" returns successfully" Jul 2 11:53:22.532791 env[1676]: time="2024-07-02T11:53:22.532758864Z" level=info msg="shim disconnected" id=4f0ec02ff539c6c2d61666a247606728248f60bf3cb8cdff1890de49a2ba7b44 Jul 2 11:53:22.532791 env[1676]: time="2024-07-02T11:53:22.532793306Z" level=warning msg="cleaning up after shim disconnected" id=4f0ec02ff539c6c2d61666a247606728248f60bf3cb8cdff1890de49a2ba7b44 namespace=k8s.io Jul 2 
11:53:22.532948 env[1676]: time="2024-07-02T11:53:22.532802297Z" level=info msg="cleaning up dead shim" Jul 2 11:53:22.537286 env[1676]: time="2024-07-02T11:53:22.537228364Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5287 runtime=io.containerd.runc.v2\n" Jul 2 11:53:23.104762 env[1676]: time="2024-07-02T11:53:23.104732984Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 11:53:23.109220 env[1676]: time="2024-07-02T11:53:23.109164887Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33aaa2f5698c8f4b02a6f76f02471c0ae070240e9e5e66c3be97a17861d7e225\"" Jul 2 11:53:23.109525 env[1676]: time="2024-07-02T11:53:23.109474955Z" level=info msg="StartContainer for \"33aaa2f5698c8f4b02a6f76f02471c0ae070240e9e5e66c3be97a17861d7e225\"" Jul 2 11:53:23.133984 env[1676]: time="2024-07-02T11:53:23.133958988Z" level=info msg="StartContainer for \"33aaa2f5698c8f4b02a6f76f02471c0ae070240e9e5e66c3be97a17861d7e225\" returns successfully" Jul 2 11:53:23.167278 env[1676]: time="2024-07-02T11:53:23.167125222Z" level=info msg="shim disconnected" id=33aaa2f5698c8f4b02a6f76f02471c0ae070240e9e5e66c3be97a17861d7e225 Jul 2 11:53:23.167278 env[1676]: time="2024-07-02T11:53:23.167241920Z" level=warning msg="cleaning up after shim disconnected" id=33aaa2f5698c8f4b02a6f76f02471c0ae070240e9e5e66c3be97a17861d7e225 namespace=k8s.io Jul 2 11:53:23.167278 env[1676]: time="2024-07-02T11:53:23.167274272Z" level=info msg="cleaning up dead shim" Jul 2 11:53:23.183881 env[1676]: time="2024-07-02T11:53:23.183760506Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:23Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=5348 runtime=io.containerd.runc.v2\n" Jul 2 11:53:23.771543 kubelet[2742]: I0702 11:53:23.771444 2742 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="452a3fcc-a3c4-4ca0-9027-db1e67339a02" path="/var/lib/kubelet/pods/452a3fcc-a3c4-4ca0-9027-db1e67339a02/volumes" Jul 2 11:53:24.120605 env[1676]: time="2024-07-02T11:53:24.120484173Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 11:53:24.130692 env[1676]: time="2024-07-02T11:53:24.130627279Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122\"" Jul 2 11:53:24.131149 env[1676]: time="2024-07-02T11:53:24.131131402Z" level=info msg="StartContainer for \"b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122\"" Jul 2 11:53:24.132071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1235719155.mount: Deactivated successfully. 
Jul 2 11:53:24.154970 env[1676]: time="2024-07-02T11:53:24.154945098Z" level=info msg="StartContainer for \"b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122\" returns successfully" Jul 2 11:53:24.165437 env[1676]: time="2024-07-02T11:53:24.165412530Z" level=info msg="shim disconnected" id=b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122 Jul 2 11:53:24.165437 env[1676]: time="2024-07-02T11:53:24.165438209Z" level=warning msg="cleaning up after shim disconnected" id=b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122 namespace=k8s.io Jul 2 11:53:24.165567 env[1676]: time="2024-07-02T11:53:24.165444786Z" level=info msg="cleaning up dead shim" Jul 2 11:53:24.168993 env[1676]: time="2024-07-02T11:53:24.168977746Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5405 runtime=io.containerd.runc.v2\n" Jul 2 11:53:24.400373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b660a6b5a30243ce0d37ca29bf3ee3e053ccd3ecc01e3c7f1e33e71a59008122-rootfs.mount: Deactivated successfully. 
Jul 2 11:53:25.114243 env[1676]: time="2024-07-02T11:53:25.114215516Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 11:53:25.118728 env[1676]: time="2024-07-02T11:53:25.118703399Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30\"" Jul 2 11:53:25.119042 env[1676]: time="2024-07-02T11:53:25.119020745Z" level=info msg="StartContainer for \"f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30\"" Jul 2 11:53:25.145653 env[1676]: time="2024-07-02T11:53:25.145625381Z" level=info msg="StartContainer for \"f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30\" returns successfully" Jul 2 11:53:25.156402 env[1676]: time="2024-07-02T11:53:25.156369108Z" level=info msg="shim disconnected" id=f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30 Jul 2 11:53:25.156402 env[1676]: time="2024-07-02T11:53:25.156401252Z" level=warning msg="cleaning up after shim disconnected" id=f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30 namespace=k8s.io Jul 2 11:53:25.156585 env[1676]: time="2024-07-02T11:53:25.156408927Z" level=info msg="cleaning up dead shim" Jul 2 11:53:25.161038 env[1676]: time="2024-07-02T11:53:25.161017352Z" level=warning msg="cleanup warnings time=\"2024-07-02T11:53:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5460 runtime=io.containerd.runc.v2\n" Jul 2 11:53:25.401464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4038327511de1e2af8b91fcea309f558f3b72e0421786b71416c744cc207c30-rootfs.mount: Deactivated successfully. 
Jul 2 11:53:26.128082 env[1676]: time="2024-07-02T11:53:26.127982159Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 11:53:26.137629 env[1676]: time="2024-07-02T11:53:26.137579298Z" level=info msg="CreateContainer within sandbox \"5c1eac657683104c362f665da29663b3968da56e6afbb14eae484fb5c5b48254\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa385e1dc03bbf086da71145ce6864c767dc94d0bf3185430aaf78558f13b88e\"" Jul 2 11:53:26.138022 env[1676]: time="2024-07-02T11:53:26.138005492Z" level=info msg="StartContainer for \"aa385e1dc03bbf086da71145ce6864c767dc94d0bf3185430aaf78558f13b88e\"" Jul 2 11:53:26.161067 env[1676]: time="2024-07-02T11:53:26.161041968Z" level=info msg="StartContainer for \"aa385e1dc03bbf086da71145ce6864c767dc94d0bf3185430aaf78558f13b88e\" returns successfully" Jul 2 11:53:26.318461 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 11:53:27.164972 kubelet[2742]: I0702 11:53:27.164902 2742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mhmsg" podStartSLOduration=5.164808181 podCreationTimestamp="2024-07-02 11:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 11:53:27.163925611 +0000 UTC m=+475.454336466" watchObservedRunningTime="2024-07-02 11:53:27.164808181 +0000 UTC m=+475.455219017" Jul 2 11:53:29.212900 systemd-networkd[1408]: lxc_health: Link UP Jul 2 11:53:29.233461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 11:53:29.233533 systemd-networkd[1408]: lxc_health: Gained carrier Jul 2 11:53:31.232594 systemd-networkd[1408]: lxc_health: Gained IPv6LL Jul 2 11:53:31.773997 env[1676]: time="2024-07-02T11:53:31.773930887Z" level=info msg="StopPodSandbox for 
\"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\"" Jul 2 11:53:31.774217 env[1676]: time="2024-07-02T11:53:31.773985848Z" level=info msg="TearDown network for sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" successfully" Jul 2 11:53:31.774217 env[1676]: time="2024-07-02T11:53:31.774010622Z" level=info msg="StopPodSandbox for \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" returns successfully" Jul 2 11:53:31.774396 env[1676]: time="2024-07-02T11:53:31.774357360Z" level=info msg="RemovePodSandbox for \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\"" Jul 2 11:53:31.774396 env[1676]: time="2024-07-02T11:53:31.774376078Z" level=info msg="Forcibly stopping sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\"" Jul 2 11:53:31.774476 env[1676]: time="2024-07-02T11:53:31.774411854Z" level=info msg="TearDown network for sandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" successfully" Jul 2 11:53:31.775719 env[1676]: time="2024-07-02T11:53:31.775676913Z" level=info msg="RemovePodSandbox \"46f5f8f469f1355cfc57566c7c374c4cdbc24055a75875e1f1dfd98a5f927d05\" returns successfully" Jul 2 11:53:31.775879 env[1676]: time="2024-07-02T11:53:31.775835880Z" level=info msg="StopPodSandbox for \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\"" Jul 2 11:53:31.775921 env[1676]: time="2024-07-02T11:53:31.775871172Z" level=info msg="TearDown network for sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" successfully" Jul 2 11:53:31.775921 env[1676]: time="2024-07-02T11:53:31.775888725Z" level=info msg="StopPodSandbox for \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" returns successfully" Jul 2 11:53:31.776080 env[1676]: time="2024-07-02T11:53:31.776039523Z" level=info msg="RemovePodSandbox for \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\"" Jul 2 11:53:31.776080 
env[1676]: time="2024-07-02T11:53:31.776054820Z" level=info msg="Forcibly stopping sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\"" Jul 2 11:53:31.776142 env[1676]: time="2024-07-02T11:53:31.776091735Z" level=info msg="TearDown network for sandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" successfully" Jul 2 11:53:31.777325 env[1676]: time="2024-07-02T11:53:31.777281052Z" level=info msg="RemovePodSandbox \"6efad680b5e8e08ab3b04f781888e1d4e93205ff3ea4a9e0db6adf04abbd0b66\" returns successfully" Jul 2 11:53:31.777471 env[1676]: time="2024-07-02T11:53:31.777455975Z" level=info msg="StopPodSandbox for \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\"" Jul 2 11:53:31.777510 env[1676]: time="2024-07-02T11:53:31.777490242Z" level=info msg="TearDown network for sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" successfully" Jul 2 11:53:31.777510 env[1676]: time="2024-07-02T11:53:31.777507528Z" level=info msg="StopPodSandbox for \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" returns successfully" Jul 2 11:53:31.777662 env[1676]: time="2024-07-02T11:53:31.777616441Z" level=info msg="RemovePodSandbox for \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\"" Jul 2 11:53:31.777662 env[1676]: time="2024-07-02T11:53:31.777637069Z" level=info msg="Forcibly stopping sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\"" Jul 2 11:53:31.777727 env[1676]: time="2024-07-02T11:53:31.777672662Z" level=info msg="TearDown network for sandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" successfully" Jul 2 11:53:31.778749 env[1676]: time="2024-07-02T11:53:31.778708342Z" level=info msg="RemovePodSandbox \"012a13d878fa6f47284c59a78c6c7f8fbc38e9c5903159eed12b72019faac90a\" returns successfully" Jul 2 11:53:34.961336 sshd[5032]: pam_unix(sshd:session): session closed for user core Jul 2 11:53:34.962863 
systemd[1]: sshd@24-147.75.203.15:22-139.178.68.195:42682.service: Deactivated successfully. Jul 2 11:53:34.963385 systemd-logind[1717]: Session 27 logged out. Waiting for processes to exit. Jul 2 11:53:34.963421 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 11:53:34.964058 systemd-logind[1717]: Removed session 27.