Feb 13 02:20:32.555673 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31
Feb 13 02:20:32.555686 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 13 02:20:32.555693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 02:20:32.555697 kernel: BIOS-provided physical RAM map:
Feb 13 02:20:32.555700 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Feb 13 02:20:32.555704 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Feb 13 02:20:32.555708 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Feb 13 02:20:32.555713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Feb 13 02:20:32.555716 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Feb 13 02:20:32.555720 kernel: BIOS-e820: [mem 0x0000000040400000-0x000000006dfbbfff] usable
Feb 13 02:20:32.555724 kernel: BIOS-e820: [mem 0x000000006dfbc000-0x000000006dfbcfff] ACPI NVS
Feb 13 02:20:32.555727 kernel: BIOS-e820: [mem 0x000000006dfbd000-0x000000006dfbdfff] reserved
Feb 13 02:20:32.555731 kernel: BIOS-e820: [mem 0x000000006dfbe000-0x0000000077fc4fff] usable
Feb 13 02:20:32.555735 kernel: BIOS-e820: [mem 0x0000000077fc5000-0x00000000790a7fff] reserved
Feb 13 02:20:32.555740 kernel: BIOS-e820: [mem 0x00000000790a8000-0x0000000079230fff] usable
Feb 13 02:20:32.555745 kernel: BIOS-e820: [mem 0x0000000079231000-0x0000000079662fff] ACPI NVS
Feb 13 02:20:32.555749 kernel: BIOS-e820: [mem 0x0000000079663000-0x000000007befefff] reserved
Feb 13 02:20:32.555753 kernel: BIOS-e820: [mem 0x000000007beff000-0x000000007befffff] usable
Feb 13 02:20:32.555757 kernel: BIOS-e820: [mem 0x000000007bf00000-0x000000007f7fffff] reserved
Feb 13 02:20:32.555761 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 02:20:32.555765 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Feb 13 02:20:32.555769 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Feb 13 02:20:32.555773 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 02:20:32.555778 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Feb 13 02:20:32.555782 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000087f7fffff] usable
Feb 13 02:20:32.555786 kernel: NX (Execute Disable) protection: active
Feb 13 02:20:32.555790 kernel: SMBIOS 3.2.1 present.
Feb 13 02:20:32.555794 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020
Feb 13 02:20:32.555798 kernel: tsc: Detected 3400.000 MHz processor
Feb 13 02:20:32.555802 kernel: tsc: Detected 3399.906 MHz TSC
Feb 13 02:20:32.555806 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 02:20:32.555810 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 02:20:32.555815 kernel: last_pfn = 0x87f800 max_arch_pfn = 0x400000000
Feb 13 02:20:32.555820 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 02:20:32.555824 kernel: last_pfn = 0x7bf00 max_arch_pfn = 0x400000000
Feb 13 02:20:32.555828 kernel: Using GB pages for direct mapping
Feb 13 02:20:32.555832 kernel: ACPI: Early table checksum verification disabled
Feb 13 02:20:32.555836 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Feb 13 02:20:32.555841 kernel: ACPI: XSDT 0x00000000795440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Feb 13 02:20:32.555845 kernel: ACPI: FACP 0x0000000079580620 000114 (v06 01072009 AMI 00010013)
Feb 13 02:20:32.555851 kernel: ACPI: DSDT 0x0000000079544268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Feb 13 02:20:32.555856 kernel: ACPI: FACS 0x0000000079662F80 000040
Feb 13 02:20:32.555861 kernel: ACPI: APIC 0x0000000079580738 00012C (v04 01072009 AMI 00010013)
Feb 13 02:20:32.555866 kernel: ACPI: FPDT 0x0000000079580868 000044 (v01 01072009 AMI 00010013)
Feb 13 02:20:32.555870 kernel: ACPI: FIDT 0x00000000795808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Feb 13 02:20:32.555875 kernel: ACPI: MCFG 0x0000000079580950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Feb 13 02:20:32.555879 kernel: ACPI: SPMI 0x0000000079580990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Feb 13 02:20:32.555884 kernel: ACPI: SSDT 0x00000000795809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Feb 13 02:20:32.555889 kernel: ACPI: SSDT 0x00000000795824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Feb 13 02:20:32.555894 kernel: ACPI: SSDT 0x00000000795856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Feb 13 02:20:32.555898 kernel: ACPI: HPET 0x00000000795879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 02:20:32.555903 kernel: ACPI: SSDT 0x0000000079587A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Feb 13 02:20:32.555907 kernel: ACPI: SSDT 0x00000000795889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Feb 13 02:20:32.555912 kernel: ACPI: UEFI 0x00000000795892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 02:20:32.555916 kernel: ACPI: LPIT 0x0000000079589318 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 02:20:32.555921 kernel: ACPI: SSDT 0x00000000795893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Feb 13 02:20:32.555926 kernel: ACPI: SSDT 0x000000007958BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Feb 13 02:20:32.555931 kernel: ACPI: DBGP 0x000000007958D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Feb 13 02:20:32.555935 kernel: ACPI: DBG2 0x000000007958D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Feb 13 02:20:32.555940 kernel: ACPI: SSDT 0x000000007958D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Feb 13 02:20:32.555944 kernel: ACPI: DMAR 0x000000007958EC70 0000A8 (v01 INTEL EDK2 00000002 01000013)
Feb 13 02:20:32.555949 kernel: ACPI: SSDT 0x000000007958ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Feb 13 02:20:32.555953 kernel: ACPI: TPM2 0x000000007958EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Feb 13 02:20:32.555958 kernel: ACPI: SSDT 0x000000007958EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Feb 13 02:20:32.555963 kernel: ACPI: WSMT 0x000000007958FC28 000028 (v01 \xf5m 01072009 AMI 00010013)
Feb 13 02:20:32.555968 kernel: ACPI: EINJ 0x000000007958FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Feb 13 02:20:32.555972 kernel: ACPI: ERST 0x000000007958FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Feb 13 02:20:32.555977 kernel: ACPI: BERT 0x000000007958FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Feb 13 02:20:32.555981 kernel: ACPI: HEST 0x000000007958FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Feb 13 02:20:32.555986 kernel: ACPI: SSDT 0x0000000079590260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Feb 13 02:20:32.555991 kernel: ACPI: Reserving FACP table memory at [mem 0x79580620-0x79580733]
Feb 13 02:20:32.555995 kernel: ACPI: Reserving DSDT table memory at [mem 0x79544268-0x7958061e]
Feb 13 02:20:32.556000 kernel: ACPI: Reserving FACS table memory at [mem 0x79662f80-0x79662fbf]
Feb 13 02:20:32.556005 kernel: ACPI: Reserving APIC table memory at [mem 0x79580738-0x79580863]
Feb 13 02:20:32.556010 kernel: ACPI: Reserving FPDT table memory at [mem 0x79580868-0x795808ab]
Feb 13 02:20:32.556014 kernel: ACPI: Reserving FIDT table memory at [mem 0x795808b0-0x7958094b]
Feb 13 02:20:32.556018 kernel: ACPI: Reserving MCFG table memory at [mem 0x79580950-0x7958098b]
Feb 13 02:20:32.556023 kernel: ACPI: Reserving SPMI table memory at [mem 0x79580990-0x795809d0]
Feb 13 02:20:32.556027 kernel: ACPI: Reserving SSDT table memory at [mem 0x795809d8-0x795824f3]
Feb 13 02:20:32.556032 kernel: ACPI: Reserving SSDT table memory at [mem 0x795824f8-0x795856bd]
Feb 13 02:20:32.556037 kernel: ACPI: Reserving SSDT table memory at [mem 0x795856c0-0x795879ea]
Feb 13 02:20:32.556041 kernel: ACPI: Reserving HPET table memory at [mem 0x795879f0-0x79587a27]
Feb 13 02:20:32.556046 kernel: ACPI: Reserving SSDT table memory at [mem 0x79587a28-0x795889d5]
Feb 13 02:20:32.556051 kernel: ACPI: Reserving SSDT table memory at [mem 0x795889d8-0x795892ce]
Feb 13 02:20:32.556055 kernel: ACPI: Reserving UEFI table memory at [mem 0x795892d0-0x79589311]
Feb 13 02:20:32.556060 kernel: ACPI: Reserving LPIT table memory at [mem 0x79589318-0x795893ab]
Feb 13 02:20:32.556064 kernel: ACPI: Reserving SSDT table memory at [mem 0x795893b0-0x7958bb8d]
Feb 13 02:20:32.556069 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958bb90-0x7958d071]
Feb 13 02:20:32.556073 kernel: ACPI: Reserving DBGP table memory at [mem 0x7958d078-0x7958d0ab]
Feb 13 02:20:32.556078 kernel: ACPI: Reserving DBG2 table memory at [mem 0x7958d0b0-0x7958d103]
Feb 13 02:20:32.556082 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958d108-0x7958ec6e]
Feb 13 02:20:32.556088 kernel: ACPI: Reserving DMAR table memory at [mem 0x7958ec70-0x7958ed17]
Feb 13 02:20:32.556092 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ed18-0x7958ee5b]
Feb 13 02:20:32.556097 kernel: ACPI: Reserving TPM2 table memory at [mem 0x7958ee60-0x7958ee93]
Feb 13 02:20:32.556101 kernel: ACPI: Reserving SSDT table memory at [mem 0x7958ee98-0x7958fc26]
Feb 13 02:20:32.556106 kernel: ACPI: Reserving WSMT table memory at [mem 0x7958fc28-0x7958fc4f]
Feb 13 02:20:32.556110 kernel: ACPI: Reserving EINJ table memory at [mem 0x7958fc50-0x7958fd7f]
Feb 13 02:20:32.556115 kernel: ACPI: Reserving ERST table memory at [mem 0x7958fd80-0x7958ffaf]
Feb 13 02:20:32.556119 kernel: ACPI: Reserving BERT table memory at [mem 0x7958ffb0-0x7958ffdf]
Feb 13 02:20:32.556124 kernel: ACPI: Reserving HEST table memory at [mem 0x7958ffe0-0x7959025b]
Feb 13 02:20:32.556129 kernel: ACPI: Reserving SSDT table memory at [mem 0x79590260-0x795903c1]
Feb 13 02:20:32.556133 kernel: No NUMA configuration found
Feb 13 02:20:32.556138 kernel: Faking a node at [mem 0x0000000000000000-0x000000087f7fffff]
Feb 13 02:20:32.556142 kernel: NODE_DATA(0) allocated [mem 0x87f7fa000-0x87f7fffff]
Feb 13 02:20:32.556147 kernel: Zone ranges:
Feb 13 02:20:32.556152 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 02:20:32.556156 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 02:20:32.556161 kernel: Normal [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 02:20:32.556165 kernel: Movable zone start for each node
Feb 13 02:20:32.556170 kernel: Early memory node ranges
Feb 13 02:20:32.556175 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Feb 13 02:20:32.556180 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Feb 13 02:20:32.556184 kernel: node 0: [mem 0x0000000040400000-0x000000006dfbbfff]
Feb 13 02:20:32.556189 kernel: node 0: [mem 0x000000006dfbe000-0x0000000077fc4fff]
Feb 13 02:20:32.556193 kernel: node 0: [mem 0x00000000790a8000-0x0000000079230fff]
Feb 13 02:20:32.556198 kernel: node 0: [mem 0x000000007beff000-0x000000007befffff]
Feb 13 02:20:32.556202 kernel: node 0: [mem 0x0000000100000000-0x000000087f7fffff]
Feb 13 02:20:32.556207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000087f7fffff]
Feb 13 02:20:32.556215 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 02:20:32.556220 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Feb 13 02:20:32.556225 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Feb 13 02:20:32.556230 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Feb 13 02:20:32.556235 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges
Feb 13 02:20:32.556240 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges
Feb 13 02:20:32.556245 kernel: On node 0, zone Normal: 16640 pages in unavailable ranges
Feb 13 02:20:32.556250 kernel: On node 0, zone Normal: 2048 pages in unavailable ranges
Feb 13 02:20:32.556255 kernel: ACPI: PM-Timer IO Port: 0x1808
Feb 13 02:20:32.556260 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 02:20:32.556265 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 02:20:32.556270 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 02:20:32.556275 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 02:20:32.556280 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 02:20:32.556284 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 02:20:32.556289 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 02:20:32.556294 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 02:20:32.556300 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 02:20:32.556304 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 02:20:32.556309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 02:20:32.556314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 02:20:32.556319 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 02:20:32.556324 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 02:20:32.556328 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 02:20:32.556333 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 02:20:32.556338 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Feb 13 02:20:32.556344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 02:20:32.556349 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 02:20:32.556353 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 02:20:32.556358 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 02:20:32.556363 kernel: TSC deadline timer available
Feb 13 02:20:32.556368 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Feb 13 02:20:32.556373 kernel: [mem 0x7f800000-0xdfffffff] available for PCI devices
Feb 13 02:20:32.556377 kernel: Booting paravirtualized kernel on bare hardware
Feb 13 02:20:32.556382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 02:20:32.556388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Feb 13 02:20:32.556393 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 13 02:20:32.556398 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 13 02:20:32.556403 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 02:20:32.556407 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8222327
Feb 13 02:20:32.556412 kernel: Policy zone: Normal
Feb 13 02:20:32.556418 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 13 02:20:32.556423 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 02:20:32.556429 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Feb 13 02:20:32.556434 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 13 02:20:32.556438 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 02:20:32.556443 kernel: Memory: 32683728K/33411988K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved)
Feb 13 02:20:32.556467 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 02:20:32.556472 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 13 02:20:32.556477 kernel: ftrace: allocated 135 pages with 4 groups
Feb 13 02:20:32.556481 kernel: rcu: Hierarchical RCU implementation.
Feb 13 02:20:32.556500 kernel: rcu: RCU event tracing is enabled.
Feb 13 02:20:32.556505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 02:20:32.556510 kernel: Rude variant of Tasks RCU enabled.
Feb 13 02:20:32.556515 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 02:20:32.556520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 02:20:32.556525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 02:20:32.556530 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Feb 13 02:20:32.556535 kernel: random: crng init done
Feb 13 02:20:32.556539 kernel: Console: colour dummy device 80x25
Feb 13 02:20:32.556544 kernel: printk: console [tty0] enabled
Feb 13 02:20:32.556550 kernel: printk: console [ttyS1] enabled
Feb 13 02:20:32.556554 kernel: ACPI: Core revision 20210730
Feb 13 02:20:32.556559 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Feb 13 02:20:32.556564 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 02:20:32.556569 kernel: DMAR: Host address width 39
Feb 13 02:20:32.556574 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0
Feb 13 02:20:32.556579 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e
Feb 13 02:20:32.556583 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Feb 13 02:20:32.556588 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Feb 13 02:20:32.556594 kernel: DMAR: RMRR base: 0x00000079f11000 end: 0x0000007a15afff
Feb 13 02:20:32.556599 kernel: DMAR: RMRR base: 0x0000007d000000 end: 0x0000007f7fffff
Feb 13 02:20:32.556604 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Feb 13 02:20:32.556609 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Feb 13 02:20:32.556613 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Feb 13 02:20:32.556618 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Feb 13 02:20:32.556623 kernel: x2apic enabled
Feb 13 02:20:32.556628 kernel: Switched APIC routing to cluster x2apic.
Feb 13 02:20:32.556633 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 02:20:32.556638 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Feb 13 02:20:32.556643 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Feb 13 02:20:32.556648 kernel: CPU0: Thermal monitoring enabled (TM1)
Feb 13 02:20:32.556653 kernel: process: using mwait in idle threads
Feb 13 02:20:32.556658 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 02:20:32.556663 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 02:20:32.556667 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 02:20:32.556672 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 13 02:20:32.556677 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 13 02:20:32.556683 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 02:20:32.556688 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 02:20:32.556693 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 02:20:32.556698 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 02:20:32.556703 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 13 02:20:32.556708 kernel: TAA: Mitigation: TSX disabled
Feb 13 02:20:32.556713 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Feb 13 02:20:32.556717 kernel: SRBDS: Mitigation: Microcode
Feb 13 02:20:32.556722 kernel: GDS: Vulnerable: No microcode
Feb 13 02:20:32.556728 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 02:20:32.556733 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 02:20:32.556737 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 02:20:32.556742 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 02:20:32.556747 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 02:20:32.556752 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 02:20:32.556757 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 02:20:32.556761 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 02:20:32.556766 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Feb 13 02:20:32.556772 kernel: Freeing SMP alternatives memory: 32K
Feb 13 02:20:32.556777 kernel: pid_max: default: 32768 minimum: 301
Feb 13 02:20:32.556781 kernel: LSM: Security Framework initializing
Feb 13 02:20:32.556786 kernel: SELinux: Initializing.
Feb 13 02:20:32.556791 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 02:20:32.556796 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 02:20:32.556801 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Feb 13 02:20:32.556806 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 02:20:32.556811 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Feb 13 02:20:32.556816 kernel: ... version: 4
Feb 13 02:20:32.556821 kernel: ... bit width: 48
Feb 13 02:20:32.556826 kernel: ... generic registers: 4
Feb 13 02:20:32.556830 kernel: ... value mask: 0000ffffffffffff
Feb 13 02:20:32.556835 kernel: ... max period: 00007fffffffffff
Feb 13 02:20:32.556840 kernel: ... fixed-purpose events: 3
Feb 13 02:20:32.556845 kernel: ... event mask: 000000070000000f
Feb 13 02:20:32.556850 kernel: signal: max sigframe size: 2032
Feb 13 02:20:32.556854 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 02:20:32.556860 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Feb 13 02:20:32.556865 kernel: smp: Bringing up secondary CPUs ...
Feb 13 02:20:32.556870 kernel: x86: Booting SMP configuration:
Feb 13 02:20:32.556874 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Feb 13 02:20:32.556879 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 02:20:32.556884 kernel: #9 #10 #11 #12 #13 #14 #15
Feb 13 02:20:32.556889 kernel: smp: Brought up 1 node, 16 CPUs
Feb 13 02:20:32.556894 kernel: smpboot: Max logical packages: 1
Feb 13 02:20:32.556899 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Feb 13 02:20:32.556904 kernel: devtmpfs: initialized
Feb 13 02:20:32.556909 kernel: x86/mm: Memory block size: 128MB
Feb 13 02:20:32.556914 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6dfbc000-0x6dfbcfff] (4096 bytes)
Feb 13 02:20:32.556919 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x79231000-0x79662fff] (4399104 bytes)
Feb 13 02:20:32.556924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 02:20:32.556929 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 02:20:32.556933 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 02:20:32.556938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 02:20:32.556944 kernel: audit: initializing netlink subsys (disabled)
Feb 13 02:20:32.556949 kernel: audit: type=2000 audit(1707790827.120:1): state=initialized audit_enabled=0 res=1
Feb 13 02:20:32.556953 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 02:20:32.556958 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 02:20:32.556963 kernel: cpuidle: using governor menu
Feb 13 02:20:32.556968 kernel: ACPI: bus type PCI registered
Feb 13 02:20:32.556973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 02:20:32.556977 kernel: dca service started, version 1.12.1
Feb 13 02:20:32.556982 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 02:20:32.556988 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Feb 13 02:20:32.556993 kernel: PCI: Using configuration type 1 for base access
Feb 13 02:20:32.556997 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Feb 13 02:20:32.557002 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 02:20:32.557007 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 02:20:32.557012 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 02:20:32.557017 kernel: ACPI: Added _OSI(Module Device)
Feb 13 02:20:32.557021 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 02:20:32.557026 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 02:20:32.557032 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 02:20:32.557037 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 13 02:20:32.557041 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 13 02:20:32.557046 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 13 02:20:32.557051 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Feb 13 02:20:32.557056 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557061 kernel: ACPI: SSDT 0xFFFF9F01C0215500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Feb 13 02:20:32.557066 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Feb 13 02:20:32.557070 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557076 kernel: ACPI: SSDT 0xFFFF9F01C1CEE000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Feb 13 02:20:32.557081 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557086 kernel: ACPI: SSDT 0xFFFF9F01C1C5F800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Feb 13 02:20:32.557090 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557095 kernel: ACPI: SSDT 0xFFFF9F01C1C58800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Feb 13 02:20:32.557100 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557104 kernel: ACPI: SSDT 0xFFFF9F01C014F000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Feb 13 02:20:32.557109 kernel: ACPI: Dynamic OEM Table Load:
Feb 13 02:20:32.557114 kernel: ACPI: SSDT 0xFFFF9F01C1CEAC00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Feb 13 02:20:32.557119 kernel: ACPI: Interpreter enabled
Feb 13 02:20:32.557124 kernel: ACPI: PM: (supports S0 S5)
Feb 13 02:20:32.557129 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 02:20:32.557134 kernel: HEST: Enabling Firmware First mode for corrected errors.
Feb 13 02:20:32.557139 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Feb 13 02:20:32.557143 kernel: HEST: Table parsing has been initialized.
Feb 13 02:20:32.557148 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Feb 13 02:20:32.557153 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 02:20:32.557158 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Feb 13 02:20:32.557163 kernel: ACPI: PM: Power Resource [USBC]
Feb 13 02:20:32.557168 kernel: ACPI: PM: Power Resource [V0PR]
Feb 13 02:20:32.557173 kernel: ACPI: PM: Power Resource [V1PR]
Feb 13 02:20:32.557178 kernel: ACPI: PM: Power Resource [V2PR]
Feb 13 02:20:32.557182 kernel: ACPI: PM: Power Resource [WRST]
Feb 13 02:20:32.557187 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Feb 13 02:20:32.557192 kernel: ACPI: PM: Power Resource [FN00]
Feb 13 02:20:32.557197 kernel: ACPI: PM: Power Resource [FN01]
Feb 13 02:20:32.557202 kernel: ACPI: PM: Power Resource [FN02]
Feb 13 02:20:32.557207 kernel: ACPI: PM: Power Resource [FN03]
Feb 13 02:20:32.557212 kernel: ACPI: PM: Power Resource [FN04]
Feb 13 02:20:32.557217 kernel: ACPI: PM: Power Resource [PIN]
Feb 13 02:20:32.557222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Feb 13 02:20:32.557287 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 02:20:32.557332 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Feb 13 02:20:32.557373 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Feb 13 02:20:32.557380 kernel: PCI host bridge to bus 0000:00
Feb 13 02:20:32.557425 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 02:20:32.557499 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 02:20:32.557536 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 02:20:32.557572 kernel: pci_bus 0000:00: root bus resource [mem 0x7f800000-0xdfffffff window]
Feb 13 02:20:32.557609 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Feb 13 02:20:32.557644 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Feb 13 02:20:32.557694 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Feb 13 02:20:32.557743 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Feb 13 02:20:32.557787 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.557834 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Feb 13 02:20:32.557878 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.557923 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000
Feb 13 02:20:32.557965 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x94000000-0x94ffffff 64bit]
Feb 13 02:20:32.558009 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref]
Feb 13 02:20:32.558050 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f]
Feb 13 02:20:32.558099 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Feb 13 02:20:32.558141 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9651f000-0x9651ffff 64bit]
Feb 13 02:20:32.558187 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Feb 13 02:20:32.558228 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9651e000-0x9651efff 64bit]
Feb 13 02:20:32.558274 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Feb 13 02:20:32.558318 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x96500000-0x9650ffff 64bit]
Feb 13 02:20:32.558358 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Feb 13 02:20:32.558403 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Feb 13 02:20:32.558447 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x96512000-0x96513fff 64bit]
Feb 13 02:20:32.558507 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9651d000-0x9651dfff 64bit]
Feb 13 02:20:32.558553 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Feb 13 02:20:32.558597 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 02:20:32.558643 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Feb 13 02:20:32.558685 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 02:20:32.558733 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Feb 13 02:20:32.558775 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9651a000-0x9651afff 64bit]
Feb 13 02:20:32.558824 kernel: pci 0000:00:16.0: PME# supported from D3hot
Feb 13 02:20:32.558871 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Feb 13 02:20:32.558914 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x96519000-0x96519fff 64bit]
Feb 13 02:20:32.558956 kernel: pci 0000:00:16.1: PME# supported from D3hot
Feb 13 02:20:32.559002 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Feb 13 02:20:32.559045 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x96518000-0x96518fff 64bit]
Feb 13 02:20:32.559088 kernel: pci 0000:00:16.4: PME# supported from D3hot
Feb 13 02:20:32.559134 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Feb 13 02:20:32.559177 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x96510000-0x96511fff]
Feb 13 02:20:32.559219 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x96517000-0x965170ff]
Feb 13 02:20:32.559261 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097]
Feb 13 02:20:32.559303 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083]
Feb 13 02:20:32.559344 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f]
Feb 13 02:20:32.559386 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x96516000-0x965167ff]
Feb 13 02:20:32.559429 kernel: pci 0000:00:17.0: PME# supported from D3hot
Feb 13 02:20:32.559481 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Feb 13 02:20:32.559525 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.559572 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Feb 13 02:20:32.559618 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.559666 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Feb 13 02:20:32.559710 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.559757 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Feb 13 02:20:32.559799 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.559847 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Feb 13 02:20:32.559892 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.559938 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Feb 13 02:20:32.559980 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Feb 13 02:20:32.560027 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Feb 13 02:20:32.560073 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Feb 13 02:20:32.560115 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x96514000-0x965140ff 64bit]
Feb 13 02:20:32.560157 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Feb 13 02:20:32.560203 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Feb 13 02:20:32.560246 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Feb 13 02:20:32.560288 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 02:20:32.560339 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Feb 13 02:20:32.560383 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Feb 13 02:20:32.560428 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x96200000-0x962fffff pref]
Feb 13 02:20:32.560474 kernel: pci 0000:02:00.0: PME# supported from D3cold
Feb 13 02:20:32.560520 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 02:20:32.560564 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 02:20:32.560612 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Feb 13 02:20:32.560657 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Feb 13 02:20:32.560720 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x96100000-0x961fffff pref]
Feb 13 02:20:32.560764 kernel: pci 0000:02:00.1: PME# supported from D3cold
Feb 13 02:20:32.560806 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Feb 13 02:20:32.560851 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Feb 13 02:20:32.560893 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 13 02:20:32.560935 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Feb 13 02:20:32.560976 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 02:20:32.561018 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 13 02:20:32.561064 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Feb 13 02:20:32.561108 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x96400000-0x9647ffff]
Feb 13 02:20:32.561190 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Feb 13 02:20:32.561252 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x96480000-0x96483fff]
Feb 13 02:20:32.561295 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.561337 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 13 02:20:32.561380 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 02:20:32.561421 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Feb 13 02:20:32.561513 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Feb 13 02:20:32.561558 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x96300000-0x9637ffff]
Feb 13 02:20:32.561601 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Feb 13 02:20:32.561644 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x96380000-0x96383fff]
Feb 13 02:20:32.561687 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Feb 13 02:20:32.561728 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 13 02:20:32.561770 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 02:20:32.561812 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Feb 13 02:20:32.561854 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 13 02:20:32.561903 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Feb 13 02:20:32.561946 kernel: pci 0000:07:00.0: enabling Extended Tags
Feb 13 02:20:32.561990 kernel: pci 0000:07:00.0: supports D1 D2
Feb 13 02:20:32.562032 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 02:20:32.562075 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Feb 13 02:20:32.562117 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Feb 13 02:20:32.562159 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.562205 kernel: pci_bus 0000:08: extended config space not accessible
Feb 13 02:20:32.562259 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Feb 13 02:20:32.562306 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x95000000-0x95ffffff]
Feb 13 02:20:32.562352 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x96000000-0x9601ffff]
Feb 13 02:20:32.562397 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Feb 13 02:20:32.562442 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 02:20:32.562526 kernel: pci 0000:08:00.0: supports D1 D2
Feb 13 02:20:32.562572 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 02:20:32.562617 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Feb 13 02:20:32.562660 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 02:20:32.562704 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.562712 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Feb 13 02:20:32.562718 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Feb 13 02:20:32.562723 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Feb 13 02:20:32.562728 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Feb 13 02:20:32.562733 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Feb 13 02:20:32.562740 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Feb 13 02:20:32.562745 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Feb 13 02:20:32.562750 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Feb 13 02:20:32.562755 kernel: iommu: Default domain type: Translated
Feb 13 02:20:32.562760 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 02:20:32.562804 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Feb 13 02:20:32.562850 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 02:20:32.562895 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Feb 13 02:20:32.562902 kernel: vgaarb: loaded
Feb 13 02:20:32.562909 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 13 02:20:32.562914 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 13 02:20:32.562919 kernel: PTP clock support registered
Feb 13 02:20:32.562925 kernel: PCI: Using ACPI for IRQ routing
Feb 13 02:20:32.562930 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 02:20:32.562935 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Feb 13 02:20:32.562940 kernel: e820: reserve RAM buffer [mem 0x6dfbc000-0x6fffffff]
Feb 13 02:20:32.562945 kernel: e820: reserve RAM buffer [mem 0x77fc5000-0x77ffffff]
Feb 13 02:20:32.562950 kernel: e820: reserve RAM buffer [mem 0x79231000-0x7bffffff]
Feb 13 02:20:32.562956 kernel: e820: reserve RAM buffer [mem 0x7bf00000-0x7bffffff]
Feb 13 02:20:32.562961 kernel: e820: reserve RAM buffer [mem 0x87f800000-0x87fffffff]
Feb 13 02:20:32.562966 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 02:20:32.562971 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Feb 13 02:20:32.562977 kernel: clocksource: Switched to clocksource tsc-early
Feb 13 02:20:32.562982 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 02:20:32.562987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 02:20:32.562992 kernel: pnp: PnP ACPI init
Feb 13 02:20:32.563037 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Feb 13 02:20:32.563081 kernel: pnp 00:02: [dma 0 disabled]
Feb 13 02:20:32.563124 kernel: pnp 00:03: [dma 0 disabled]
Feb 13 02:20:32.563165 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Feb 13 02:20:32.563204 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Feb 13 02:20:32.563246 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Feb 13 02:20:32.563287 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Feb 13 02:20:32.563330 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Feb 13 02:20:32.563367 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Feb 13 02:20:32.563405 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Feb 13 02:20:32.563442 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Feb 13 02:20:32.563524 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Feb 13 02:20:32.563562 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Feb 13 02:20:32.563600 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Feb 13 02:20:32.563642 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Feb 13 02:20:32.563679 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Feb 13 02:20:32.563717 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Feb 13 02:20:32.563754 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Feb 13 02:20:32.563791 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Feb 13 02:20:32.563828 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Feb 13 02:20:32.563868 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Feb 13 02:20:32.563908 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Feb 13 02:20:32.563916 kernel: pnp: PnP ACPI: found 10 devices
Feb 13 02:20:32.563921 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 02:20:32.563927 kernel: NET: Registered PF_INET protocol family
Feb 13 02:20:32.563932 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 02:20:32.563937 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 13 02:20:32.563942 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 02:20:32.563949 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 02:20:32.563954 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 13 02:20:32.563959 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Feb 13 02:20:32.563964 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 02:20:32.563970 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 02:20:32.563975 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 02:20:32.563980 kernel: NET: Registered PF_XDP protocol family
Feb 13 02:20:32.564021 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7f800000-0x7f800fff 64bit]
Feb 13 02:20:32.564065 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7f801000-0x7f801fff 64bit]
Feb 13 02:20:32.564109 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7f802000-0x7f802fff 64bit]
Feb 13 02:20:32.564152 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Feb 13 02:20:32.564196 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 02:20:32.564240 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 02:20:32.564284 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Feb 13 02:20:32.564330 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Feb 13 02:20:32.564372 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Feb 13 02:20:32.564416 kernel: pci 0000:00:01.1: bridge window [mem 0x96100000-0x962fffff]
Feb 13 02:20:32.564482 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 02:20:32.564524 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Feb 13 02:20:32.564567 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Feb 13 02:20:32.564610 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Feb 13 02:20:32.564653 kernel: pci 0000:00:1b.4: bridge window [mem 0x96400000-0x964fffff]
Feb 13 02:20:32.564697 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Feb 13 02:20:32.564741 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Feb 13 02:20:32.564784 kernel: pci 0000:00:1b.5: bridge window [mem 0x96300000-0x963fffff]
Feb 13 02:20:32.564826 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Feb 13 02:20:32.564871 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Feb 13 02:20:32.564915 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Feb 13 02:20:32.564959 kernel: pci 0000:07:00.0: bridge window [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.565002 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Feb 13 02:20:32.565048 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Feb 13 02:20:32.565091 kernel: pci 0000:00:1c.1: bridge window [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.565130 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Feb 13 02:20:32.565169 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 02:20:32.565206 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 02:20:32.565246 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 02:20:32.565283 kernel: pci_bus 0000:00: resource 7 [mem 0x7f800000-0xdfffffff window]
Feb 13 02:20:32.565320 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Feb 13 02:20:32.565364 kernel: pci_bus 0000:02: resource 1 [mem 0x96100000-0x962fffff]
Feb 13 02:20:32.565406 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Feb 13 02:20:32.565453 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Feb 13 02:20:32.565493 kernel: pci_bus 0000:04: resource 1 [mem 0x96400000-0x964fffff]
Feb 13 02:20:32.565539 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 02:20:32.565579 kernel: pci_bus 0000:05: resource 1 [mem 0x96300000-0x963fffff]
Feb 13 02:20:32.565622 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Feb 13 02:20:32.565664 kernel: pci_bus 0000:07: resource 1 [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.565725 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Feb 13 02:20:32.565766 kernel: pci_bus 0000:08: resource 1 [mem 0x95000000-0x960fffff]
Feb 13 02:20:32.565773 kernel: PCI: CLS 64 bytes, default 64
Feb 13 02:20:32.565779 kernel: DMAR: No ATSR found
Feb 13 02:20:32.565784 kernel: DMAR: No SATC found
Feb 13 02:20:32.565789 kernel: DMAR: IOMMU feature fl1gp_support inconsistent
Feb 13 02:20:32.565795 kernel: DMAR: IOMMU feature pgsel_inv inconsistent
Feb 13 02:20:32.565801 kernel: DMAR: IOMMU feature nwfs inconsistent
Feb 13 02:20:32.565806 kernel: DMAR: IOMMU feature pasid inconsistent
Feb 13 02:20:32.565811 kernel: DMAR: IOMMU feature eafs inconsistent
Feb 13 02:20:32.565816 kernel: DMAR: IOMMU feature prs inconsistent
Feb 13 02:20:32.565821 kernel: DMAR: IOMMU feature nest inconsistent
Feb 13 02:20:32.565827 kernel: DMAR: IOMMU feature mts inconsistent
Feb 13 02:20:32.565832 kernel: DMAR: IOMMU feature sc_support inconsistent
Feb 13 02:20:32.565837 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent
Feb 13 02:20:32.565842 kernel: DMAR: dmar0: Using Queued invalidation
Feb 13 02:20:32.565848 kernel: DMAR: dmar1: Using Queued invalidation
Feb 13 02:20:32.565891 kernel: pci 0000:00:00.0: Adding to iommu group 0
Feb 13 02:20:32.565935 kernel: pci 0000:00:01.0: Adding to iommu group 1
Feb 13 02:20:32.565977 kernel: pci 0000:00:01.1: Adding to iommu group 1
Feb 13 02:20:32.566019 kernel: pci 0000:00:02.0: Adding to iommu group 2
Feb 13 02:20:32.566061 kernel: pci 0000:00:08.0: Adding to iommu group 3
Feb 13 02:20:32.566104 kernel: pci 0000:00:12.0: Adding to iommu group 4
Feb 13 02:20:32.566145 kernel: pci 0000:00:14.0: Adding to iommu group 5
Feb 13 02:20:32.566189 kernel: pci 0000:00:14.2: Adding to iommu group 5
Feb 13 02:20:32.566230 kernel: pci 0000:00:15.0: Adding to iommu group 6
Feb 13 02:20:32.566270 kernel: pci 0000:00:15.1: Adding to iommu group 6
Feb 13 02:20:32.566312 kernel: pci 0000:00:16.0: Adding to iommu group 7
Feb 13 02:20:32.566353 kernel: pci 0000:00:16.1: Adding to iommu group 7
Feb 13 02:20:32.566394 kernel: pci 0000:00:16.4: Adding to iommu group 7
Feb 13 02:20:32.566435 kernel: pci 0000:00:17.0: Adding to iommu group 8
Feb 13 02:20:32.566521 kernel: pci 0000:00:1b.0: Adding to iommu group 9
Feb 13 02:20:32.566564 kernel: pci 0000:00:1b.4: Adding to iommu group 10
Feb 13 02:20:32.566607 kernel: pci 0000:00:1b.5: Adding to iommu group 11
Feb 13 02:20:32.566648 kernel: pci 0000:00:1c.0: Adding to iommu group 12
Feb 13 02:20:32.566690 kernel: pci 0000:00:1c.1: Adding to iommu group 13
Feb 13 02:20:32.566732 kernel: pci 0000:00:1e.0: Adding to iommu group 14
Feb 13 02:20:32.566774 kernel: pci 0000:00:1f.0: Adding to iommu group 15
Feb 13 02:20:32.566815 kernel: pci 0000:00:1f.4: Adding to iommu group 15
Feb 13 02:20:32.566857 kernel: pci 0000:00:1f.5: Adding to iommu group 15
Feb 13 02:20:32.566902 kernel: pci 0000:02:00.0: Adding to iommu group 1
Feb 13 02:20:32.566946 kernel: pci 0000:02:00.1: Adding to iommu group 1
Feb 13 02:20:32.566991 kernel: pci 0000:04:00.0: Adding to iommu group 16
Feb 13 02:20:32.567034 kernel: pci 0000:05:00.0: Adding to iommu group 17
Feb 13 02:20:32.567078 kernel: pci 0000:07:00.0: Adding to iommu group 18
Feb 13 02:20:32.567125 kernel: pci 0000:08:00.0: Adding to iommu group 18
Feb 13 02:20:32.567133 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Feb 13 02:20:32.567138 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 02:20:32.567145 kernel: software IO TLB: mapped [mem 0x0000000073fc5000-0x0000000077fc5000] (64MB)
Feb 13 02:20:32.567150 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer
Feb 13 02:20:32.567155 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Feb 13 02:20:32.567160 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Feb 13 02:20:32.567165 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Feb 13 02:20:32.567171 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules
Feb 13 02:20:32.567215 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Feb 13 02:20:32.567223 kernel: Initialise system trusted keyrings
Feb 13 02:20:32.567229 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Feb 13 02:20:32.567235 kernel: Key type asymmetric registered
Feb 13 02:20:32.567240 kernel: Asymmetric key parser 'x509' registered
Feb 13 02:20:32.567245 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 13 02:20:32.567250 kernel: io scheduler mq-deadline registered
Feb 13 02:20:32.567255 kernel: io scheduler kyber registered
Feb 13 02:20:32.567260 kernel: io scheduler bfq registered
Feb 13 02:20:32.567302 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122
Feb 13 02:20:32.567344 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123
Feb 13 02:20:32.567388 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124
Feb 13 02:20:32.567430 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125
Feb 13 02:20:32.567512 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126
Feb 13 02:20:32.567554 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127
Feb 13 02:20:32.567596 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128
Feb 13 02:20:32.567642 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Feb 13 02:20:32.567650 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Feb 13 02:20:32.567657 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Feb 13 02:20:32.567662 kernel: pstore: Registered erst as persistent store backend
Feb 13 02:20:32.567667 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 02:20:32.567672 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 02:20:32.567678 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 02:20:32.567683 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 02:20:32.567725 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Feb 13 02:20:32.567733 kernel: i8042: PNP: No PS/2 controller found.
Feb 13 02:20:32.567773 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Feb 13 02:20:32.567811 kernel: rtc_cmos rtc_cmos: registered as rtc0
Feb 13 02:20:32.567851 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T02:20:31 UTC (1707790831)
Feb 13 02:20:32.567889 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Feb 13 02:20:32.567896 kernel: fail to initialize ptp_kvm
Feb 13 02:20:32.567902 kernel: intel_pstate: Intel P-state driver initializing
Feb 13 02:20:32.567907 kernel: intel_pstate: Disabling energy efficiency optimization
Feb 13 02:20:32.567912 kernel: intel_pstate: HWP enabled
Feb 13 02:20:32.567919 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Feb 13 02:20:32.567924 kernel: vesafb: scrolling: redraw
Feb 13 02:20:32.567929 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Feb 13 02:20:32.567934 kernel: vesafb: framebuffer at 0x95000000, mapped to 0x00000000919b3828, using 768k, total 768k
Feb 13 02:20:32.567939 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 02:20:32.567945 kernel: fb0: VESA VGA frame buffer device
Feb 13 02:20:32.567950 kernel: NET: Registered PF_INET6 protocol family
Feb 13 02:20:32.567955 kernel: Segment Routing with IPv6
Feb 13 02:20:32.567960 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 02:20:32.567966 kernel: NET: Registered PF_PACKET protocol family
Feb 13 02:20:32.567971 kernel: Key type dns_resolver registered
Feb 13 02:20:32.567976 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4
Feb 13 02:20:32.567981 kernel: microcode: Microcode Update Driver: v2.2.
Feb 13 02:20:32.567987 kernel: IPI shorthand broadcast: enabled
Feb 13 02:20:32.567992 kernel: sched_clock: Marking stable (2324522971, 1360193894)->(4631656049, -946939184)
Feb 13 02:20:32.567997 kernel: registered taskstats version 1
Feb 13 02:20:32.568002 kernel: Loading compiled-in X.509 certificates
Feb 13 02:20:32.568007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 13 02:20:32.568013 kernel: Key type .fscrypt registered
Feb 13 02:20:32.568018 kernel: Key type fscrypt-provisioning registered
Feb 13 02:20:32.568023 kernel: pstore: Using crash dump compression: deflate
Feb 13 02:20:32.568029 kernel: ima: Allocated hash algorithm: sha1
Feb 13 02:20:32.568034 kernel: ima: No architecture policies found
Feb 13 02:20:32.568039 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 13 02:20:32.568044 kernel: Write protecting the kernel read-only data: 28672k
Feb 13 02:20:32.568049 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 13 02:20:32.568054 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 13 02:20:32.568061 kernel: Run /init as init process
Feb 13 02:20:32.568066 kernel: with arguments:
Feb 13 02:20:32.568071 kernel: /init
Feb 13 02:20:32.568076 kernel: with environment:
Feb 13 02:20:32.568081 kernel: HOME=/
Feb 13 02:20:32.568086 kernel: TERM=linux
Feb 13 02:20:32.568091 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 02:20:32.568098 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 13 02:20:32.568106 systemd[1]: Detected architecture x86-64.
Feb 13 02:20:32.568111 systemd[1]: Running in initrd. Feb 13 02:20:32.568117 systemd[1]: No hostname configured, using default hostname. Feb 13 02:20:32.568122 systemd[1]: Hostname set to . Feb 13 02:20:32.568127 systemd[1]: Initializing machine ID from random generator. Feb 13 02:20:32.568132 systemd[1]: Queued start job for default target initrd.target. Feb 13 02:20:32.568138 systemd[1]: Started systemd-ask-password-console.path. Feb 13 02:20:32.568143 systemd[1]: Reached target cryptsetup.target. Feb 13 02:20:32.568149 systemd[1]: Reached target paths.target. Feb 13 02:20:32.568155 systemd[1]: Reached target slices.target. Feb 13 02:20:32.568160 systemd[1]: Reached target swap.target. Feb 13 02:20:32.568165 systemd[1]: Reached target timers.target. Feb 13 02:20:32.568170 systemd[1]: Listening on iscsid.socket. Feb 13 02:20:32.568176 systemd[1]: Listening on iscsiuio.socket. Feb 13 02:20:32.568181 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 02:20:32.568188 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 02:20:32.568193 systemd[1]: Listening on systemd-journald.socket. Feb 13 02:20:32.568198 kernel: tsc: Refined TSC clocksource calibration: 3407.990 MHz Feb 13 02:20:32.568204 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fcaf6eb0, max_idle_ns: 440795321766 ns Feb 13 02:20:32.568209 kernel: clocksource: Switched to clocksource tsc Feb 13 02:20:32.568214 systemd[1]: Listening on systemd-networkd.socket. Feb 13 02:20:32.568219 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 02:20:32.568225 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 02:20:32.568230 systemd[1]: Reached target sockets.target. Feb 13 02:20:32.568236 systemd[1]: Starting kmod-static-nodes.service... Feb 13 02:20:32.568242 systemd[1]: Finished network-cleanup.service. Feb 13 02:20:32.568247 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 02:20:32.568252 systemd[1]: Starting systemd-journald.service... Feb 13 02:20:32.568258 systemd[1]: Starting systemd-modules-load.service... Feb 13 02:20:32.568266 systemd-journald[269]: Journal started Feb 13 02:20:32.568291 systemd-journald[269]: Runtime Journal (/run/log/journal/aeb63524059f4aa898b0a5d8ee9aade5) is 8.0M, max 639.3M, 631.3M free. Feb 13 02:20:32.570011 systemd-modules-load[270]: Inserted module 'overlay' Feb 13 02:20:32.599875 kernel: audit: type=1334 audit(1707790832.575:2): prog-id=6 op=LOAD Feb 13 02:20:32.599886 systemd[1]: Starting systemd-resolved.service... Feb 13 02:20:32.575000 audit: BPF prog-id=6 op=LOAD Feb 13 02:20:32.643461 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 02:20:32.643492 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 02:20:32.673455 kernel: Bridge firewalling registered Feb 13 02:20:32.673485 systemd[1]: Started systemd-journald.service. Feb 13 02:20:32.687814 systemd-modules-load[270]: Inserted module 'br_netfilter' Feb 13 02:20:32.737339 kernel: audit: type=1130 audit(1707790832.694:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:20:32.694041 systemd-resolved[272]: Positive Trust Anchors: Feb 13 02:20:32.794593 kernel: SCSI subsystem initialized Feb 13 02:20:32.794603 kernel: audit: type=1130 audit(1707790832.748:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.694047 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 02:20:32.914885 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 02:20:32.914897 kernel: audit: type=1130 audit(1707790832.819:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.914905 kernel: device-mapper: uevent: version 1.0.3 Feb 13 02:20:32.914912 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 02:20:32.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.694067 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 02:20:32.997694 kernel: audit: type=1130 audit(1707790832.931:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.695611 systemd-resolved[272]: Defaulting to hostname 'linux'. Feb 13 02:20:33.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:32.695734 systemd[1]: Finished kmod-static-nodes.service. Feb 13 02:20:33.107692 kernel: audit: type=1130 audit(1707790833.005:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.107703 kernel: audit: type=1130 audit(1707790833.060:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 13 02:20:32.749583 systemd[1]: Started systemd-resolved.service. Feb 13 02:20:32.820613 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 02:20:32.917087 systemd-modules-load[270]: Inserted module 'dm_multipath' Feb 13 02:20:32.932754 systemd[1]: Finished systemd-modules-load.service. Feb 13 02:20:33.006793 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 02:20:33.061741 systemd[1]: Reached target nss-lookup.target. Feb 13 02:20:33.117043 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 02:20:33.138196 systemd[1]: Starting systemd-sysctl.service... Feb 13 02:20:33.138578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 02:20:33.141484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 02:20:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.142182 systemd[1]: Finished systemd-sysctl.service. Feb 13 02:20:33.190549 kernel: audit: type=1130 audit(1707790833.140:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.203817 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 02:20:33.265561 kernel: audit: type=1130 audit(1707790833.202:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.252096 systemd[1]: Starting dracut-cmdline.service... Feb 13 02:20:33.280553 dracut-cmdline[293]: dracut-dracut-053 Feb 13 02:20:33.280553 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 02:20:33.280553 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 02:20:33.364530 kernel: Loading iSCSI transport class v2.0-870. Feb 13 02:20:33.364543 kernel: iscsi: registered transport (tcp) Feb 13 02:20:33.364551 kernel: iscsi: registered transport (qla4xxx) Feb 13 02:20:33.388921 kernel: QLogic iSCSI HBA Driver Feb 13 02:20:33.404716 systemd[1]: Finished dracut-cmdline.service. Feb 13 02:20:33.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:33.405235 systemd[1]: Starting dracut-pre-udev.service... 
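The positive trust anchor logged by systemd-resolved above is the DNS root zone's DS record. Its fields follow RFC 4034: key tag 20326 (the root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the 32-byte digest. A small sketch splitting the record exactly as logged:

    # Sketch: decoding the DS record fields from the resolved log line.
    # Field meanings follow RFC 4034; the record text is copied from the log.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _type, key_tag, alg, digest_type, digest = ds.split()
    print(key_tag)      # 20326  (key tag of the root KSK)
    print(alg)          # 8      (RSASHA256)
    print(digest_type)  # 2      (SHA-256)
    print(len(digest))  # 64 hex chars = 32-byte SHA-256 digest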
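dracut-cmdline[293] above echoes the kernel command line it will honor, wrapped across several journal lines (hence the "root=LA" / "BEL=ROOT" split, which is wrapping, not corruption). A sketch of splitting such a command line into bare flags and key=value options; the string is abridged from the log and the parsing is illustrative, not dracut's own code:

    # Sketch: splitting a kernel command line into flags and key=value options.
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected "
               "flatcar.autologin")

    opts, flags = {}, []
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        if sep:
            # repeated keys (e.g. console=) keep every value, in order
            opts.setdefault(key, []).append(val)
        else:
            flags.append(tok)

    print(opts["root"])     # ['LABEL=ROOT']  (note the value itself contains '=')
    print(opts["console"])  # ['tty0', 'ttyS1,115200n8']
    print(flags)            # ['flatcar.autologin']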
Feb 13 02:20:33.461523 kernel: raid6: avx2x4 gen() 48974 MB/s Feb 13 02:20:33.497526 kernel: raid6: avx2x4 xor() 19720 MB/s Feb 13 02:20:33.532525 kernel: raid6: avx2x2 gen() 52738 MB/s Feb 13 02:20:33.567523 kernel: raid6: avx2x2 xor() 32101 MB/s Feb 13 02:20:33.602526 kernel: raid6: avx2x1 gen() 45270 MB/s Feb 13 02:20:33.636450 kernel: raid6: avx2x1 xor() 27933 MB/s Feb 13 02:20:33.670482 kernel: raid6: sse2x4 gen() 21363 MB/s Feb 13 02:20:33.704522 kernel: raid6: sse2x4 xor() 11975 MB/s Feb 13 02:20:33.738480 kernel: raid6: sse2x2 gen() 21603 MB/s Feb 13 02:20:33.772522 kernel: raid6: sse2x2 xor() 13439 MB/s Feb 13 02:20:33.806482 kernel: raid6: sse2x1 gen() 18298 MB/s Feb 13 02:20:33.858415 kernel: raid6: sse2x1 xor() 8917 MB/s Feb 13 02:20:33.858430 kernel: raid6: using algorithm avx2x2 gen() 52738 MB/s Feb 13 02:20:33.858437 kernel: raid6: .... xor() 32101 MB/s, rmw enabled Feb 13 02:20:33.876634 kernel: raid6: using avx2x2 recovery algorithm Feb 13 02:20:33.922452 kernel: xor: automatically using best checksumming function avx Feb 13 02:20:34.001480 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 02:20:34.006711 systemd[1]: Finished dracut-pre-udev.service. Feb 13 02:20:34.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:34.014000 audit: BPF prog-id=7 op=LOAD Feb 13 02:20:34.014000 audit: BPF prog-id=8 op=LOAD Feb 13 02:20:34.016531 systemd[1]: Starting systemd-udevd.service... Feb 13 02:20:34.024170 systemd-udevd[474]: Using default interface naming scheme 'v252'. Feb 13 02:20:34.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:34.030608 systemd[1]: Started systemd-udevd.service. Feb 13 02:20:34.070582 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Feb 13 02:20:34.047112 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 02:20:34.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:34.074547 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 02:20:34.087491 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 02:20:34.160991 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 02:20:34.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:34.185452 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 02:20:34.187456 kernel: libata version 3.00 loaded. Feb 13 02:20:34.223109 kernel: ACPI: bus type USB registered Feb 13 02:20:34.223160 kernel: usbcore: registered new interface driver usbfs Feb 13 02:20:34.223191 kernel: usbcore: registered new interface driver hub Feb 13 02:20:34.241345 kernel: usbcore: registered new device driver usb Feb 13 02:20:34.282459 kernel: AVX2 version of gcm_enc/dec engaged. 
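The raid6 block above is the kernel benchmarking each gen()/xor() implementation and settling on avx2x2. Reduced to the numbers as logged, the choice is the implementation with the highest gen() throughput (the kernel also confirms xor throughput and rmw support for the winner):

    # Sketch: picking the raid6 algorithm the way the benchmark output implies --
    # highest gen() throughput wins. Numbers are copied from the log above.
    gen_mb_s = {
        "avx2x4": 48974, "avx2x2": 52738, "avx2x1": 45270,
        "sse2x4": 21363, "sse2x2": 21603, "sse2x1": 18298,
    }
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(best, gen_mb_s[best])  # -> avx2x2 52738, matching "using algorithm avx2x2"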
Feb 13 02:20:34.282503 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 02:20:34.282612 kernel: AES CTR mode by8 optimization enabled Feb 13 02:20:34.298844 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Feb 13 02:20:34.339723 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 02:20:34.374829 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 02:20:34.374850 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Feb 13 02:20:34.374860 kernel: scsi host0: ahci Feb 13 02:20:34.402313 kernel: scsi host1: ahci Feb 13 02:20:34.402429 kernel: mlx5_core 0000:02:00.0: firmware version: 14.28.2006 Feb 13 02:20:34.402496 kernel: scsi host2: ahci Feb 13 02:20:34.410492 kernel: pps pps0: new PPS source ptp0 Feb 13 02:20:34.410590 kernel: igb 0000:04:00.0: added PHC on eth0 Feb 13 02:20:34.410670 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 02:20:34.410744 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2c Feb 13 02:20:34.410795 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Feb 13 02:20:34.410844 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 02:20:34.431948 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 02:20:34.432451 kernel: scsi host3: ahci Feb 13 02:20:34.460450 kernel: pps pps1: new PPS source ptp1 Feb 13 02:20:34.460529 kernel: scsi host4: ahci Feb 13 02:20:34.493100 kernel: igb 0000:05:00.0: added PHC on eth1 Feb 13 02:20:34.493196 kernel: scsi host5: ahci Feb 13 02:20:34.523421 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 02:20:34.551529 kernel: scsi host6: ahci Feb 13 02:20:34.551629 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:2d Feb 13 02:20:34.551701 kernel: scsi host7: ahci Feb 13 02:20:34.573491 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Feb 13 02:20:34.573580 kernel: ata1: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516100 irq 129 Feb 13 02:20:34.585090 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 02:20:34.608659 kernel: ata2: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516180 irq 129 Feb 13 02:20:34.720079 kernel: ata3: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516200 irq 129 Feb 13 02:20:34.720097 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 02:20:34.720168 kernel: ata4: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516280 irq 129 Feb 13 02:20:34.770339 kernel: ata5: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516300 irq 129 Feb 13 02:20:34.770355 kernel: port_module: 8 callbacks suppressed Feb 13 02:20:34.770363 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Feb 13 02:20:34.816972 kernel: ata6: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516380 irq 129 Feb 13 02:20:34.816987 kernel: ata7: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516400 irq 129 Feb 13 02:20:34.833724 kernel: ata8: SATA max UDMA/133 abar m2048@0x96516000 port 0x96516480 irq 129 Feb 13 02:20:34.878484 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 02:20:35.110597 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 02:20:35.159175 kernel: mlx5_core 0000:02:00.1: firmware version: 14.28.2006 Feb 13 02:20:35.159280 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 02:20:35.159355 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 02:20:35.175451 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.190478 kernel: ata8: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.205492 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.220461 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.234451 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 02:20:35.250480 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 02:20:35.266487 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 02:20:35.283477 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.297491 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 02:20:35.341872 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 02:20:35.341888 kernel: ata1.00: Features: NCQ-prio Feb 13 02:20:35.341896 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 02:20:35.370485 kernel: ata2.00: Features: NCQ-prio Feb 13 02:20:35.388492 kernel: ata1.00: configured for UDMA/133 Feb 13 02:20:35.388508 kernel: ata2.00: configured for UDMA/133 Feb 13 02:20:35.388516 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 02:20:35.418512 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 02:20:35.440497 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 02:20:35.440593 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Feb 13 02:20:35.440649 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 02:20:35.451451 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 02:20:35.451547 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Feb 13 02:20:35.551404 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 02:20:35.551480 kernel: xhci_hcd 0000:00:14.0: 
xHCI Host Controller Feb 13 02:20:35.551534 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 02:20:35.568159 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 02:20:35.587500 kernel: hub 1-0:1.0: USB hub found Feb 13 02:20:35.587582 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:35.587593 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Feb 13 02:20:35.611515 kernel: hub 1-0:1.0: 16 ports detected Feb 13 02:20:35.611591 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 02:20:35.638651 kernel: hub 2-0:1.0: USB hub found Feb 13 02:20:35.638744 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 02:20:35.638819 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 02:20:35.638894 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 02:20:35.638951 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 02:20:35.639007 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 02:20:35.639062 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 02:20:35.639115 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 02:20:35.652452 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 02:20:35.652466 kernel: hub 2-0:1.0: 10 ports detected Feb 13 02:20:35.653512 kernel: usb: port power management may be unreliable Feb 13 02:20:35.681193 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 02:20:35.681272 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 02:20:35.711438 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 02:20:35.725496 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 02:20:35.725571 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 02:20:35.725632 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 02:20:35.869520 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:35.882238 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 02:20:35.897492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 02:20:35.925869 kernel: GPT:9289727 != 937703087 Feb 13 02:20:35.925884 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 02:20:35.940917 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 02:20:35.940989 kernel: GPT:9289727 != 937703087 Feb 13 02:20:35.986864 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 02:20:35.986881 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 02:20:36.015537 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:36.015553 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 02:20:36.030509 kernel: hub 1-14:1.0: USB hub found Feb 13 02:20:36.057018 kernel: hub 1-14:1.0: 4 ports detected Feb 13 02:20:36.057110 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Feb 13 02:20:36.072480 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (528) Feb 13 02:20:36.075834 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 02:20:36.119556 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Feb 13 02:20:36.109269 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 02:20:36.131969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
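The mlx5_core lines above report "63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)". That figure follows from PCIe Gen3 signaling: 8 GT/s per lane across 8 lanes with 128b/130b line coding, minus a small protocol overhead (the driver's exact deduction is not shown in the log, so the last step below is approximate):

    # Sketch: rough PCIe Gen3 x8 bandwidth arithmetic behind the mlx5 line.
    # 128b/130b line coding turns 8 GT/s/lane into ~7.877 Gb/s/lane of payload.
    gt_per_s = 8.0
    lanes = 8
    encoded = gt_per_s * lanes * (128 / 130)
    print(round(encoded, 3))  # -> 63.015 Gb/s; the driver reports 63.008 after
                              # its own (slightly larger) overhead estimate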
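The GPT warnings above ("GPT:9289727 != 937703087") fire because the disk image places its backup GPT header at LBA 9289727, the last sector of the much smaller image it was built as, while on this 480 GB disk the backup header belongs on the true last LBA. The arithmetic, using the sector count logged for sda:

    # Sketch: why "GPT:9289727 != 937703087" -- the alternate GPT header must
    # sit on the disk's last LBA. Sector counts are from the log above.
    total_sectors = 937_703_088      # "[sda] 937703088 512-byte logical blocks"
    expected_alt_header = total_sectors - 1
    found_alt_header = 9_289_727     # where the written image left it
    print(expected_alt_header)                      # 937703087
    print(found_alt_header == expected_alt_header)  # False -> the logged complaint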
Feb 13 02:20:36.159617 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 02:20:36.173405 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 02:20:36.191726 systemd[1]: Starting disk-uuid.service... Feb 13 02:20:36.229631 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:36.229672 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 02:20:36.229843 disk-uuid[692]: Primary Header is updated. Feb 13 02:20:36.229843 disk-uuid[692]: Secondary Entries is updated. Feb 13 02:20:36.229843 disk-uuid[692]: Secondary Header is updated. Feb 13 02:20:36.283493 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:36.283521 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 02:20:36.378510 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 02:20:36.514485 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 02:20:36.545930 kernel: usbcore: registered new interface driver usbhid Feb 13 02:20:36.545988 kernel: usbhid: USB HID core driver Feb 13 02:20:36.579640 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 02:20:36.696524 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 02:20:36.696692 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 02:20:36.696701 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 02:20:37.260275 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 02:20:37.279490 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 02:20:37.279947 disk-uuid[693]: The operation has completed successfully. Feb 13 02:20:37.320414 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 02:20:37.434576 kernel: audit: type=1130 audit(1707790837.326:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.434591 kernel: audit: type=1131 audit(1707790837.326:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.434598 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 02:20:37.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.320460 systemd[1]: Finished disk-uuid.service. Feb 13 02:20:37.340538 systemd[1]: Starting verity-setup.service... Feb 13 02:20:37.484237 systemd[1]: Found device dev-mapper-usr.device. Feb 13 02:20:37.493640 systemd[1]: Mounting sysusr-usr.mount... Feb 13 02:20:37.507763 systemd[1]: Finished verity-setup.service. Feb 13 02:20:37.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:20:37.570453 kernel: audit: type=1130 audit(1707790837.522:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.626454 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 02:20:37.626646 systemd[1]: Mounted sysusr-usr.mount. Feb 13 02:20:37.634730 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 02:20:37.635129 systemd[1]: Starting ignition-setup.service... Feb 13 02:20:37.706522 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 02:20:37.706538 kernel: BTRFS info (device sda6): using free space tree Feb 13 02:20:37.706546 kernel: BTRFS info (device sda6): has skinny extents Feb 13 02:20:37.667890 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 02:20:37.737577 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 02:20:37.720703 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 02:20:37.794451 kernel: audit: type=1130 audit(1707790837.744:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.746345 systemd[1]: Finished ignition-setup.service. Feb 13 02:20:37.864531 kernel: audit: type=1130 audit(1707790837.807:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.809497 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 02:20:37.871000 audit: BPF prog-id=9 op=LOAD Feb 13 02:20:37.873654 systemd[1]: Starting systemd-networkd.service... Feb 13 02:20:37.910602 kernel: audit: type=1334 audit(1707790837.871:24): prog-id=9 op=LOAD Feb 13 02:20:37.909024 systemd-networkd[879]: lo: Link UP Feb 13 02:20:37.969581 kernel: audit: type=1130 audit(1707790837.917:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.933879 ignition[868]: Ignition 2.14.0 Feb 13 02:20:37.909026 systemd-networkd[879]: lo: Gained carrier Feb 13 02:20:37.933883 ignition[868]: Stage: fetch-offline Feb 13 02:20:37.909312 systemd-networkd[879]: Enumeration completed Feb 13 02:20:38.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:20:37.933908 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:20:38.080594 kernel: audit: type=1130 audit(1707790838.009:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:38.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.909351 systemd[1]: Started systemd-networkd.service. Feb 13 02:20:37.933921 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:20:37.910041 systemd-networkd[879]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 02:20:38.156504 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 02:20:38.156587 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Feb 13 02:20:37.936487 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:20:38.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.918565 systemd[1]: Reached target network.target. Feb 13 02:20:37.936562 ignition[868]: parsed url from cmdline: "" Feb 13 02:20:38.187661 iscsid[908]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 02:20:38.187661 iscsid[908]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 02:20:38.187661 iscsid[908]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 13 02:20:38.187661 iscsid[908]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 02:20:38.187661 iscsid[908]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 02:20:38.187661 iscsid[908]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 02:20:38.187661 iscsid[908]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 02:20:38.348606 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 02:20:38.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:38.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:37.966553 unknown[868]: fetched base config from "system" Feb 13 02:20:37.936564 ignition[868]: no config URL provided Feb 13 02:20:37.966557 unknown[868]: fetched user config from "system" Feb 13 02:20:37.936567 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 02:20:37.978133 systemd[1]: Starting iscsiuio.service...
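The iscsid warning above spells out the required InitiatorName format and gives an example. A quick sketch that checks a candidate name against that format; the regex is my reading of the logged message, not code from open-iscsi:

    # Sketch: validating an iSCSI initiator IQN against the format iscsid asks for:
    #   InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]
    import re

    IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

    print(bool(IQN.match("iqn.2001-04.com.redhat:fc6")))  # True  (the logged example)
    print(bool(IQN.match("not-an-iqn")))                  # False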
Feb 13 02:20:37.941495 ignition[868]: parsing config with SHA512: 104f1a2fde4023b6916d60812adc29c1d237eebf2ce04a499fb872f3f6a567dd2ba2d9e60ab2e8db4457bdd576efb8969c9b52cc35464185abc3f5510635cf28 Feb 13 02:20:37.995782 systemd[1]: Started iscsiuio.service. Feb 13 02:20:37.967281 ignition[868]: fetch-offline: fetch-offline passed Feb 13 02:20:38.010819 systemd[1]: Finished ignition-fetch-offline.service. Feb 13 02:20:37.967284 ignition[868]: POST message to Packet Timeline Feb 13 02:20:38.070753 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 02:20:37.967288 ignition[868]: POST Status error: resource requires networking Feb 13 02:20:38.071192 systemd[1]: Starting ignition-kargs.service... Feb 13 02:20:37.967319 ignition[868]: Ignition finished successfully Feb 13 02:20:38.087952 systemd[1]: Starting iscsid.service... Feb 13 02:20:38.075678 ignition[897]: Ignition 2.14.0 Feb 13 02:20:38.106650 systemd[1]: Started iscsid.service. Feb 13 02:20:38.075681 ignition[897]: Stage: kargs Feb 13 02:20:38.123295 systemd-networkd[879]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 02:20:38.075737 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:20:38.163967 systemd[1]: Starting dracut-initqueue.service... Feb 13 02:20:38.075747 ignition[897]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:20:38.180731 systemd[1]: Finished dracut-initqueue.service. Feb 13 02:20:38.077938 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:20:38.195663 systemd[1]: Reached target remote-fs-pre.target. Feb 13 02:20:38.078686 ignition[897]: kargs: kargs passed Feb 13 02:20:38.214604 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 02:20:38.078689 ignition[897]: POST message to Packet Timeline Feb 13 02:20:38.249638 systemd[1]: Reached target remote-fs.target. Feb 13 02:20:38.078698 ignition[897]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 02:20:38.280675 systemd[1]: Starting dracut-pre-mount.service... Feb 13 02:20:38.079985 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43198->[::1]:53: read: connection refused Feb 13 02:20:38.295701 systemd[1]: Finished dracut-pre-mount.service. Feb 13 02:20:38.280416 ignition[897]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 02:20:38.338736 systemd-networkd[879]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 02:20:38.281779 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45382->[::1]:53: read: connection refused Feb 13 02:20:38.366873 systemd-networkd[879]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
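The ignition[897] attempts above fail because the initramfs has no working DNS yet: every lookup of metadata.packet.net goes to [::1]:53 and is refused, and Ignition simply retries until networking comes up (attempt #6 eventually returns OK further below). A sketch of that retry shape; the URL is from the log, but the backoff schedule is a hypothetical stand-in, not Ignition's actual timing:

    # Sketch: retry-until-network-is-up fetch, as the attempt #1..#6 lines suggest.
    import time
    import urllib.request

    def fetch_metadata(url="https://metadata.packet.net/metadata", attempts=6):
        for i in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as exc:  # covers the refused DNS lookups seen above
                print(f"GET {url}: attempt #{i} failed: {exc}")
                time.sleep(2 ** i)  # hypothetical backoff, not Ignition's
        raise RuntimeError("metadata still unreachable")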
Feb 13 02:20:38.395035 systemd-networkd[879]: enp2s0f1np1: Link UP Feb 13 02:20:38.395204 systemd-networkd[879]: enp2s0f1np1: Gained carrier Feb 13 02:20:38.403750 systemd-networkd[879]: enp2s0f0np0: Link UP Feb 13 02:20:38.403948 systemd-networkd[879]: eno2: Link UP Feb 13 02:20:38.682174 ignition[897]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 02:20:38.404130 systemd-networkd[879]: eno1: Link UP Feb 13 02:20:38.683245 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44935->[::1]:53: read: connection refused Feb 13 02:20:39.180499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Feb 13 02:20:39.180558 systemd-networkd[879]: enp2s0f0np0: Gained carrier Feb 13 02:20:39.216660 systemd-networkd[879]: enp2s0f0np0: DHCPv4 address 136.144.54.113/31, gateway 136.144.54.112 acquired from 145.40.83.140 Feb 13 02:20:39.483708 ignition[897]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 02:20:39.484885 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59918->[::1]:53: read: connection refused Feb 13 02:20:40.053942 systemd-networkd[879]: enp2s0f1np1: Gained IPv6LL Feb 13 02:20:40.373912 systemd-networkd[879]: enp2s0f0np0: Gained IPv6LL Feb 13 02:20:41.086745 ignition[897]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 02:20:41.087927 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60992->[::1]:53: read: connection refused Feb 13 02:20:44.291403 ignition[897]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 02:20:44.331185 ignition[897]: GET result: OK Feb 13 02:20:44.522840 ignition[897]: Ignition finished successfully Feb 13 02:20:44.527169 systemd[1]: Finished ignition-kargs.service. Feb 13 02:20:44.613711 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 13 02:20:44.613729 kernel: audit: type=1130 audit(1707790844.536:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:44.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:44.546197 ignition[924]: Ignition 2.14.0 Feb 13 02:20:44.539774 systemd[1]: Starting ignition-disks.service... Feb 13 02:20:44.546200 ignition[924]: Stage: disks Feb 13 02:20:44.546258 ignition[924]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:20:44.546267 ignition[924]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:20:44.548464 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:20:44.549213 ignition[924]: disks: disks passed Feb 13 02:20:44.549216 ignition[924]: POST message to Packet Timeline Feb 13 02:20:44.549225 ignition[924]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 02:20:44.572813 ignition[924]: GET result: OK Feb 13 02:20:44.810722 ignition[924]: Ignition finished successfully Feb 13 02:20:44.813513 systemd[1]: Finished ignition-disks.service. 
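The DHCP lease above gives enp2s0f0np0 the address 136.144.54.113/31 with gateway 136.144.54.112. A /31 is a two-address point-to-point subnet (RFC 3021), so the gateway is simply the other address of the pair:

    # Sketch: the /31 point-to-point subnet from the DHCP lease above (RFC 3021).
    import ipaddress

    net = ipaddress.ip_network("136.144.54.112/31")
    print(list(net))  # [IPv4Address('136.144.54.112'), IPv4Address('136.144.54.113')]
                      # gateway .112 and host .113 are the only two addresses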
Feb 13 02:20:44.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:44.827063 systemd[1]: Reached target initrd-root-device.target. Feb 13 02:20:44.912639 kernel: audit: type=1130 audit(1707790844.825:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:44.898632 systemd[1]: Reached target local-fs-pre.target. Feb 13 02:20:44.898673 systemd[1]: Reached target local-fs.target. Feb 13 02:20:44.920631 systemd[1]: Reached target sysinit.target. Feb 13 02:20:44.920666 systemd[1]: Reached target basic.target. Feb 13 02:20:44.942326 systemd[1]: Starting systemd-fsck-root.service... Feb 13 02:20:44.961286 systemd-fsck[940]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 02:20:44.981009 systemd[1]: Finished systemd-fsck-root.service. Feb 13 02:20:45.073625 kernel: audit: type=1130 audit(1707790844.989:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.073642 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 02:20:44.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:44.992495 systemd[1]: Mounting sysroot.mount... Feb 13 02:20:45.081121 systemd[1]: Mounted sysroot.mount. Feb 13 02:20:45.094716 systemd[1]: Reached target initrd-root-fs.target. Feb 13 02:20:45.102354 systemd[1]: Mounting sysroot-usr.mount... Feb 13 02:20:45.116303 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 02:20:45.135019 systemd[1]: Starting flatcar-static-network.service... Feb 13 02:20:45.150563 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 02:20:45.150665 systemd[1]: Reached target ignition-diskful.target. Feb 13 02:20:45.168976 systemd[1]: Mounted sysroot-usr.mount. Feb 13 02:20:45.193897 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 02:20:45.265568 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (953) Feb 13 02:20:45.265585 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 02:20:45.206443 systemd[1]: Starting initrd-setup-root.service... Feb 13 02:20:45.318089 kernel: BTRFS info (device sda6): using free space tree Feb 13 02:20:45.318101 kernel: BTRFS info (device sda6): has skinny extents Feb 13 02:20:45.267962 systemd[1]: Finished initrd-setup-root.service. Feb 13 02:20:45.398746 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 02:20:45.398762 kernel: audit: type=1130 audit(1707790845.336:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:20:45.398802 coreos-metadata[947]: Feb 13 02:20:45.271 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 02:20:45.398802 coreos-metadata[947]: Feb 13 02:20:45.378 INFO Fetch successful Feb 13 02:20:45.398802 coreos-metadata[947]: Feb 13 02:20:45.395 INFO wrote hostname ci-3510.3.2-a-4f4948c732 to /sysroot/etc/hostname Feb 13 02:20:45.440642 coreos-metadata[948]: Feb 13 02:20:45.270 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 02:20:45.440642 coreos-metadata[948]: Feb 13 02:20:45.339 INFO Fetch successful Feb 13 02:20:45.635607 kernel: audit: type=1130 audit(1707790845.458:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.635623 kernel: audit: type=1130 audit(1707790845.523:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.635631 kernel: audit: type=1131 audit(1707790845.523:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.635694 initrd-setup-root[958]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 02:20:45.338014 systemd[1]: Starting ignition-mount.service... Feb 13 02:20:45.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.689689 initrd-setup-root[966]: cut: /sysroot/etc/group: No such file or directory Feb 13 02:20:45.730661 kernel: audit: type=1130 audit(1707790845.661:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.412068 systemd[1]: Starting sysroot-boot.service... Feb 13 02:20:45.737670 initrd-setup-root[974]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 02:20:45.747684 bash[1018]: umount: /sysroot/usr/share/oem: not mounted. Feb 13 02:20:45.432812 systemd[1]: Finished flatcar-metadata-hostname.service. 
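The systemd-fsck summary above ("ROOT: clean, 602/553520 files, 56013/553472 blocks") translates to a nearly empty root filesystem, as a quick calculation shows:

    # Sketch: usage implied by the fsck summary line above.
    files_used, files_total = 602, 553_520
    blocks_used, blocks_total = 56_013, 553_472
    print(f"{files_used / files_total:.2%}")    # ~0.11% of inodes in use
    print(f"{blocks_used / blocks_total:.2%}")  # ~10.12% of blocks in use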
Feb 13 02:20:45.764685 initrd-setup-root[982]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 02:20:45.774652 ignition[1023]: INFO : Ignition 2.14.0 Feb 13 02:20:45.774652 ignition[1023]: INFO : Stage: mount Feb 13 02:20:45.774652 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:20:45.774652 ignition[1023]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:20:45.774652 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:20:45.774652 ignition[1023]: INFO : mount: mount passed Feb 13 02:20:45.774652 ignition[1023]: INFO : POST message to Packet Timeline Feb 13 02:20:45.774652 ignition[1023]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 02:20:45.774652 ignition[1023]: INFO : GET result: OK Feb 13 02:20:45.459964 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 02:20:45.460039 systemd[1]: Finished flatcar-static-network.service. Feb 13 02:20:45.524728 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 02:20:45.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.948211 ignition[1023]: INFO : Ignition finished successfully Feb 13 02:20:45.962572 kernel: audit: type=1130 audit(1707790845.889:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:20:45.645155 systemd[1]: Finished sysroot-boot.service. Feb 13 02:20:45.877905 systemd[1]: Finished ignition-mount.service. Feb 13 02:20:45.892629 systemd[1]: Starting ignition-files.service... 
Feb 13 02:20:45.984549 ignition[1036]: INFO : Ignition 2.14.0 Feb 13 02:20:45.984549 ignition[1036]: INFO : Stage: files Feb 13 02:20:45.984549 ignition[1036]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:20:45.984549 ignition[1036]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:20:45.984549 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:20:45.984549 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Feb 13 02:20:45.984549 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 02:20:45.984549 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 02:20:45.984549 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 02:20:45.984549 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 02:20:45.984549 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 02:20:45.984549 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 02:20:45.984549 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 02:20:45.984549 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 02:20:45.966805 unknown[1036]: wrote ssh authorized keys file for user: core Feb 13 02:20:46.165778 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 02:20:46.165778 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 02:20:46.165778 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 13 02:20:46.165778 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 02:20:46.165778 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 13 02:20:46.570301 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 02:20:46.648941 ignition[1036]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 13 02:20:46.648941 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 13 02:20:46.692672 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 02:20:46.692672 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 13 
02:20:47.073557 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 02:20:47.134019 ignition[1036]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 13 02:20:47.134019 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 13 02:20:47.176663 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 13 02:20:47.176663 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 13 02:20:47.389330 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 13 02:20:55.065027 ignition[1036]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 13 02:20:55.065027 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 13 02:20:55.105769 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 13 02:20:55.105769 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 13 02:20:55.290042 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 13 02:21:14.606235 ignition[1036]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 13 02:21:14.606235 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 13 02:21:14.646686 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 13 02:21:14.646686 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 13 02:21:14.859341 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 13 02:21:22.109423 ignition[1036]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 13 02:21:22.134665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 13 02:21:22.134665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 13 02:21:22.134665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 13 02:21:22.134665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 02:21:22.134665 ignition[1036]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 02:21:22.568505 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 02:21:22.599653 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 02:21:22.599653 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 13 02:21:22.661697 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1055) Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2799454494" Feb 13 02:21:22.661715 ignition[1036]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2799454494": device or resource busy Feb 13 02:21:22.661715 ignition[1036]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2799454494", trying btrfs: device or resource busy Feb 13 02:21:22.661715 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2799454494" Feb 13 02:21:22.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:22.965697 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2799454494" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem2799454494" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem2799454494" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(16): [started] processing unit "packet-phone-home.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(16): [finished] processing unit "packet-phone-home.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(17): [started] processing unit "containerd.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(17): [finished] processing unit "containerd.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 13 02:21:22.965697 ignition[1036]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 02:21:23.707880 kernel: audit: type=1130 audit(1707790882.905:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707905 kernel: audit: type=1130 audit(1707790883.012:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707915 kernel: audit: type=1130 audit(1707790883.080:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:23.707923 kernel: audit: type=1131 audit(1707790883.080:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707931 kernel: audit: type=1130 audit(1707790883.256:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707941 kernel: audit: type=1131 audit(1707790883.256:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707949 kernel: audit: type=1130 audit(1707790883.463:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.707957 kernel: audit: type=1131 audit(1707790883.636:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:22.888364 systemd[1]: Finished ignition-files.service. 
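[Editor's note] The Ignition files stage above fetches each Kubernetes v1.26.5 binary over plain GETs and refuses to write it unless the payload matches a pinned SHA-512 digest ("file matches expected sum of: ..."). A minimal Python sketch of that verify-before-write step, reusing the kubectl URL and digest exactly as logged; the helper name is ours, not Ignition's:

```python
import hashlib
import urllib.request

# kubectl URL and expected SHA-512, copied from the Ignition log above.
URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl"
EXPECTED_SHA512 = (
    "97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7"
    "bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628"
)

def fetch_and_verify(url: str, expected: str) -> bytes:
    """Download url and raise unless its SHA-512 digest equals expected."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    digest = hashlib.sha512(data).hexdigest()
    if digest != expected:
        raise ValueError(f"checksum mismatch: got {digest}")
    return data

if __name__ == "__main__":
    blob = fetch_and_verify(URL, EXPECTED_SHA512)
    print(f"verified {len(blob)} bytes")
```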
Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(20): [started] setting preset to enabled for "packet-phone-home.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(20): [finished] setting preset to enabled for "packet-phone-home.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 02:21:23.721787 ignition[1036]: INFO : files: files passed Feb 13 02:21:23.721787 ignition[1036]: INFO : POST message to Packet Timeline Feb 13 02:21:23.721787 ignition[1036]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 02:21:23.721787 ignition[1036]: INFO : GET result: OK Feb 13 02:21:23.721787 ignition[1036]: INFO : Ignition finished successfully Feb 13 02:21:24.278659 kernel: audit: type=1131 audit(1707790883.944:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.278685 kernel: audit: type=1131 audit(1707790884.037:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:24.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:22.912844 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 13 02:21:24.296877 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 02:21:24.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.320067 iscsid[908]: iscsid shutting down. Feb 13 02:21:22.973720 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 13 02:21:24.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:22.974037 systemd[1]: Starting ignition-quench.service... Feb 13 02:21:24.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:22.997850 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 02:21:23.013898 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 02:21:24.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.013957 systemd[1]: Finished ignition-quench.service. Feb 13 02:21:24.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.081705 systemd[1]: Reached target ignition-complete.target. Feb 13 02:21:23.206127 systemd[1]: Starting initrd-parse-etc.service... 
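[Editor's note] Several units in this boot are skipped rather than failed because a systemd condition evaluated false, e.g. torcx-profile-populate.service with ConditionPathExists=/sysroot/etc/torcx/next-profile. A deliberately simplified model of that single condition type (real systemd supports many more condition kinds, plus negation and triggering semantics beyond this):

```python
import os

def condition_path_exists(expr: str) -> bool:
    """Simplified ConditionPathExists=: a leading '!' negates the test."""
    negate = expr.startswith("!")
    exists = os.path.exists(expr.lstrip("!"))
    return (not exists) if negate else exists

# The skip logged above: the path is absent, so the unit never starts.
print(condition_path_exists("/sysroot/etc/torcx/next-profile"))
```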
Feb 13 02:21:24.443766 ignition[1084]: INFO : Ignition 2.14.0 Feb 13 02:21:24.443766 ignition[1084]: INFO : Stage: umount Feb 13 02:21:24.443766 ignition[1084]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 02:21:24.443766 ignition[1084]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 02:21:24.443766 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 02:21:24.443766 ignition[1084]: INFO : umount: umount passed Feb 13 02:21:24.443766 ignition[1084]: INFO : POST message to Packet Timeline Feb 13 02:21:24.443766 ignition[1084]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 02:21:24.443766 ignition[1084]: INFO : GET result: OK Feb 13 02:21:24.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.243772 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 02:21:24.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.611749 ignition[1084]: INFO : Ignition finished successfully Feb 13 02:21:24.610000 audit: BPF prog-id=6 op=UNLOAD Feb 13 02:21:23.243822 systemd[1]: Finished initrd-parse-etc.service. Feb 13 02:21:24.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.258051 systemd[1]: Reached target initrd-fs.target. Feb 13 02:21:24.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.393668 systemd[1]: Reached target initrd.target. Feb 13 02:21:24.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.393727 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 02:21:24.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:23.394079 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 02:21:23.436805 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 02:21:24.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.465691 systemd[1]: Starting initrd-cleanup.service... Feb 13 02:21:24.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.536433 systemd[1]: Stopped target nss-lookup.target. Feb 13 02:21:24.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.571744 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 02:21:23.597787 systemd[1]: Stopped target timers.target. Feb 13 02:21:24.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.618009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 02:21:23.618247 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 02:21:23.638328 systemd[1]: Stopped target initrd.target. Feb 13 02:21:24.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.714685 systemd[1]: Stopped target basic.target. Feb 13 02:21:24.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.728672 systemd[1]: Stopped target ignition-complete.target. Feb 13 02:21:24.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.745812 systemd[1]: Stopped target ignition-diskful.target. Feb 13 02:21:23.764816 systemd[1]: Stopped target initrd-root-device.target. Feb 13 02:21:24.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.791884 systemd[1]: Stopped target remote-fs.target. Feb 13 02:21:24.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:23.817156 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 02:21:23.842181 systemd[1]: Stopped target sysinit.target. Feb 13 02:21:23.861175 systemd[1]: Stopped target local-fs.target. Feb 13 02:21:23.883048 systemd[1]: Stopped target local-fs-pre.target. Feb 13 02:21:23.905154 systemd[1]: Stopped target swap.target. 
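[Editor's note] The umount-stage Ignition run above identifies the base config it parsed by a SHA-512 fingerprint of the raw bytes ("parsing config with SHA512: 0131bd..."). Reproducing that fingerprint is a one-liner; a sketch, with the path taken from the log (it only exists on a Flatcar host):

```python
import hashlib

# Ignition logs "parsing config with SHA512: <digest>"; the digest is just
# the SHA-512 of the config file's raw bytes.
with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())
```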
Feb 13 02:21:23.925050 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 02:21:23.925406 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 02:21:23.946378 systemd[1]: Stopped target cryptsetup.target. Feb 13 02:21:24.024665 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 02:21:24.024731 systemd[1]: Stopped dracut-initqueue.service. Feb 13 02:21:24.038872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 02:21:24.038943 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 02:21:24.108761 systemd[1]: Stopped target paths.target. Feb 13 02:21:24.121804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 02:21:24.125675 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 02:21:24.140803 systemd[1]: Stopped target slices.target. Feb 13 02:21:24.160804 systemd[1]: Stopped target sockets.target. Feb 13 02:21:24.189924 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 02:21:25.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:24.190175 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 02:21:24.216131 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 02:21:24.216489 systemd[1]: Stopped ignition-files.service. Feb 13 02:21:25.101000 audit: BPF prog-id=8 op=UNLOAD Feb 13 02:21:25.101000 audit: BPF prog-id=7 op=UNLOAD Feb 13 02:21:25.102000 audit: BPF prog-id=5 op=UNLOAD Feb 13 02:21:25.102000 audit: BPF prog-id=4 op=UNLOAD Feb 13 02:21:25.102000 audit: BPF prog-id=3 op=UNLOAD Feb 13 02:21:24.231253 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 02:21:24.231652 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 02:21:24.251403 systemd[1]: Stopping ignition-mount.service... Feb 13 02:21:24.270753 systemd[1]: Stopping iscsid.service... Feb 13 02:21:24.286539 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 02:21:24.286724 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 02:21:24.308342 systemd[1]: Stopping sysroot-boot.service... Feb 13 02:21:24.326677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 02:21:24.327154 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 02:21:24.342172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 02:21:24.342512 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 02:21:24.374936 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 02:21:24.376798 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 02:21:24.377106 systemd[1]: Stopped iscsid.service. Feb 13 02:21:24.390867 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 02:21:24.391084 systemd[1]: Stopped sysroot-boot.service. Feb 13 02:21:24.407014 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 02:21:24.407276 systemd[1]: Closed iscsid.socket. Feb 13 02:21:24.419977 systemd[1]: Stopping iscsiuio.service... Feb 13 02:21:24.435277 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 02:21:24.435590 systemd[1]: Stopped iscsiuio.service. Feb 13 02:21:24.452316 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 02:21:24.452545 systemd[1]: Finished initrd-cleanup.service. Feb 13 02:21:24.469905 systemd[1]: Stopped target network.target. 
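[Editor's note] The long run of "Stopped ..." / "Deactivated successfully" entries above is systemd unwinding the initrd units in reverse dependency order. When auditing such a teardown it helps to pull just the stop events out of the captured console; a small sketch over two sample lines lifted from this log:

```python
import re

STOP_RE = re.compile(r"systemd\[1\]: (Stopped(?: target)? .+)\.$")

def stop_events(lines):
    """Yield the 'Stopped ...' events from captured journal lines, in order."""
    for line in lines:
        m = STOP_RE.search(line)
        if m:
            yield m.group(1)

log = [
    "Feb 13 02:21:24.140803 systemd[1]: Stopped target slices.target.",
    "Feb 13 02:21:24.160804 systemd[1]: Stopped target sockets.target.",
]
print(list(stop_events(log)))
# ['Stopped target slices.target', 'Stopped target sockets.target']
```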
Feb 13 02:21:25.150457 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Feb 13 02:21:24.482813 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 02:21:24.482908 systemd[1]: Closed iscsiuio.socket. Feb 13 02:21:24.501085 systemd[1]: Stopping systemd-networkd.service... Feb 13 02:21:24.510595 systemd-networkd[879]: enp2s0f0np0: DHCPv6 lease lost Feb 13 02:21:24.519663 systemd-networkd[879]: enp2s0f1np1: DHCPv6 lease lost Feb 13 02:21:25.149000 audit: BPF prog-id=9 op=UNLOAD Feb 13 02:21:24.527951 systemd[1]: Stopping systemd-resolved.service... Feb 13 02:21:24.546407 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 02:21:24.546663 systemd[1]: Stopped systemd-resolved.service. Feb 13 02:21:24.563069 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 02:21:24.563407 systemd[1]: Stopped systemd-networkd.service. Feb 13 02:21:24.588812 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 02:21:24.588866 systemd[1]: Stopped ignition-mount.service. Feb 13 02:21:24.603792 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 02:21:24.603811 systemd[1]: Closed systemd-networkd.socket. Feb 13 02:21:24.619637 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 02:21:24.619714 systemd[1]: Stopped ignition-disks.service. Feb 13 02:21:24.635750 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 02:21:24.635803 systemd[1]: Stopped ignition-kargs.service. Feb 13 02:21:24.651893 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 02:21:24.652003 systemd[1]: Stopped ignition-setup.service. Feb 13 02:21:24.667946 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 02:21:24.668087 systemd[1]: Stopped initrd-setup-root.service. Feb 13 02:21:24.685781 systemd[1]: Stopping network-cleanup.service... Feb 13 02:21:24.701711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 02:21:24.701974 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 02:21:24.717925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 02:21:24.718064 systemd[1]: Stopped systemd-sysctl.service. Feb 13 02:21:24.733134 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 02:21:24.733272 systemd[1]: Stopped systemd-modules-load.service. Feb 13 02:21:24.749157 systemd[1]: Stopping systemd-udevd.service... Feb 13 02:21:24.768677 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 02:21:24.770342 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 02:21:24.770676 systemd[1]: Stopped systemd-udevd.service. Feb 13 02:21:24.783525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 02:21:24.783647 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 02:21:24.795814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 02:21:24.795910 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 02:21:24.811701 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 02:21:24.811949 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 02:21:24.827987 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 02:21:24.828131 systemd[1]: Stopped dracut-cmdline.service. Feb 13 02:21:24.843973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 02:21:24.844110 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 02:21:24.861896 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
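[Editor's note] initrd-udevadm-cleanup-db.service, started above just before the root switch, exists so udev's device database from the initrd does not leak into the booted system. On stock systemd initrds its one-shot action boils down to udevadm's --cleanup-db verb; treating that as the unit's entire job is our simplification:

```python
import subprocess

# Wipe udev's runtime device database before switch-root so stale initrd
# device state does not carry over into the real system.
subprocess.run(["udevadm", "info", "--cleanup-db"], check=True)
```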
Feb 13 02:21:24.875638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 02:21:24.875877 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 02:21:24.892691 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 02:21:24.892924 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 02:21:25.048558 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 02:21:25.048806 systemd[1]: Stopped network-cleanup.service. Feb 13 02:21:25.059070 systemd[1]: Reached target initrd-switch-root.target. Feb 13 02:21:25.078404 systemd[1]: Starting initrd-switch-root.service... Feb 13 02:21:25.100925 systemd[1]: Switching root. Feb 13 02:21:25.152532 systemd-journald[269]: Journal stopped Feb 13 02:21:29.014707 kernel: SELinux: Class mctp_socket not defined in policy. Feb 13 02:21:29.014723 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 02:21:29.014731 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 02:21:29.014736 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 02:21:29.014742 kernel: SELinux: policy capability open_perms=1 Feb 13 02:21:29.014747 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 02:21:29.014753 kernel: SELinux: policy capability always_check_network=0 Feb 13 02:21:29.014759 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 02:21:29.014765 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 02:21:29.014770 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 02:21:29.014775 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 02:21:29.014781 systemd[1]: Successfully loaded SELinux policy in 330.934ms. Feb 13 02:21:29.014788 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.702ms. Feb 13 02:21:29.014795 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 02:21:29.014803 systemd[1]: Detected architecture x86-64. Feb 13 02:21:29.014809 systemd[1]: Detected first boot. Feb 13 02:21:29.014815 systemd[1]: Hostname set to . Feb 13 02:21:29.014821 systemd[1]: Initializing machine ID from random generator. Feb 13 02:21:29.014827 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 02:21:29.014834 systemd[1]: Populated /etc with preset unit settings. Feb 13 02:21:29.014840 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 02:21:29.014847 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 02:21:29.014854 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 02:21:29.014860 systemd[1]: Queued start job for default target multi-user.target. Feb 13 02:21:29.014866 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 02:21:29.014872 systemd[1]: Created slice system-addon\x2drun.slice. 
Feb 13 02:21:29.014880 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 02:21:29.014886 systemd[1]: Created slice system-getty.slice. Feb 13 02:21:29.014893 systemd[1]: Created slice system-modprobe.slice. Feb 13 02:21:29.014899 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 02:21:29.014905 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 02:21:29.014911 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 02:21:29.014917 systemd[1]: Created slice user.slice. Feb 13 02:21:29.014923 systemd[1]: Started systemd-ask-password-console.path. Feb 13 02:21:29.014930 systemd[1]: Started systemd-ask-password-wall.path. Feb 13 02:21:29.014937 systemd[1]: Set up automount boot.automount. Feb 13 02:21:29.014943 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 02:21:29.014949 systemd[1]: Reached target integritysetup.target. Feb 13 02:21:29.014955 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 02:21:29.014961 systemd[1]: Reached target remote-fs.target. Feb 13 02:21:29.014969 systemd[1]: Reached target slices.target. Feb 13 02:21:29.014975 systemd[1]: Reached target swap.target. Feb 13 02:21:29.014982 systemd[1]: Reached target torcx.target. Feb 13 02:21:29.014989 systemd[1]: Reached target veritysetup.target. Feb 13 02:21:29.014996 systemd[1]: Listening on systemd-coredump.socket. Feb 13 02:21:29.015002 systemd[1]: Listening on systemd-initctl.socket. Feb 13 02:21:29.015008 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 02:21:29.015015 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 13 02:21:29.015021 kernel: audit: type=1400 audit(1707790888.267:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 02:21:29.015028 kernel: audit: type=1335 audit(1707790888.267:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 13 02:21:29.015035 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 02:21:29.015041 systemd[1]: Listening on systemd-journald.socket. Feb 13 02:21:29.015048 systemd[1]: Listening on systemd-networkd.socket. Feb 13 02:21:29.015054 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 02:21:29.015060 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 02:21:29.015068 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 02:21:29.015075 systemd[1]: Mounting dev-hugepages.mount... Feb 13 02:21:29.015081 systemd[1]: Mounting dev-mqueue.mount... Feb 13 02:21:29.015087 systemd[1]: Mounting media.mount... Feb 13 02:21:29.015094 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 02:21:29.015100 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 02:21:29.015107 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 02:21:29.015113 systemd[1]: Mounting tmp.mount... Feb 13 02:21:29.015121 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 02:21:29.015127 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 02:21:29.015134 systemd[1]: Starting kmod-static-nodes.service... Feb 13 02:21:29.015141 systemd[1]: Starting modprobe@configfs.service... Feb 13 02:21:29.015147 systemd[1]: Starting modprobe@dm_mod.service... 
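[Editor's note] The slice names above (system-coreos\x2dmetadata\x2dsshkeys.slice, system-serial\x2dgetty.slice, and friends) show systemd's unit-name escaping: a literal '-' inside a name component becomes the sequence \x2d, because bare dashes separate levels of the slice hierarchy. A minimal model of that one rule (the real systemd-escape tool handles many more characters):

```python
def escape_component(name: str) -> str:
    """Escape one unit-name component the way the slice names in this
    log were produced: a literal '-' becomes the sequence '\\x2d'."""
    return name.replace("-", "\\x2d")

print(f"system-{escape_component('coreos-metadata-sshkeys')}.slice")
# -> system-coreos\x2dmetadata\x2dsshkeys.slice
```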
Feb 13 02:21:29.015154 systemd[1]: Starting modprobe@drm.service... Feb 13 02:21:29.015160 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 02:21:29.015166 systemd[1]: Starting modprobe@fuse.service... Feb 13 02:21:29.015173 kernel: fuse: init (API version 7.34) Feb 13 02:21:29.015180 systemd[1]: Starting modprobe@loop.service... Feb 13 02:21:29.015186 kernel: loop: module loaded Feb 13 02:21:29.015192 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 02:21:29.015199 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 13 02:21:29.015205 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 13 02:21:29.015212 systemd[1]: Starting systemd-journald.service... Feb 13 02:21:29.015218 systemd[1]: Starting systemd-modules-load.service... Feb 13 02:21:29.015225 kernel: audit: type=1305 audit(1707790889.011:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 02:21:29.015233 systemd-journald[1278]: Journal started Feb 13 02:21:29.015259 systemd-journald[1278]: Runtime Journal (/run/log/journal/cf16953dbbd04fa28aba3b93c9f55bcf) is 8.0M, max 639.3M, 631.3M free. Feb 13 02:21:28.267000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 02:21:28.267000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 13 02:21:29.011000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 02:21:29.011000 audit[1278]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc2a071060 a2=4000 a3=7ffc2a0710fc items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 02:21:29.011000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 02:21:29.060449 kernel: audit: type=1300 audit(1707790889.011:94): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc2a071060 a2=4000 a3=7ffc2a0710fc items=0 ppid=1 pid=1278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 02:21:29.060462 kernel: audit: type=1327 audit(1707790889.011:94): proctitle="/usr/lib/systemd/systemd-journald" Feb 13 02:21:29.174651 systemd[1]: Starting systemd-network-generator.service... Feb 13 02:21:29.201635 systemd[1]: Starting systemd-remount-fs.service... Feb 13 02:21:29.226517 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 02:21:29.269505 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 02:21:29.288624 systemd[1]: Started systemd-journald.service. Feb 13 02:21:29.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:29.298160 systemd[1]: Mounted dev-hugepages.mount. Feb 13 02:21:29.345494 kernel: audit: type=1130 audit(1707790889.296:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.352671 systemd[1]: Mounted dev-mqueue.mount. Feb 13 02:21:29.359671 systemd[1]: Mounted media.mount. Feb 13 02:21:29.366674 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 02:21:29.374670 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 02:21:29.382648 systemd[1]: Mounted tmp.mount. Feb 13 02:21:29.389769 systemd[1]: Finished flatcar-tmpfiles.service. Feb 13 02:21:29.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.397779 systemd[1]: Finished kmod-static-nodes.service. Feb 13 02:21:29.445586 kernel: audit: type=1130 audit(1707790889.396:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.453838 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 02:21:29.453932 systemd[1]: Finished modprobe@configfs.service. Feb 13 02:21:29.502487 kernel: audit: type=1130 audit(1707790889.452:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.510687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 02:21:29.510760 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 02:21:29.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.561496 kernel: audit: type=1130 audit(1707790889.509:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.561531 kernel: audit: type=1131 audit(1707790889.509:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 02:21:29.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.620779 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 02:21:29.620850 systemd[1]: Finished modprobe@drm.service. Feb 13 02:21:29.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.629826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 02:21:29.629898 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 02:21:29.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.638794 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 02:21:29.638864 systemd[1]: Finished modprobe@fuse.service. Feb 13 02:21:29.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.647771 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 02:21:29.647842 systemd[1]: Finished modprobe@loop.service. Feb 13 02:21:29.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.656822 systemd[1]: Finished systemd-modules-load.service. Feb 13 02:21:29.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.665797 systemd[1]: Finished systemd-network-generator.service. Feb 13 02:21:29.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.675828 systemd[1]: Finished systemd-remount-fs.service. 
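[Editor's note] modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop above are all instances of a single modprobe@.service template whose ExecStart substitutes the instance name for its specifier. Conceptually each instance reduces to one module load; a sketch (the shipped template also passes quiet/best-effort flags to modprobe):

```python
import subprocess

def modprobe_instance(instance: str) -> None:
    """Load the kernel module named by the template instance, as a
    modprobe@.service instance does with its instance specifier."""
    subprocess.run(["modprobe", instance], check=True)

for module in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    modprobe_instance(module)
```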
Feb 13 02:21:29.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.685876 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 02:21:29.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.695149 systemd[1]: Reached target network-pre.target. Feb 13 02:21:29.705593 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 02:21:29.715150 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 02:21:29.722655 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 02:21:29.723764 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 02:21:29.731156 systemd[1]: Starting systemd-journal-flush.service... Feb 13 02:21:29.735262 systemd-journald[1278]: Time spent on flushing to /var/log/journal/cf16953dbbd04fa28aba3b93c9f55bcf is 14.960ms for 1582 entries. Feb 13 02:21:29.735262 systemd-journald[1278]: System Journal (/var/log/journal/cf16953dbbd04fa28aba3b93c9f55bcf) is 8.0M, max 195.6M, 187.6M free. Feb 13 02:21:29.783427 systemd-journald[1278]: Received client request to flush runtime journal. Feb 13 02:21:29.748569 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 02:21:29.749096 systemd[1]: Starting systemd-random-seed.service... Feb 13 02:21:29.768586 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 02:21:29.769170 systemd[1]: Starting systemd-sysctl.service... Feb 13 02:21:29.776248 systemd[1]: Starting systemd-sysusers.service... Feb 13 02:21:29.784132 systemd[1]: Starting systemd-udev-settle.service... Feb 13 02:21:29.792008 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 02:21:29.800552 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 02:21:29.808695 systemd[1]: Finished systemd-journal-flush.service. Feb 13 02:21:29.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.816677 systemd[1]: Finished systemd-random-seed.service. Feb 13 02:21:29.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.824687 systemd[1]: Finished systemd-sysctl.service. Feb 13 02:21:29.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.832712 systemd[1]: Finished systemd-sysusers.service. Feb 13 02:21:29.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:29.841659 systemd[1]: Reached target first-boot-complete.target. 
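[Editor's note] The journald flush report above is worth a quick sanity check: 14.960 ms for 1582 entries works out to roughly 9.5 µs per entry, so the runtime-to-persistent journal copy is nowhere near a bottleneck at this scale.

```python
flush_ms, entries = 14.960, 1582   # figures from the journald message above
print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # -> 9.5 µs per entry
```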
Feb 13 02:21:29.850191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 02:21:29.858821 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 02:21:29.868308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 02:21:29.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.033333 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 02:21:30.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.042368 systemd[1]: Starting systemd-udevd.service... Feb 13 02:21:30.054057 systemd-udevd[1312]: Using default interface naming scheme 'v252'. Feb 13 02:21:30.073240 systemd[1]: Started systemd-udevd.service. Feb 13 02:21:30.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.085389 systemd[1]: Found device dev-ttyS1.device. Feb 13 02:21:30.107329 systemd[1]: Starting systemd-networkd.service... Feb 13 02:21:30.130092 systemd[1]: Starting systemd-userdbd.service... Feb 13 02:21:30.131759 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 02:21:30.131796 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 02:21:30.131811 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 02:21:30.131825 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 02:21:30.141455 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1381) Feb 13 02:21:30.203454 kernel: IPMI message handler: version 39.2 Feb 13 02:21:30.229551 kernel: ACPI: button: Power Button [PWRF] Feb 13 02:21:30.115000 audit[1313]: AVC avc: denied { confidentiality } for pid=1313 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 02:21:30.115000 audit[1313]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561497decbd0 a1=4d8bc a2=7f32bdd4ebc5 a3=5 items=42 ppid=1312 pid=1313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 02:21:30.115000 audit: CWD cwd="/" Feb 13 02:21:30.115000 audit: PATH item=0 name=(null) inode=1039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=1 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=2 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=3 name=(null) inode=14119 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=4 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.266492 kernel: ipmi device interface Feb 13 02:21:30.115000 audit: PATH item=5 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=6 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=7 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=8 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=9 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=10 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=11 name=(null) inode=14123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=12 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=13 name=(null) inode=14124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=14 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=15 name=(null) inode=14125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=16 name=(null) inode=14121 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=17 name=(null) inode=14126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=18 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=19 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=20 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=21 name=(null) inode=14128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=22 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=23 name=(null) inode=14129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=24 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=25 name=(null) inode=14130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=26 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=27 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=28 name=(null) inode=14127 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=29 name=(null) inode=14132 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=30 name=(null) inode=14118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=31 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=32 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=33 name=(null) inode=14134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=34 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=35 name=(null) inode=14135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=36 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=37 name=(null) inode=14136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=38 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=39 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=40 name=(null) inode=14133 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PATH item=41 name=(null) inode=14138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 02:21:30.115000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 02:21:30.270032 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 13 02:21:30.318845 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 02:21:30.319083 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 02:21:30.326649 systemd[1]: Started systemd-userdbd.service. Feb 13 02:21:30.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.388148 kernel: ipmi_si: IPMI System Interface driver Feb 13 02:21:30.388183 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 02:21:30.388265 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 02:21:30.411714 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 02:21:30.433561 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 02:21:30.479507 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 02:21:30.523092 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 02:21:30.523309 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 02:21:30.545454 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 02:21:30.573498 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 02:21:30.573557 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 02:21:30.589703 systemd-networkd[1392]: bond0: netdev ready Feb 13 02:21:30.591729 systemd-networkd[1392]: lo: Link UP Feb 13 02:21:30.591732 systemd-networkd[1392]: lo: Gained carrier Feb 13 02:21:30.592188 systemd-networkd[1392]: Enumeration completed Feb 13 02:21:30.592299 systemd[1]: Started systemd-networkd.service. Feb 13 02:21:30.592462 systemd-networkd[1392]: bond0: Configuring with /etc/systemd/network/05-bond0.network. 
Feb 13 02:21:30.595029 systemd-networkd[1392]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:67.network. Feb 13 02:21:30.617225 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 02:21:30.617256 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 02:21:30.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.685482 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 13 02:21:30.685574 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 02:21:30.757454 kernel: intel_rapl_common: Found RAPL domain package Feb 13 02:21:30.757492 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 13 02:21:30.757581 kernel: intel_rapl_common: Found RAPL domain core Feb 13 02:21:30.757595 kernel: intel_rapl_common: Found RAPL domain uncore Feb 13 02:21:30.757607 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 02:21:30.823504 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 02:21:30.890451 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 13 02:21:30.897080 systemd-networkd[1392]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 13 02:21:30.926490 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 02:21:30.926514 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 02:21:30.966452 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 02:21:30.972780 systemd[1]: Finished systemd-udev-settle.service. Feb 13 02:21:30.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:30.981256 systemd[1]: Starting lvm2-activation-early.service... Feb 13 02:21:30.996615 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 02:21:31.029899 systemd[1]: Finished lvm2-activation-early.service. Feb 13 02:21:31.052489 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 02:21:31.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.060647 systemd[1]: Reached target cryptsetup.target. Feb 13 02:21:31.069168 systemd[1]: Starting lvm2-activation.service... Feb 13 02:21:31.071303 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 02:21:31.106927 systemd[1]: Finished lvm2-activation.service. Feb 13 02:21:31.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.115646 systemd[1]: Reached target local-fs-pre.target. 
Feb 13 02:21:31.123563 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 02:21:31.123579 systemd[1]: Reached target local-fs.target. Feb 13 02:21:31.131571 systemd[1]: Reached target machines.target. Feb 13 02:21:31.140207 systemd[1]: Starting ldconfig.service... Feb 13 02:21:31.147504 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 02:21:31.147525 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 02:21:31.148266 systemd[1]: Starting systemd-boot-update.service... Feb 13 02:21:31.156023 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 02:21:31.166090 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 02:21:31.166190 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 02:21:31.166213 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 02:21:31.166864 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 02:21:31.167076 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1428 (bootctl) Feb 13 02:21:31.167631 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 02:21:31.183525 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 02:21:31.187919 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 02:21:31.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.192350 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 02:21:31.203609 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 02:21:31.338472 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 13 02:21:31.341995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 02:21:31.343243 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 02:21:31.371473 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 13 02:21:31.371586 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 02:21:31.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.392473 systemd-networkd[1392]: bond0: Link UP Feb 13 02:21:31.392741 systemd-networkd[1392]: enp2s0f1np1: Link UP Feb 13 02:21:31.392917 systemd-networkd[1392]: enp2s0f1np1: Gained carrier Feb 13 02:21:31.394238 systemd-networkd[1392]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:77:66.network. Feb 13 02:21:31.400955 systemd-fsck[1438]: fsck.fat 4.2 (2021-01-31) Feb 13 02:21:31.400955 systemd-fsck[1438]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 02:21:31.404025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Feb 13 02:21:31.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.432346 systemd[1]: Mounting boot.mount... Feb 13 02:21:31.433210 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 02:21:31.433316 kernel: bond0: active interface up! Feb 13 02:21:31.447607 systemd[1]: Mounted boot.mount. Feb 13 02:21:31.455571 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 02:21:31.474827 systemd[1]: Finished systemd-boot-update.service. Feb 13 02:21:31.492453 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 02:21:31.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.502093 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 02:21:31.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 02:21:31.511236 systemd[1]: Starting audit-rules.service... Feb 13 02:21:31.518165 systemd[1]: Starting clean-ca-certificates.service... Feb 13 02:21:31.528161 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 02:21:31.528000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 02:21:31.528000 audit[1465]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcd341bc40 a2=420 a3=0 items=0 ppid=1448 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 02:21:31.528000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 02:21:31.529681 augenrules[1465]: No rules Feb 13 02:21:31.537321 systemd[1]: Starting systemd-resolved.service... Feb 13 02:21:31.545283 systemd[1]: Starting systemd-timesyncd.service... Feb 13 02:21:31.547177 systemd-networkd[1392]: enp2s0f0np0: Link UP Feb 13 02:21:31.547352 systemd-networkd[1392]: bond0: Gained carrier Feb 13 02:21:31.547443 systemd-networkd[1392]: enp2s0f0np0: Gained carrier Feb 13 02:21:31.553114 systemd[1]: Starting systemd-update-utmp.service... Feb 13 02:21:31.553755 systemd-networkd[1392]: enp2s0f1np1: Link DOWN Feb 13 02:21:31.553758 systemd-networkd[1392]: enp2s0f1np1: Lost carrier Feb 13 02:21:31.567581 systemd[1]: Finished audit-rules.service. Feb 13 02:21:31.580486 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 02:21:31.580516 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 13 02:21:31.595560 ldconfig[1427]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 02:21:31.605690 systemd[1]: Finished ldconfig.service. Feb 13 02:21:31.613633 systemd[1]: Finished clean-ca-certificates.service. Feb 13 02:21:31.622628 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 13 02:21:31.635930 systemd[1]: Starting systemd-update-done.service... Feb 13 02:21:31.643491 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 02:21:31.643939 systemd[1]: Finished systemd-update-utmp.service. Feb 13 02:21:31.653689 systemd[1]: Finished systemd-update-done.service. Feb 13 02:21:31.666833 systemd[1]: Started systemd-timesyncd.service. Feb 13 02:21:31.668174 systemd-resolved[1472]: Positive Trust Anchors: Feb 13 02:21:31.668179 systemd-resolved[1472]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 02:21:31.668198 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 02:21:31.672243 systemd-resolved[1472]: Using system hostname 'ci-3510.3.2-a-4f4948c732'. Feb 13 02:21:31.675915 systemd[1]: Reached target time-set.target. Feb 13 02:21:31.722478 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 13 02:21:31.742455 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Feb 13 02:21:31.744439 systemd-networkd[1392]: enp2s0f1np1: Link UP Feb 13 02:21:31.744556 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:31.744609 systemd-networkd[1392]: enp2s0f1np1: Gained carrier Feb 13 02:21:31.744613 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:31.745411 systemd[1]: Started systemd-resolved.service. Feb 13 02:21:31.754560 systemd[1]: Reached target network.target. Feb 13 02:21:31.757586 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:31.757616 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:31.757672 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:31.763536 systemd[1]: Reached target nss-lookup.target. Feb 13 02:21:31.771541 systemd[1]: Reached target sysinit.target. Feb 13 02:21:31.779564 systemd[1]: Started motdgen.path. Feb 13 02:21:31.786534 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 02:21:31.803595 systemd[1]: Started logrotate.timer. Feb 13 02:21:31.809489 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 13 02:21:31.824579 systemd[1]: Started mdadm.timer. Feb 13 02:21:31.832486 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 02:21:31.838530 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 02:21:31.846524 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 02:21:31.846540 systemd[1]: Reached target paths.target. Feb 13 02:21:31.853519 systemd[1]: Reached target timers.target. Feb 13 02:21:31.860646 systemd[1]: Listening on dbus.socket. Feb 13 02:21:31.868161 systemd[1]: Starting docker.socket... Feb 13 02:21:31.875338 systemd[1]: Listening on sshd.socket. 
Feb 13 02:21:31.882616 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 02:21:31.882862 systemd[1]: Listening on docker.socket. Feb 13 02:21:31.889565 systemd[1]: Reached target sockets.target. Feb 13 02:21:31.897521 systemd[1]: Reached target basic.target. Feb 13 02:21:31.904608 systemd[1]: System is tainted: cgroupsv1 Feb 13 02:21:31.904633 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 02:21:31.904647 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 02:21:31.905147 systemd[1]: Starting containerd.service... Feb 13 02:21:31.911978 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 02:21:31.921090 systemd[1]: Starting coreos-metadata.service... Feb 13 02:21:31.928060 systemd[1]: Starting dbus.service... Feb 13 02:21:31.934199 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 02:21:31.938772 jq[1493]: false Feb 13 02:21:31.941573 coreos-metadata[1486]: Feb 13 02:21:31.941 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 02:21:31.942158 systemd[1]: Starting extend-filesystems.service... Feb 13 02:21:31.947218 dbus-daemon[1492]: [system] SELinux support is enabled Feb 13 02:21:31.949547 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 02:21:31.949961 extend-filesystems[1496]: Found sda Feb 13 02:21:31.949961 extend-filesystems[1496]: Found sda1 Feb 13 02:21:31.987622 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 02:21:31.987642 coreos-metadata[1489]: Feb 13 02:21:31.950 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 02:21:31.950520 systemd[1]: Starting motdgen.service... Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda2 Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda3 Feb 13 02:21:31.987819 extend-filesystems[1496]: Found usr Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda4 Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda6 Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda7 Feb 13 02:21:31.987819 extend-filesystems[1496]: Found sda9 Feb 13 02:21:31.987819 extend-filesystems[1496]: Checking size of /dev/sda9 Feb 13 02:21:31.987819 extend-filesystems[1496]: Resized partition /dev/sda9 Feb 13 02:21:31.969892 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 02:21:32.102648 extend-filesystems[1508]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 02:21:31.996159 systemd[1]: Starting prepare-critools.service... Feb 13 02:21:32.010049 systemd[1]: Starting prepare-helm.service... Feb 13 02:21:32.024010 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 02:21:32.038996 systemd[1]: Starting sshd-keygen.service... Feb 13 02:21:32.053485 systemd[1]: Starting systemd-logind.service... Feb 13 02:21:32.070497 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 02:21:32.071053 systemd[1]: Starting tcsd.service... 
Feb 13 02:21:32.119016 jq[1532]: true Feb 13 02:21:32.077418 systemd-logind[1529]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 02:21:32.077429 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 02:21:32.077438 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 02:21:32.077542 systemd-logind[1529]: New seat seat0. Feb 13 02:21:32.083168 systemd[1]: Starting update-engine.service... Feb 13 02:21:32.091120 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 13 02:21:32.110886 systemd[1]: Started dbus.service. Feb 13 02:21:32.125416 update_engine[1531]: I0213 02:21:32.124922 1531 main.cc:92] Flatcar Update Engine starting Feb 13 02:21:32.127360 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 02:21:32.127494 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 02:21:32.127674 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 02:21:32.127781 systemd[1]: Finished motdgen.service. Feb 13 02:21:32.128479 update_engine[1531]: I0213 02:21:32.128455 1531 update_check_scheduler.cc:74] Next update check in 10m54s Feb 13 02:21:32.135634 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 02:21:32.135760 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 02:21:32.140015 tar[1536]: ./ Feb 13 02:21:32.140015 tar[1536]: ./macvlan Feb 13 02:21:32.146103 jq[1542]: true Feb 13 02:21:32.146718 dbus-daemon[1492]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 02:21:32.148031 tar[1537]: crictl Feb 13 02:21:32.149341 tar[1538]: linux-amd64/helm Feb 13 02:21:32.152054 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 02:21:32.152192 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 02:21:32.154723 systemd[1]: Started update-engine.service. Feb 13 02:21:32.156936 env[1543]: time="2024-02-13T02:21:32.156886385Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 02:21:32.162192 tar[1536]: ./static Feb 13 02:21:32.164557 systemd[1]: Started systemd-logind.service. Feb 13 02:21:32.167579 env[1543]: time="2024-02-13T02:21:32.167556877Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 02:21:32.167789 env[1543]: time="2024-02-13T02:21:32.167775157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.168423 env[1543]: time="2024-02-13T02:21:32.168406517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 02:21:32.168423 env[1543]: time="2024-02-13T02:21:32.168421424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.170039 env[1543]: time="2024-02-13T02:21:32.169992867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 02:21:32.170039 env[1543]: time="2024-02-13T02:21:32.170009406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.170039 env[1543]: time="2024-02-13T02:21:32.170029532Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 02:21:32.170110 env[1543]: time="2024-02-13T02:21:32.170039637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.170110 env[1543]: time="2024-02-13T02:21:32.170098393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.172134 env[1543]: time="2024-02-13T02:21:32.172098405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 02:21:32.172244 env[1543]: time="2024-02-13T02:21:32.172230293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 02:21:32.172265 env[1543]: time="2024-02-13T02:21:32.172245950Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 02:21:32.174005 systemd[1]: Started locksmithd.service. Feb 13 02:21:32.174175 env[1543]: time="2024-02-13T02:21:32.174142759Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 02:21:32.174175 env[1543]: time="2024-02-13T02:21:32.174159703Z" level=info msg="metadata content store policy set" policy=shared Feb 13 02:21:32.174278 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Feb 13 02:21:32.180589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 02:21:32.180668 systemd[1]: Reached target system-config.target. Feb 13 02:21:32.186038 env[1543]: time="2024-02-13T02:21:32.186019038Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 02:21:32.186088 env[1543]: time="2024-02-13T02:21:32.186045536Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 02:21:32.186088 env[1543]: time="2024-02-13T02:21:32.186059148Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 02:21:32.186088 env[1543]: time="2024-02-13T02:21:32.186083631Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186142 env[1543]: time="2024-02-13T02:21:32.186096739Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186142 env[1543]: time="2024-02-13T02:21:32.186109603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186142 env[1543]: time="2024-02-13T02:21:32.186124222Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 02:21:32.186188 env[1543]: time="2024-02-13T02:21:32.186145082Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186188 env[1543]: time="2024-02-13T02:21:32.186165308Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186188 env[1543]: time="2024-02-13T02:21:32.186176471Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186232 env[1543]: time="2024-02-13T02:21:32.186186837Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186232 env[1543]: time="2024-02-13T02:21:32.186197702Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 02:21:32.186282 env[1543]: time="2024-02-13T02:21:32.186272696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 02:21:32.186349 env[1543]: time="2024-02-13T02:21:32.186340644Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 02:21:32.186595 env[1543]: time="2024-02-13T02:21:32.186584364Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 02:21:32.186623 env[1543]: time="2024-02-13T02:21:32.186606229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186645 env[1543]: time="2024-02-13T02:21:32.186620186Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 02:21:32.186667 env[1543]: time="2024-02-13T02:21:32.186654949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186688 env[1543]: time="2024-02-13T02:21:32.186667063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186688 env[1543]: time="2024-02-13T02:21:32.186677134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186688 env[1543]: time="2024-02-13T02:21:32.186685837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186746 env[1543]: time="2024-02-13T02:21:32.186695898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186746 env[1543]: time="2024-02-13T02:21:32.186707058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186746 env[1543]: time="2024-02-13T02:21:32.186717975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186746 env[1543]: time="2024-02-13T02:21:32.186728770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186746 env[1543]: time="2024-02-13T02:21:32.186740958Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 02:21:32.186834 env[1543]: time="2024-02-13T02:21:32.186824340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 02:21:32.186854 env[1543]: time="2024-02-13T02:21:32.186840875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186870 env[1543]: time="2024-02-13T02:21:32.186852530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.186870 env[1543]: time="2024-02-13T02:21:32.186862603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 02:21:32.186900 env[1543]: time="2024-02-13T02:21:32.186875701Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 02:21:32.186900 env[1543]: time="2024-02-13T02:21:32.186884691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 02:21:32.186935 env[1543]: time="2024-02-13T02:21:32.186901340Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 02:21:32.186935 env[1543]: time="2024-02-13T02:21:32.186930378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 02:21:32.187034 tar[1536]: ./vlan Feb 13 02:21:32.187136 env[1543]: time="2024-02-13T02:21:32.187094716Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187152683Z" level=info 
msg="Connect containerd service" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187176740Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187542464Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187643926Z" level=info msg="Start subscribing containerd event" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187688402Z" level=info msg="Start recovering state" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187689109Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187717977Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187733636Z" level=info msg="Start event monitor" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187746924Z" level=info msg="containerd successfully booted in 0.031276s" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187758276Z" level=info msg="Start snapshots syncer" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187767387Z" level=info msg="Start cni network conf syncer for default" Feb 13 02:21:32.188745 env[1543]: time="2024-02-13T02:21:32.187773901Z" level=info msg="Start streaming server" Feb 13 02:21:32.189587 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 02:21:32.189670 systemd[1]: Reached target user-config.target. Feb 13 02:21:32.199297 systemd[1]: Started containerd.service. Feb 13 02:21:32.205810 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 02:21:32.208108 tar[1536]: ./portmap Feb 13 02:21:32.227669 tar[1536]: ./host-local Feb 13 02:21:32.237152 locksmithd[1579]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 02:21:32.245268 tar[1536]: ./vrf Feb 13 02:21:32.263479 tar[1536]: ./bridge Feb 13 02:21:32.285176 tar[1536]: ./tuning Feb 13 02:21:32.302528 tar[1536]: ./firewall Feb 13 02:21:32.324960 tar[1536]: ./host-device Feb 13 02:21:32.344616 tar[1536]: ./sbr Feb 13 02:21:32.362655 tar[1536]: ./loopback Feb 13 02:21:32.379674 tar[1536]: ./dhcp Feb 13 02:21:32.405009 tar[1538]: linux-amd64/LICENSE Feb 13 02:21:32.405081 tar[1538]: linux-amd64/README.md Feb 13 02:21:32.407188 systemd[1]: Finished prepare-critools.service. Feb 13 02:21:32.415841 systemd[1]: Finished prepare-helm.service. Feb 13 02:21:32.429046 tar[1536]: ./ptp Feb 13 02:21:32.450141 tar[1536]: ./ipvlan Feb 13 02:21:32.470504 tar[1536]: ./bandwidth Feb 13 02:21:32.485492 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 02:21:32.512841 extend-filesystems[1508]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 02:21:32.512841 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 02:21:32.512841 extend-filesystems[1508]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
Feb 13 02:21:32.549560 extend-filesystems[1496]: Resized filesystem in /dev/sda9 Feb 13 02:21:32.549560 extend-filesystems[1496]: Found sdb Feb 13 02:21:32.513287 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 02:21:32.513422 systemd[1]: Finished extend-filesystems.service. Feb 13 02:21:32.544309 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 02:21:33.032160 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 02:21:33.043690 systemd[1]: Finished sshd-keygen.service. Feb 13 02:21:33.045554 systemd-networkd[1392]: bond0: Gained IPv6LL Feb 13 02:21:33.045820 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:33.052500 systemd[1]: Starting issuegen.service... Feb 13 02:21:33.060780 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 02:21:33.060882 systemd[1]: Finished issuegen.service. Feb 13 02:21:33.068322 systemd[1]: Starting systemd-user-sessions.service... Feb 13 02:21:33.077804 systemd[1]: Finished systemd-user-sessions.service. Feb 13 02:21:33.087185 systemd[1]: Started getty@tty1.service. Feb 13 02:21:33.095144 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 02:21:33.103629 systemd[1]: Reached target getty.target. Feb 13 02:21:33.110774 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:33.110894 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:34.039627 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 02:21:36.976083 coreos-metadata[1489]: Feb 13 02:21:36.975 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 02:21:36.976961 coreos-metadata[1486]: Feb 13 02:21:36.976 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 02:21:37.976375 coreos-metadata[1486]: Feb 13 02:21:37.976 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 02:21:37.976660 coreos-metadata[1489]: Feb 13 02:21:37.976 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 02:21:37.980199 coreos-metadata[1486]: Feb 13 02:21:37.980 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 02:21:37.982914 coreos-metadata[1489]: Feb 13 02:21:37.982 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution Feb 13 02:21:38.116590 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 02:21:38.123683 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 02:21:38.124310 systemd-logind[1529]: New session 1 of user core. Feb 13 02:21:38.124814 systemd[1]: Created slice user-500.slice. Feb 13 02:21:38.125260 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 02:21:38.126403 systemd-logind[1529]: New session 2 of user core. Feb 13 02:21:38.130774 systemd[1]: Finished user-runtime-dir@500.service. 
Feb 13 02:21:38.131388 systemd[1]: Starting user@500.service... Feb 13 02:21:38.133422 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:38.205666 systemd[1626]: Queued start job for default target default.target. Feb 13 02:21:38.205764 systemd[1626]: Reached target paths.target. Feb 13 02:21:38.205774 systemd[1626]: Reached target sockets.target. Feb 13 02:21:38.205782 systemd[1626]: Reached target timers.target. Feb 13 02:21:38.205788 systemd[1626]: Reached target basic.target. Feb 13 02:21:38.205806 systemd[1626]: Reached target default.target. Feb 13 02:21:38.205818 systemd[1626]: Startup finished in 69ms. Feb 13 02:21:38.205878 systemd[1]: Started user@500.service. Feb 13 02:21:38.206436 systemd[1]: Started session-1.scope. Feb 13 02:21:38.206779 systemd[1]: Started session-2.scope. Feb 13 02:21:39.220489 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 13 02:21:39.227501 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 13 02:21:39.980624 coreos-metadata[1486]: Feb 13 02:21:39.980 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 02:21:39.983158 coreos-metadata[1489]: Feb 13 02:21:39.983 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 13 02:21:40.008066 coreos-metadata[1486]: Feb 13 02:21:40.008 INFO Fetch successful Feb 13 02:21:40.008155 coreos-metadata[1489]: Feb 13 02:21:40.008 INFO Fetch successful Feb 13 02:21:40.031040 systemd[1]: Finished coreos-metadata.service. Feb 13 02:21:40.031525 unknown[1486]: wrote ssh authorized keys file for user: core Feb 13 02:21:40.032111 systemd[1]: Started packet-phone-home.service. Feb 13 02:21:40.041997 curl[1653]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 02:21:40.042172 curl[1653]: Dload Upload Total Spent Left Speed Feb 13 02:21:40.045465 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" Feb 13 02:21:40.045714 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 02:21:40.045926 systemd[1]: Reached target multi-user.target. Feb 13 02:21:40.046650 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 02:21:40.050410 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 02:21:40.050557 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 02:21:40.050694 systemd[1]: Startup finished in 55.990s (kernel) + 14.722s (userspace) = 1min 10.713s. Feb 13 02:21:40.067829 systemd[1]: Created slice system-sshd.slice. Feb 13 02:21:40.068419 systemd[1]: Started sshd@0-136.144.54.113:22-139.178.68.195:49050.service. Feb 13 02:21:40.134308 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 49050 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:21:40.137543 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:40.148087 systemd-logind[1529]: New session 3 of user core. Feb 13 02:21:40.150322 systemd[1]: Started session-3.scope. Feb 13 02:21:40.218243 systemd[1]: Started sshd@1-136.144.54.113:22-139.178.68.195:49052.service. Feb 13 02:21:40.237401 curl[1653]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 02:21:40.237896 systemd[1]: packet-phone-home.service: Deactivated successfully. 
Feb 13 02:21:40.253058 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 49052 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:21:40.253737 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:40.256060 systemd-logind[1529]: New session 4 of user core. Feb 13 02:21:40.256498 systemd[1]: Started session-4.scope. Feb 13 02:21:40.307037 sshd[1665]: pam_unix(sshd:session): session closed for user core Feb 13 02:21:40.308309 systemd[1]: Started sshd@2-136.144.54.113:22-139.178.68.195:49058.service. Feb 13 02:21:40.308606 systemd[1]: sshd@1-136.144.54.113:22-139.178.68.195:49052.service: Deactivated successfully. Feb 13 02:21:40.309055 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Feb 13 02:21:40.309102 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 02:21:40.309489 systemd-logind[1529]: Removed session 4. Feb 13 02:21:40.345326 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 49058 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:21:40.346533 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:40.351092 systemd-logind[1529]: New session 5 of user core. Feb 13 02:21:40.352053 systemd[1]: Started session-5.scope. Feb 13 02:21:40.413078 sshd[1671]: pam_unix(sshd:session): session closed for user core Feb 13 02:21:40.418988 systemd[1]: Started sshd@3-136.144.54.113:22-139.178.68.195:49070.service. Feb 13 02:21:40.420510 systemd[1]: sshd@2-136.144.54.113:22-139.178.68.195:49058.service: Deactivated successfully. Feb 13 02:21:40.422951 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Feb 13 02:21:40.423024 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 02:21:40.425530 systemd-logind[1529]: Removed session 5. Feb 13 02:21:40.494078 sshd[1678]: Accepted publickey for core from 139.178.68.195 port 49070 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:21:40.497172 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:40.507530 systemd-logind[1529]: New session 6 of user core. Feb 13 02:21:40.509787 systemd[1]: Started session-6.scope. Feb 13 02:21:40.591240 sshd[1678]: pam_unix(sshd:session): session closed for user core Feb 13 02:21:40.597047 systemd[1]: Started sshd@4-136.144.54.113:22-139.178.68.195:49078.service. Feb 13 02:21:40.598577 systemd[1]: sshd@3-136.144.54.113:22-139.178.68.195:49070.service: Deactivated successfully. Feb 13 02:21:40.600927 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Feb 13 02:21:40.600951 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 02:21:40.603308 systemd-logind[1529]: Removed session 6. Feb 13 02:21:40.671773 sshd[1685]: Accepted publickey for core from 139.178.68.195 port 49078 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:21:40.674861 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:21:40.685170 systemd-logind[1529]: New session 7 of user core. Feb 13 02:21:40.687391 systemd[1]: Started session-7.scope. Feb 13 02:21:40.785873 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 02:21:40.786502 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 02:21:44.739995 systemd[1]: Starting systemd-networkd-wait-online.service... 
Feb 13 02:21:44.744593 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 02:21:44.744783 systemd[1]: Reached target network-online.target. Feb 13 02:21:44.745551 systemd[1]: Starting docker.service... Feb 13 02:21:44.766279 env[1713]: time="2024-02-13T02:21:44.766252484Z" level=info msg="Starting up" Feb 13 02:21:44.766906 env[1713]: time="2024-02-13T02:21:44.766866857Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 02:21:44.766906 env[1713]: time="2024-02-13T02:21:44.766876075Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 02:21:44.766906 env[1713]: time="2024-02-13T02:21:44.766887354Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 02:21:44.766906 env[1713]: time="2024-02-13T02:21:44.766893266Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 02:21:44.768414 env[1713]: time="2024-02-13T02:21:44.768378337Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 13 02:21:44.768414 env[1713]: time="2024-02-13T02:21:44.768387064Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 13 02:21:44.768414 env[1713]: time="2024-02-13T02:21:44.768395612Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 13 02:21:44.768414 env[1713]: time="2024-02-13T02:21:44.768401138Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 13 02:21:45.185303 env[1713]: time="2024-02-13T02:21:45.185214016Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 13 02:21:45.185303 env[1713]: time="2024-02-13T02:21:45.185230843Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 13 02:21:45.185426 env[1713]: time="2024-02-13T02:21:45.185313321Z" level=info msg="Loading containers: start." Feb 13 02:21:45.305519 kernel: Initializing XFRM netlink socket Feb 13 02:21:45.334554 env[1713]: time="2024-02-13T02:21:45.334534158Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 13 02:21:45.335250 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection. Feb 13 02:21:45.394465 systemd-networkd[1392]: docker0: Link UP Feb 13 02:21:45.401195 env[1713]: time="2024-02-13T02:21:45.401152789Z" level=info msg="Loading containers: done." Feb 13 02:21:45.409262 env[1713]: time="2024-02-13T02:21:45.409211620Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 02:21:45.409383 env[1713]: time="2024-02-13T02:21:45.409352358Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 13 02:21:45.409439 env[1713]: time="2024-02-13T02:21:45.409427078Z" level=info msg="Daemon has completed initialization" Feb 13 02:21:45.420555 systemd[1]: Started docker.service. Feb 13 02:21:45.428122 env[1713]: time="2024-02-13T02:21:45.428031251Z" level=info msg="API listen on /run/docker.sock" Feb 13 02:21:45.457932 systemd[1]: Reloading. 
Feb 13 02:21:45.509685 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2024-02-13T02:21:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 02:21:45.509700 /usr/lib/systemd/system-generators/torcx-generator[1872]: time="2024-02-13T02:21:45Z" level=info msg="torcx already run" Feb 13 02:21:45.562738 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 02:21:45.562745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 02:21:45.573616 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 02:21:45.624787 systemd[1]: Started kubelet.service. Feb 13 02:21:45.647744 kubelet[1935]: E0213 02:21:45.647686 1935 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 13 02:21:45.648961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 02:21:45.649042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 02:21:45.666293 systemd-timesyncd[1474]: Contacted time server [2603:c020:0:8369:16e7:baf9:64d9:7355]:123 (2.flatcar.pool.ntp.org). Feb 13 02:21:45.666321 systemd-timesyncd[1474]: Initial clock synchronization to Tue 2024-02-13 02:21:45.629790 UTC. Feb 13 02:21:46.344836 env[1543]: time="2024-02-13T02:21:46.344680089Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 13 02:21:47.223327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622781028.mount: Deactivated successfully. 
Feb 13 02:21:48.927286 env[1543]: time="2024-02-13T02:21:48.927231322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:48.927859 env[1543]: time="2024-02-13T02:21:48.927845357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:48.928939 env[1543]: time="2024-02-13T02:21:48.928925426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:48.929935 env[1543]: time="2024-02-13T02:21:48.929879886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:48.931070 env[1543]: time="2024-02-13T02:21:48.931049877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 13 02:21:48.938215 env[1543]: time="2024-02-13T02:21:48.938179386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 13 02:21:50.862609 env[1543]: time="2024-02-13T02:21:50.862554040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:50.863144 env[1543]: time="2024-02-13T02:21:50.863132642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:50.864397 env[1543]: time="2024-02-13T02:21:50.864357528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:50.865281 env[1543]: time="2024-02-13T02:21:50.865234372Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:50.865790 env[1543]: time="2024-02-13T02:21:50.865745502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 13 02:21:50.871910 env[1543]: time="2024-02-13T02:21:50.871892482Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 13 02:21:52.009880 env[1543]: time="2024-02-13T02:21:52.009833571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:52.011103 env[1543]: time="2024-02-13T02:21:52.011057426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:52.012228 env[1543]: time="2024-02-13T02:21:52.012185305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:52.013472 env[1543]: time="2024-02-13T02:21:52.013441571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:52.013867 env[1543]: time="2024-02-13T02:21:52.013812143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 13 02:21:52.019352 env[1543]: time="2024-02-13T02:21:52.019339317Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 13 02:21:52.850195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188114847.mount: Deactivated successfully.
Feb 13 02:21:53.156688 env[1543]: time="2024-02-13T02:21:53.156603369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.157152 env[1543]: time="2024-02-13T02:21:53.157117946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.158012 env[1543]: time="2024-02-13T02:21:53.157978614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.159180 env[1543]: time="2024-02-13T02:21:53.159166076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.159919 env[1543]: time="2024-02-13T02:21:53.159864624Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 13 02:21:53.167148 env[1543]: time="2024-02-13T02:21:53.167094374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 02:21:53.607810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425580749.mount: Deactivated successfully.
Feb 13 02:21:53.608805 env[1543]: time="2024-02-13T02:21:53.608765200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.609888 env[1543]: time="2024-02-13T02:21:53.609863770Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.610935 env[1543]: time="2024-02-13T02:21:53.610878727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.611832 env[1543]: time="2024-02-13T02:21:53.611791473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:53.612138 env[1543]: time="2024-02-13T02:21:53.612089845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 02:21:53.617754 env[1543]: time="2024-02-13T02:21:53.617712253Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 13 02:21:54.278637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865898877.mount: Deactivated successfully.
Feb 13 02:21:55.723110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 02:21:55.723239 systemd[1]: Stopped kubelet.service.
Feb 13 02:21:55.724163 systemd[1]: Started kubelet.service.
Feb 13 02:21:55.748138 kubelet[2028]: E0213 02:21:55.748060 2028 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 13 02:21:55.750234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 02:21:55.750326 systemd[1]: kubelet.service: Failed with result 'exit-code'.
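kubelet.service fails at 02:21:45 and is restarted (and fails again) roughly ten seconds later, with systemd logging "Scheduled restart job, restart counter is at 1". A hedged sketch of the unit settings that would produce exactly this cadence; the actual Flatcar unit is not shown in this log and may differ:

    [Service]
    Restart=on-failure
    RestartSec=10s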
Feb 13 02:21:57.126022 env[1543]: time="2024-02-13T02:21:57.125993974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:57.126626 env[1543]: time="2024-02-13T02:21:57.126571033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:57.127617 env[1543]: time="2024-02-13T02:21:57.127582735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:57.128465 env[1543]: time="2024-02-13T02:21:57.128426972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:57.128894 env[1543]: time="2024-02-13T02:21:57.128832410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 13 02:21:57.135436 env[1543]: time="2024-02-13T02:21:57.135420718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 13 02:21:57.657708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41263168.mount: Deactivated successfully.
Feb 13 02:21:58.096373 env[1543]: time="2024-02-13T02:21:58.096315509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:58.096987 env[1543]: time="2024-02-13T02:21:58.096943287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:58.097630 env[1543]: time="2024-02-13T02:21:58.097589148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:58.098353 env[1543]: time="2024-02-13T02:21:58.098311561Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:21:58.098738 env[1543]: time="2024-02-13T02:21:58.098679493Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 13 02:21:59.737641 systemd[1]: Stopped kubelet.service.
Feb 13 02:21:59.745343 systemd[1]: Reloading.
Feb 13 02:21:59.782492 /usr/lib/systemd/system-generators/torcx-generator[2192]: time="2024-02-13T02:21:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 02:21:59.782514 /usr/lib/systemd/system-generators/torcx-generator[2192]: time="2024-02-13T02:21:59Z" level=info msg="torcx already run"
Feb 13 02:21:59.845056 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 02:21:59.845063 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 02:21:59.855752 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 02:21:59.908810 systemd[1]: Started kubelet.service.
Feb 13 02:21:59.930291 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 13 02:21:59.930291 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 02:21:59.930291 kubelet[2258]: I0213 02:21:59.930281 2258 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 02:21:59.931026 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 13 02:21:59.931026 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 02:22:00.051277 kubelet[2258]: I0213 02:22:00.051220 2258 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 13 02:22:00.051277 kubelet[2258]: I0213 02:22:00.051229 2258 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 02:22:00.051356 kubelet[2258]: I0213 02:22:00.051350 2258 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 13 02:22:00.052803 kubelet[2258]: I0213 02:22:00.052765 2258 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 02:22:00.053124 kubelet[2258]: E0213 02:22:00.053088 2258 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://136.144.54.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.071751 kubelet[2258]: I0213 02:22:00.071711 2258 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 02:22:00.071928 kubelet[2258]: I0213 02:22:00.071894 2258 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 02:22:00.071964 kubelet[2258]: I0213 02:22:00.071933 2258 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 13 02:22:00.071964 kubelet[2258]: I0213 02:22:00.071943 2258 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 13 02:22:00.071964 kubelet[2258]: I0213 02:22:00.071949 2258 container_manager_linux.go:308] "Creating device plugin manager"
Feb 13 02:22:00.072048 kubelet[2258]: I0213 02:22:00.071991 2258 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 02:22:00.074048 kubelet[2258]: I0213 02:22:00.074006 2258 kubelet.go:398] "Attempting to sync node with API server"
Feb 13 02:22:00.074048 kubelet[2258]: I0213 02:22:00.074016 2258 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 02:22:00.074048 kubelet[2258]: I0213 02:22:00.074027 2258 kubelet.go:297] "Adding apiserver pod source"
Feb 13 02:22:00.074048 kubelet[2258]: I0213 02:22:00.074036 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 02:22:00.074347 kubelet[2258]: W0213 02:22:00.074298 2258 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://136.144.54.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.074347 kubelet[2258]: W0213 02:22:00.074303 2258 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://136.144.54.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4f4948c732&limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.074347 kubelet[2258]: E0213 02:22:00.074338 2258 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://136.144.54.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4f4948c732&limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.074347 kubelet[2258]: E0213 02:22:00.074339 2258 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://136.144.54.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.074347 kubelet[2258]: I0213 02:22:00.074339 2258 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 13 02:22:00.074507 kubelet[2258]: W0213 02:22:00.074473 2258 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 02:22:00.074705 kubelet[2258]: I0213 02:22:00.074671 2258 server.go:1186] "Started kubelet"
Feb 13 02:22:00.074748 kubelet[2258]: I0213 02:22:00.074708 2258 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 02:22:00.074853 kubelet[2258]: E0213 02:22:00.074840 2258 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 13 02:22:00.074853 kubelet[2258]: E0213 02:22:00.074853 2258 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 02:22:00.074929 kubelet[2258]: E0213 02:22:00.074881 2258 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-4f4948c732.17b34acb25b00dc0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-4f4948c732", UID:"ci-3510.3.2-a-4f4948c732", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4f4948c732"}, FirstTimestamp:time.Date(2024, time.February, 13, 2, 22, 0, 74661312, time.Local), LastTimestamp:time.Date(2024, time.February, 13, 2, 22, 0, 74661312, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://136.144.54.113:6443/api/v1/namespaces/default/events": dial tcp 136.144.54.113:6443: connect: connection refused'(may retry after sleeping)
Feb 13 02:22:00.075257 kubelet[2258]: I0213 02:22:00.075251 2258 server.go:451] "Adding debug handlers to kubelet server"
Feb 13 02:22:00.084506 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
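Every client-go call above fails with "connection refused" because the kubelet is itself booting the control plane it is trying to reach: the kube-apiserver at 136.144.54.113:6443 exists only as a static pod that this same kubelet has not yet started. A hedged way to watch the endpoint come up from the node (illustrative only; -k skips TLS verification since the serving cert is not trusted locally):

    curl -k https://136.144.54.113:6443/healthz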
Feb 13 02:22:00.084574 kubelet[2258]: I0213 02:22:00.084566 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 02:22:00.084663 kubelet[2258]: I0213 02:22:00.084653 2258 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 13 02:22:00.084663 kubelet[2258]: E0213 02:22:00.084660 2258 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-4f4948c732\" not found"
Feb 13 02:22:00.084726 kubelet[2258]: I0213 02:22:00.084672 2258 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 02:22:00.084819 kubelet[2258]: E0213 02:22:00.084806 2258 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://136.144.54.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4f4948c732?timeout=10s": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.084855 kubelet[2258]: W0213 02:22:00.084834 2258 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://136.144.54.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.084883 kubelet[2258]: E0213 02:22:00.084863 2258 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://136.144.54.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.103898 kubelet[2258]: I0213 02:22:00.103851 2258 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 13 02:22:00.105700 kubelet[2258]: I0213 02:22:00.105689 2258 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 02:22:00.105700 kubelet[2258]: I0213 02:22:00.105701 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 02:22:00.105756 kubelet[2258]: I0213 02:22:00.105710 2258 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 02:22:00.106434 kubelet[2258]: I0213 02:22:00.106428 2258 policy_none.go:49] "None policy: Start"
Feb 13 02:22:00.106690 kubelet[2258]: I0213 02:22:00.106683 2258 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 13 02:22:00.106730 kubelet[2258]: I0213 02:22:00.106694 2258 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 02:22:00.109572 kubelet[2258]: I0213 02:22:00.109533 2258 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 02:22:00.109669 kubelet[2258]: I0213 02:22:00.109662 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 02:22:00.109870 kubelet[2258]: E0213 02:22:00.109863 2258 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-4f4948c732\" not found"
Feb 13 02:22:00.114239 kubelet[2258]: I0213 02:22:00.114230 2258 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 13 02:22:00.114239 kubelet[2258]: I0213 02:22:00.114241 2258 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 13 02:22:00.114315 kubelet[2258]: I0213 02:22:00.114254 2258 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 13 02:22:00.114315 kubelet[2258]: E0213 02:22:00.114282 2258 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 02:22:00.114576 kubelet[2258]: W0213 02:22:00.114550 2258 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://136.144.54.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.114610 kubelet[2258]: E0213 02:22:00.114583 2258 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://136.144.54.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.187534 kubelet[2258]: I0213 02:22:00.187496 2258 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.187977 kubelet[2258]: E0213 02:22:00.187954 2258 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://136.144.54.113:6443/api/v1/nodes\": dial tcp 136.144.54.113:6443: connect: connection refused" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.215438 kubelet[2258]: I0213 02:22:00.215371 2258 topology_manager.go:210] "Topology Admit Handler"
Feb 13 02:22:00.219132 kubelet[2258]: I0213 02:22:00.219080 2258 topology_manager.go:210] "Topology Admit Handler"
Feb 13 02:22:00.222933 kubelet[2258]: I0213 02:22:00.222889 2258 topology_manager.go:210] "Topology Admit Handler"
Feb 13 02:22:00.223289 kubelet[2258]: I0213 02:22:00.223223 2258 status_manager.go:698] "Failed to get status for pod" podUID=8f23348b05f3bd58003d05d5da6bf4a4 pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" err="Get \"https://136.144.54.113:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-4f4948c732\": dial tcp 136.144.54.113:6443: connect: connection refused"
Feb 13 02:22:00.227060 kubelet[2258]: I0213 02:22:00.226980 2258 status_manager.go:698] "Failed to get status for pod" podUID=2eeee56a603eb324d6cbe0e13e7acdc0 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" err="Get \"https://136.144.54.113:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-4f4948c732\": dial tcp 136.144.54.113:6443: connect: connection refused"
Feb 13 02:22:00.230332 kubelet[2258]: I0213 02:22:00.230262 2258 status_manager.go:698] "Failed to get status for pod" podUID=e5fc186f373fe90395863f565f2ba691 pod="kube-system/kube-scheduler-ci-3510.3.2-a-4f4948c732" err="Get \"https://136.144.54.113:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-4f4948c732\": dial tcp 136.144.54.113:6443: connect: connection refused"
Feb 13 02:22:00.286502 kubelet[2258]: E0213 02:22:00.286385 2258 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://136.144.54.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4f4948c732?timeout=10s": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.386984 kubelet[2258]: I0213 02:22:00.386801 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5fc186f373fe90395863f565f2ba691-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4f4948c732\" (UID: \"e5fc186f373fe90395863f565f2ba691\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.386984 kubelet[2258]: I0213 02:22:00.386934 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.387440 kubelet[2258]: I0213 02:22:00.387072 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.387440 kubelet[2258]: I0213 02:22:00.387192 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.387440 kubelet[2258]: I0213 02:22:00.387290 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.387440 kubelet[2258]: I0213 02:22:00.387411 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.388012 kubelet[2258]: I0213 02:22:00.387539 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.388012 kubelet[2258]: I0213 02:22:00.387730 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.388012 kubelet[2258]: I0213 02:22:00.387927 2258 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.392529 kubelet[2258]: I0213 02:22:00.392468 2258 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.393318 kubelet[2258]: E0213 02:22:00.393242 2258 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://136.144.54.113:6443/api/v1/nodes\": dial tcp 136.144.54.113:6443: connect: connection refused" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.530271 env[1543]: time="2024-02-13T02:22:00.530133163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4f4948c732,Uid:8f23348b05f3bd58003d05d5da6bf4a4,Namespace:kube-system,Attempt:0,}"
Feb 13 02:22:00.533328 env[1543]: time="2024-02-13T02:22:00.533254396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4f4948c732,Uid:2eeee56a603eb324d6cbe0e13e7acdc0,Namespace:kube-system,Attempt:0,}"
Feb 13 02:22:00.536578 env[1543]: time="2024-02-13T02:22:00.536443871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4f4948c732,Uid:e5fc186f373fe90395863f565f2ba691,Namespace:kube-system,Attempt:0,}"
Feb 13 02:22:00.687572 kubelet[2258]: E0213 02:22:00.687382 2258 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://136.144.54.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4f4948c732?timeout=10s": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.800163 kubelet[2258]: I0213 02:22:00.800060 2258 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.800855 kubelet[2258]: E0213 02:22:00.800776 2258 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://136.144.54.113:6443/api/v1/nodes\": dial tcp 136.144.54.113:6443: connect: connection refused" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:00.924134 kubelet[2258]: W0213 02:22:00.923983 2258 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://136.144.54.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.924134 kubelet[2258]: E0213 02:22:00.924108 2258 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://136.144.54.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 136.144.54.113:6443: connect: connection refused
Feb 13 02:22:00.997978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2300140489.mount: Deactivated successfully.
Feb 13 02:22:00.998682 env[1543]: time="2024-02-13T02:22:00.998638506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.000004 env[1543]: time="2024-02-13T02:22:00.999957019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.000740 env[1543]: time="2024-02-13T02:22:01.000700115Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.001432 env[1543]: time="2024-02-13T02:22:01.001390942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.001907 env[1543]: time="2024-02-13T02:22:01.001867815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.003185 env[1543]: time="2024-02-13T02:22:01.003144391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.004820 env[1543]: time="2024-02-13T02:22:01.004778130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.005963 env[1543]: time="2024-02-13T02:22:01.005922422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.007895 env[1543]: time="2024-02-13T02:22:01.007859552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.008298 env[1543]: time="2024-02-13T02:22:01.008283334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.008752 env[1543]: time="2024-02-13T02:22:01.008723184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.009118 env[1543]: time="2024-02-13T02:22:01.009105954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 13 02:22:01.012154 env[1543]: time="2024-02-13T02:22:01.012097816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 02:22:01.012154 env[1543]: time="2024-02-13T02:22:01.012139326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 02:22:01.012154 env[1543]: time="2024-02-13T02:22:01.012149096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 02:22:01.012297 env[1543]: time="2024-02-13T02:22:01.012235443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95630b8769f981084abeab0396bcc35b0fd9308c351326525ead6add6319bae7 pid=2345 runtime=io.containerd.runc.v2
Feb 13 02:22:01.015394 env[1543]: time="2024-02-13T02:22:01.015348563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 02:22:01.015394 env[1543]: time="2024-02-13T02:22:01.015371451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 02:22:01.015394 env[1543]: time="2024-02-13T02:22:01.015378330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 02:22:01.015543 env[1543]: time="2024-02-13T02:22:01.015459765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccff5316dfdcf83b0392c9fcaefc6884672325465d819be97f8ece511001d766 pid=2375 runtime=io.containerd.runc.v2
Feb 13 02:22:01.015543 env[1543]: time="2024-02-13T02:22:01.015487994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 02:22:01.015543 env[1543]: time="2024-02-13T02:22:01.015505545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 02:22:01.015543 env[1543]: time="2024-02-13T02:22:01.015516790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 02:22:01.015619 env[1543]: time="2024-02-13T02:22:01.015578467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d7ad55cd74eaba4d36e40b8c9c917fe05b15e601ca0768ddaa03464359d3e5a pid=2374 runtime=io.containerd.runc.v2
Feb 13 02:22:01.043200 env[1543]: time="2024-02-13T02:22:01.043169334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4f4948c732,Uid:8f23348b05f3bd58003d05d5da6bf4a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"95630b8769f981084abeab0396bcc35b0fd9308c351326525ead6add6319bae7\""
Feb 13 02:22:01.043404 env[1543]: time="2024-02-13T02:22:01.043217884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4f4948c732,Uid:2eeee56a603eb324d6cbe0e13e7acdc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d7ad55cd74eaba4d36e40b8c9c917fe05b15e601ca0768ddaa03464359d3e5a\""
Feb 13 02:22:01.043494 env[1543]: time="2024-02-13T02:22:01.043479106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4f4948c732,Uid:e5fc186f373fe90395863f565f2ba691,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccff5316dfdcf83b0392c9fcaefc6884672325465d819be97f8ece511001d766\""
Feb 13 02:22:01.044863 env[1543]: time="2024-02-13T02:22:01.044845851Z" level=info msg="CreateContainer within sandbox \"ccff5316dfdcf83b0392c9fcaefc6884672325465d819be97f8ece511001d766\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 02:22:01.044914 env[1543]: time="2024-02-13T02:22:01.044901250Z" level=info msg="CreateContainer within sandbox \"95630b8769f981084abeab0396bcc35b0fd9308c351326525ead6add6319bae7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 02:22:01.044944 env[1543]: time="2024-02-13T02:22:01.044933979Z" level=info msg="CreateContainer within sandbox \"2d7ad55cd74eaba4d36e40b8c9c917fe05b15e601ca0768ddaa03464359d3e5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 02:22:01.051088 env[1543]: time="2024-02-13T02:22:01.051043802Z" level=info msg="CreateContainer within sandbox \"2d7ad55cd74eaba4d36e40b8c9c917fe05b15e601ca0768ddaa03464359d3e5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7d2fe84849d388b0a0ebcb25405c91828e44d402b14cb9b9a15eb6cd709d6413\""
Feb 13 02:22:01.051318 env[1543]: time="2024-02-13T02:22:01.051281967Z" level=info msg="StartContainer for \"7d2fe84849d388b0a0ebcb25405c91828e44d402b14cb9b9a15eb6cd709d6413\""
Feb 13 02:22:01.051896 env[1543]: time="2024-02-13T02:22:01.051864658Z" level=info msg="CreateContainer within sandbox \"ccff5316dfdcf83b0392c9fcaefc6884672325465d819be97f8ece511001d766\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4cc5158a286a958159981d35800686527cc8dbfc23c7a0fed54295fe235327b\""
Feb 13 02:22:01.052020 env[1543]: time="2024-02-13T02:22:01.052003355Z" level=info msg="StartContainer for \"a4cc5158a286a958159981d35800686527cc8dbfc23c7a0fed54295fe235327b\""
Feb 13 02:22:01.052756 env[1543]: time="2024-02-13T02:22:01.052734283Z" level=info msg="CreateContainer within sandbox \"95630b8769f981084abeab0396bcc35b0fd9308c351326525ead6add6319bae7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"474201f2f13cf204fecf67b1ca0872bfad81203c672711c5d80eaf16a87cc015\""
Feb 13 02:22:01.052920 env[1543]: time="2024-02-13T02:22:01.052905918Z" level=info msg="StartContainer for \"474201f2f13cf204fecf67b1ca0872bfad81203c672711c5d80eaf16a87cc015\""
Feb 13 02:22:01.084156 env[1543]: time="2024-02-13T02:22:01.084134061Z" level=info msg="StartContainer for \"a4cc5158a286a958159981d35800686527cc8dbfc23c7a0fed54295fe235327b\" returns successfully"
Feb 13 02:22:01.093956 env[1543]: time="2024-02-13T02:22:01.093933904Z" level=info msg="StartContainer for \"7d2fe84849d388b0a0ebcb25405c91828e44d402b14cb9b9a15eb6cd709d6413\" returns successfully"
Feb 13 02:22:01.095011 env[1543]: time="2024-02-13T02:22:01.094996346Z" level=info msg="StartContainer for \"474201f2f13cf204fecf67b1ca0872bfad81203c672711c5d80eaf16a87cc015\" returns successfully"
Feb 13 02:22:01.117895 kubelet[2258]: I0213 02:22:01.117874 2258 status_manager.go:698] "Failed to get status for pod" podUID=2eeee56a603eb324d6cbe0e13e7acdc0 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" err="Get \"https://136.144.54.113:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-4f4948c732\": dial tcp 136.144.54.113:6443: connect: connection refused"
Feb 13 02:22:01.118327 kubelet[2258]: I0213 02:22:01.118317 2258 status_manager.go:698] "Failed to get status for pod" podUID=8f23348b05f3bd58003d05d5da6bf4a4 pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" err="Get \"https://136.144.54.113:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-4f4948c732\": dial tcp 136.144.54.113:6443: connect: connection refused"
Feb 13 02:22:01.602804 kubelet[2258]: I0213 02:22:01.602744 2258 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:01.904127 kubelet[2258]: E0213 02:22:01.904077 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-4f4948c732\" not found" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:01.994922 kubelet[2258]: I0213 02:22:01.994862 2258 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:02.017008 kubelet[2258]: E0213 02:22:02.016957 2258 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-4f4948c732\" not found"
Feb 13 02:22:02.484487 kubelet[2258]: E0213 02:22:02.484418 2258 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-4f4948c732\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:02.679823 kubelet[2258]: E0213 02:22:02.679758 2258 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:02.879866 kubelet[2258]: E0213 02:22:02.879699 2258 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732"
Feb 13 02:22:03.075508 kubelet[2258]: I0213 02:22:03.075413 2258 apiserver.go:52] "Watching apiserver"
Feb 13 02:22:03.285694 kubelet[2258]: I0213 02:22:03.285634 2258 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 02:22:03.304889 kubelet[2258]: I0213 02:22:03.304811 2258 reconciler.go:41] "Reconciler: start to sync state"
Feb 13 02:22:05.020555 systemd[1]: Reloading.
Feb 13 02:22:05.051004 /usr/lib/systemd/system-generators/torcx-generator[2627]: time="2024-02-13T02:22:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 13 02:22:05.051021 /usr/lib/systemd/system-generators/torcx-generator[2627]: time="2024-02-13T02:22:05Z" level=info msg="torcx already run"
Feb 13 02:22:05.109308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 13 02:22:05.109318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 13 02:22:05.121703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 02:22:05.180065 kubelet[2258]: I0213 02:22:05.180044 2258 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 02:22:05.180067 systemd[1]: Stopping kubelet.service...
Feb 13 02:22:05.197139 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 02:22:05.197875 systemd[1]: Stopped kubelet.service.
Feb 13 02:22:05.201976 systemd[1]: Started kubelet.service.
Feb 13 02:22:05.262563 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 13 02:22:05.262563 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 02:22:05.262841 kubelet[2695]: I0213 02:22:05.262565 2695 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 02:22:05.263606 kubelet[2695]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 13 02:22:05.263606 kubelet[2695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 02:22:05.265745 kubelet[2695]: I0213 02:22:05.265732 2695 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 13 02:22:05.265745 kubelet[2695]: I0213 02:22:05.265745 2695 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 02:22:05.265896 kubelet[2695]: I0213 02:22:05.265865 2695 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 13 02:22:05.266671 kubelet[2695]: I0213 02:22:05.266663 2695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
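Unlike the earlier kubelet[2258], which had to request a signed certificate over a refused connection, this restart finds an already-rotated client certificate on disk. A hedged way to confirm what it picked up, using the exact path from the log and assuming openssl is available on the node:

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates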
Feb 13 02:22:05.267798 kubelet[2695]: I0213 02:22:05.267786 2695 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 02:22:05.285512 kubelet[2695]: I0213 02:22:05.285473 2695 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 02:22:05.285741 kubelet[2695]: I0213 02:22:05.285706 2695 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 02:22:05.285782 kubelet[2695]: I0213 02:22:05.285744 2695 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 02:22:05.285782 kubelet[2695]: I0213 02:22:05.285756 2695 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 02:22:05.285782 kubelet[2695]: I0213 02:22:05.285763 2695 container_manager_linux.go:308] "Creating device plugin manager" Feb 13 02:22:05.285886 kubelet[2695]: I0213 02:22:05.285784 2695 state_mem.go:36] "Initialized new in-memory state store" Feb 13 02:22:05.287187 kubelet[2695]: I0213 02:22:05.287179 2695 kubelet.go:398] "Attempting to sync node with API server" Feb 13 02:22:05.287225 kubelet[2695]: I0213 02:22:05.287190 2695 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 02:22:05.287225 kubelet[2695]: I0213 02:22:05.287214 2695 kubelet.go:297] "Adding apiserver pod source" Feb 13 02:22:05.287272 kubelet[2695]: I0213 02:22:05.287226 2695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 02:22:05.287440 kubelet[2695]: I0213 02:22:05.287429 2695 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 02:22:05.287779 kubelet[2695]: I0213 02:22:05.287770 2695 server.go:1186] "Started kubelet" Feb 13 02:22:05.287824 kubelet[2695]: I0213 02:22:05.287818 2695 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 02:22:05.288022 kubelet[2695]: E0213 02:22:05.288011 2695 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 02:22:05.288062 
kubelet[2695]: E0213 02:22:05.288028 2695 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 02:22:05.288514 kubelet[2695]: I0213 02:22:05.288505 2695 server.go:451] "Adding debug handlers to kubelet server" Feb 13 02:22:05.288573 kubelet[2695]: I0213 02:22:05.288566 2695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 02:22:05.288650 kubelet[2695]: I0213 02:22:05.288639 2695 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 13 02:22:05.288694 kubelet[2695]: I0213 02:22:05.288667 2695 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 02:22:05.288845 kubelet[2695]: E0213 02:22:05.288833 2695 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-4f4948c732\" not found" Feb 13 02:22:05.300686 kubelet[2695]: I0213 02:22:05.300673 2695 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 02:22:05.307042 kubelet[2695]: I0213 02:22:05.307028 2695 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 02:22:05.307042 kubelet[2695]: I0213 02:22:05.307039 2695 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 13 02:22:05.307164 kubelet[2695]: I0213 02:22:05.307050 2695 kubelet.go:2113] "Starting kubelet main sync loop" Feb 13 02:22:05.307164 kubelet[2695]: E0213 02:22:05.307082 2695 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 02:22:05.321328 kubelet[2695]: I0213 02:22:05.321283 2695 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 02:22:05.321328 kubelet[2695]: I0213 02:22:05.321295 2695 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 02:22:05.321328 kubelet[2695]: I0213 02:22:05.321305 2695 state_mem.go:36] "Initialized new in-memory state store" Feb 13 02:22:05.321448 kubelet[2695]: I0213 02:22:05.321386 2695 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 02:22:05.321448 kubelet[2695]: I0213 02:22:05.321394 2695 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 13 02:22:05.321448 kubelet[2695]: I0213 02:22:05.321398 2695 policy_none.go:49] "None policy: Start" Feb 13 02:22:05.321767 kubelet[2695]: I0213 02:22:05.321731 2695 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 02:22:05.321767 kubelet[2695]: I0213 02:22:05.321743 2695 state_mem.go:35] "Initializing new in-memory state store" Feb 13 02:22:05.321818 kubelet[2695]: I0213 02:22:05.321811 2695 state_mem.go:75] "Updated machine memory state" Feb 13 02:22:05.322471 kubelet[2695]: I0213 02:22:05.322433 2695 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 02:22:05.322592 kubelet[2695]: I0213 02:22:05.322558 2695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 02:22:05.395243 kubelet[2695]: I0213 02:22:05.395155 2695 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.407496 kubelet[2695]: I0213 02:22:05.407315 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:05.407722 kubelet[2695]: I0213 02:22:05.407542 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:05.407722 kubelet[2695]: I0213 02:22:05.407659 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 
02:22:05.408084 kubelet[2695]: I0213 02:22:05.407853 2695 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.408084 kubelet[2695]: I0213 02:22:05.408010 2695 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.416631 kubelet[2695]: E0213 02:22:05.416548 2695 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.438927 sudo[2759]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 02:22:05.439540 sudo[2759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 13 02:22:05.590489 kubelet[2695]: I0213 02:22:05.590348 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590489 kubelet[2695]: I0213 02:22:05.590396 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590597 kubelet[2695]: I0213 02:22:05.590496 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590597 kubelet[2695]: I0213 02:22:05.590520 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5fc186f373fe90395863f565f2ba691-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4f4948c732\" (UID: \"e5fc186f373fe90395863f565f2ba691\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590597 kubelet[2695]: I0213 02:22:05.590534 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590597 kubelet[2695]: I0213 02:22:05.590547 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590597 kubelet[2695]: I0213 02:22:05.590574 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590722 kubelet[2695]: I0213 02:22:05.590632 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2eeee56a603eb324d6cbe0e13e7acdc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" (UID: \"2eeee56a603eb324d6cbe0e13e7acdc0\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.590722 kubelet[2695]: I0213 02:22:05.590650 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f23348b05f3bd58003d05d5da6bf4a4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" (UID: \"8f23348b05f3bd58003d05d5da6bf4a4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:05.806744 sudo[2759]: pam_unix(sudo:session): session closed for user root Feb 13 02:22:06.288343 kubelet[2695]: I0213 02:22:06.288229 2695 apiserver.go:52] "Watching apiserver" Feb 13 02:22:06.390394 kubelet[2695]: I0213 02:22:06.390273 2695 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 02:22:06.397468 kubelet[2695]: I0213 02:22:06.397396 2695 reconciler.go:41] "Reconciler: start to sync state" Feb 13 02:22:06.702356 kubelet[2695]: E0213 02:22:06.702269 2695 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-4f4948c732\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:06.722617 sudo[1691]: pam_unix(sudo:session): session closed for user root Feb 13 02:22:06.723639 sshd[1685]: pam_unix(sshd:session): session closed for user core Feb 13 02:22:06.725484 systemd[1]: sshd@4-136.144.54.113:22-139.178.68.195:49078.service: Deactivated successfully. Feb 13 02:22:06.726382 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit. Feb 13 02:22:06.726421 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 02:22:06.727283 systemd-logind[1529]: Removed session 7. 
Feb 13 02:22:06.896583 kubelet[2695]: E0213 02:22:06.896496 2695 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4f4948c732\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" Feb 13 02:22:07.493836 kubelet[2695]: I0213 02:22:07.493782 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-4f4948c732" podStartSLOduration=2.493706623 pod.CreationTimestamp="2024-02-13 02:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:07.098324155 +0000 UTC m=+1.889768912" watchObservedRunningTime="2024-02-13 02:22:07.493706623 +0000 UTC m=+2.285151390" Feb 13 02:22:07.494276 kubelet[2695]: I0213 02:22:07.493888 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4f4948c732" podStartSLOduration=2.493862545 pod.CreationTimestamp="2024-02-13 02:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:07.493812041 +0000 UTC m=+2.285256824" watchObservedRunningTime="2024-02-13 02:22:07.493862545 +0000 UTC m=+2.285307314" Feb 13 02:22:07.533488 systemd[1]: Started sshd@5-136.144.54.113:22-109.123.237.173:58436.service. Feb 13 02:22:07.896208 kubelet[2695]: I0213 02:22:07.896103 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4f4948c732" podStartSLOduration=4.896052989 pod.CreationTimestamp="2024-02-13 02:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:07.896021493 +0000 UTC m=+2.687466251" watchObservedRunningTime="2024-02-13 02:22:07.896052989 +0000 UTC m=+2.687497747" Feb 13 02:22:08.596225 sshd[2862]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:22:10.309483 sshd[2862]: Failed password for root from 109.123.237.173 port 58436 ssh2 Feb 13 02:22:11.169008 sshd[2862]: Received disconnect from 109.123.237.173 port 58436:11: Bye Bye [preauth] Feb 13 02:22:11.169008 sshd[2862]: Disconnected from authenticating user root 109.123.237.173 port 58436 [preauth] Feb 13 02:22:11.171416 systemd[1]: sshd@5-136.144.54.113:22-109.123.237.173:58436.service: Deactivated successfully. Feb 13 02:22:17.348349 update_engine[1531]: I0213 02:22:17.348241 1531 update_attempter.cc:509] Updating boot flags... Feb 13 02:22:19.060065 kubelet[2695]: I0213 02:22:19.059975 2695 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 02:22:19.061427 kubelet[2695]: I0213 02:22:19.061271 2695 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 02:22:19.061648 env[1543]: time="2024-02-13T02:22:19.060812547Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
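The kuberuntime_manager and kubelet_network entries above hand this node the pod CIDR 192.168.0.0/24; every pod sandbox created below draws its IP from that block. A quick stdlib check of what the allocation holds:

import ipaddress

cidr = ipaddress.ip_network("192.168.0.0/24")
print(cidr.num_addresses)   # 256 addresses in the block
print(cidr[0], cidr[-1])    # 192.168.0.0 .. 192.168.0.255
# Usable pod IPs are somewhat fewer in practice: the CNI plugin (Cilium here)
# typically reserves a router/gateway address or two out of the range.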
Feb 13 02:22:19.844417 kubelet[2695]: I0213 02:22:19.844352 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:19.853367 kubelet[2695]: I0213 02:22:19.853307 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:19.884572 kubelet[2695]: I0213 02:22:19.884543 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89d63643-a054-4d85-8674-9028742b6ad8-lib-modules\") pod \"kube-proxy-5hs4b\" (UID: \"89d63643-a054-4d85-8674-9028742b6ad8\") " pod="kube-system/kube-proxy-5hs4b" Feb 13 02:22:19.884730 kubelet[2695]: I0213 02:22:19.884586 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-cgroup\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884730 kubelet[2695]: I0213 02:22:19.884615 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ae5097-a7c9-4039-9bc0-d616d239369b-clustermesh-secrets\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884730 kubelet[2695]: I0213 02:22:19.884640 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-config-path\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884730 kubelet[2695]: I0213 02:22:19.884667 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89d63643-a054-4d85-8674-9028742b6ad8-kube-proxy\") pod \"kube-proxy-5hs4b\" (UID: \"89d63643-a054-4d85-8674-9028742b6ad8\") " pod="kube-system/kube-proxy-5hs4b" Feb 13 02:22:19.884730 kubelet[2695]: I0213 02:22:19.884699 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89d63643-a054-4d85-8674-9028742b6ad8-xtables-lock\") pod \"kube-proxy-5hs4b\" (UID: \"89d63643-a054-4d85-8674-9028742b6ad8\") " pod="kube-system/kube-proxy-5hs4b" Feb 13 02:22:19.884979 kubelet[2695]: I0213 02:22:19.884747 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cni-path\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884979 kubelet[2695]: I0213 02:22:19.884803 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-net\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884979 kubelet[2695]: I0213 02:22:19.884844 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-kernel\") pod \"cilium-6czxd\" (UID: 
\"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884979 kubelet[2695]: I0213 02:22:19.884871 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-hubble-tls\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.884979 kubelet[2695]: I0213 02:22:19.884964 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nr7p\" (UniqueName: \"kubernetes.io/projected/89d63643-a054-4d85-8674-9028742b6ad8-kube-api-access-5nr7p\") pod \"kube-proxy-5hs4b\" (UID: \"89d63643-a054-4d85-8674-9028742b6ad8\") " pod="kube-system/kube-proxy-5hs4b" Feb 13 02:22:19.885211 kubelet[2695]: I0213 02:22:19.885028 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54s2s\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-kube-api-access-54s2s\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885211 kubelet[2695]: I0213 02:22:19.885074 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-lib-modules\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885211 kubelet[2695]: I0213 02:22:19.885123 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-run\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885211 kubelet[2695]: I0213 02:22:19.885168 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-bpf-maps\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885211 kubelet[2695]: I0213 02:22:19.885211 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-etc-cni-netd\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885442 kubelet[2695]: I0213 02:22:19.885256 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-hostproc\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.885442 kubelet[2695]: I0213 02:22:19.885305 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-xtables-lock\") pod \"cilium-6czxd\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " pod="kube-system/cilium-6czxd" Feb 13 02:22:19.984366 kubelet[2695]: I0213 02:22:19.984266 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 
02:22:20.092023 kubelet[2695]: I0213 02:22:20.091906 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkjj\" (UniqueName: \"kubernetes.io/projected/6c052796-7445-4971-88b3-002c9a666486-kube-api-access-tlkjj\") pod \"cilium-operator-f59cbd8c6-h4kvt\" (UID: \"6c052796-7445-4971-88b3-002c9a666486\") " pod="kube-system/cilium-operator-f59cbd8c6-h4kvt" Feb 13 02:22:20.092942 kubelet[2695]: I0213 02:22:20.092140 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c052796-7445-4971-88b3-002c9a666486-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-h4kvt\" (UID: \"6c052796-7445-4971-88b3-002c9a666486\") " pod="kube-system/cilium-operator-f59cbd8c6-h4kvt" Feb 13 02:22:20.155294 env[1543]: time="2024-02-13T02:22:20.155041714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hs4b,Uid:89d63643-a054-4d85-8674-9028742b6ad8,Namespace:kube-system,Attempt:0,}" Feb 13 02:22:20.177696 env[1543]: time="2024-02-13T02:22:20.177511882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 02:22:20.177696 env[1543]: time="2024-02-13T02:22:20.177618154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 02:22:20.177696 env[1543]: time="2024-02-13T02:22:20.177658323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 02:22:20.178245 env[1543]: time="2024-02-13T02:22:20.178128515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c985485b248649faf6fc632055a18168b9ff91044bb308fb0997ae75b94caef0 pid=2894 runtime=io.containerd.runc.v2 Feb 13 02:22:20.253439 env[1543]: time="2024-02-13T02:22:20.253327367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hs4b,Uid:89d63643-a054-4d85-8674-9028742b6ad8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c985485b248649faf6fc632055a18168b9ff91044bb308fb0997ae75b94caef0\"" Feb 13 02:22:20.259183 env[1543]: time="2024-02-13T02:22:20.259055298Z" level=info msg="CreateContainer within sandbox \"c985485b248649faf6fc632055a18168b9ff91044bb308fb0997ae75b94caef0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 02:22:20.275684 env[1543]: time="2024-02-13T02:22:20.275547101Z" level=info msg="CreateContainer within sandbox \"c985485b248649faf6fc632055a18168b9ff91044bb308fb0997ae75b94caef0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2cb1fbebbdd9f6f485561939e980668e86dc007e1473d208d89e35627292b679\"" Feb 13 02:22:20.276616 env[1543]: time="2024-02-13T02:22:20.276537494Z" level=info msg="StartContainer for \"2cb1fbebbdd9f6f485561939e980668e86dc007e1473d208d89e35627292b679\"" Feb 13 02:22:20.363263 env[1543]: time="2024-02-13T02:22:20.363197895Z" level=info msg="StartContainer for \"2cb1fbebbdd9f6f485561939e980668e86dc007e1473d208d89e35627292b679\" returns successfully" Feb 13 02:22:20.464822 env[1543]: time="2024-02-13T02:22:20.464641774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6czxd,Uid:31ae5097-a7c9-4039-9bc0-d616d239369b,Namespace:kube-system,Attempt:0,}" Feb 13 02:22:20.486544 env[1543]: time="2024-02-13T02:22:20.486316498Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 02:22:20.486544 env[1543]: time="2024-02-13T02:22:20.486418102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 02:22:20.486544 env[1543]: time="2024-02-13T02:22:20.486470723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 02:22:20.487112 env[1543]: time="2024-02-13T02:22:20.486916444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01 pid=3008 runtime=io.containerd.runc.v2 Feb 13 02:22:20.541927 env[1543]: time="2024-02-13T02:22:20.541864536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6czxd,Uid:31ae5097-a7c9-4039-9bc0-d616d239369b,Namespace:kube-system,Attempt:0,} returns sandbox id \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\"" Feb 13 02:22:20.543987 env[1543]: time="2024-02-13T02:22:20.543940302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 02:22:20.892329 env[1543]: time="2024-02-13T02:22:20.892187212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-h4kvt,Uid:6c052796-7445-4971-88b3-002c9a666486,Namespace:kube-system,Attempt:0,}" Feb 13 02:22:20.917846 env[1543]: time="2024-02-13T02:22:20.917696559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 02:22:20.917846 env[1543]: time="2024-02-13T02:22:20.917793485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 02:22:20.917846 env[1543]: time="2024-02-13T02:22:20.917831760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 02:22:20.918410 env[1543]: time="2024-02-13T02:22:20.918255400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c pid=3127 runtime=io.containerd.runc.v2 Feb 13 02:22:21.015155 env[1543]: time="2024-02-13T02:22:21.015122346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-h4kvt,Uid:6c052796-7445-4971-88b3-002c9a666486,Namespace:kube-system,Attempt:0,} returns sandbox id \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\"" Feb 13 02:22:21.373125 kubelet[2695]: I0213 02:22:21.373070 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5hs4b" podStartSLOduration=2.372983844 pod.CreationTimestamp="2024-02-13 02:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:21.37235939 +0000 UTC m=+16.163804233" watchObservedRunningTime="2024-02-13 02:22:21.372983844 +0000 UTC m=+16.164428648" Feb 13 02:22:25.692453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423367827.mount: Deactivated successfully. 
Feb 13 02:22:27.366062 env[1543]: time="2024-02-13T02:22:27.365995968Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:27.366714 env[1543]: time="2024-02-13T02:22:27.366671656Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:27.367583 env[1543]: time="2024-02-13T02:22:27.367541635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:27.367948 env[1543]: time="2024-02-13T02:22:27.367889237Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 02:22:27.368373 env[1543]: time="2024-02-13T02:22:27.368314481Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 02:22:27.368827 env[1543]: time="2024-02-13T02:22:27.368788249Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 02:22:27.372972 env[1543]: time="2024-02-13T02:22:27.372955350Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\"" Feb 13 02:22:27.373249 env[1543]: time="2024-02-13T02:22:27.373234370Z" level=info msg="StartContainer for \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\"" Feb 13 02:22:27.374184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878827133.mount: Deactivated successfully. Feb 13 02:22:27.392629 env[1543]: time="2024-02-13T02:22:27.392573762Z" level=info msg="StartContainer for \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\" returns successfully" Feb 13 02:22:28.376527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8-rootfs.mount: Deactivated successfully. Feb 13 02:22:29.608549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706997334.mount: Deactivated successfully. 
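The PullImage spec above pins the image by tag and digest at once (name:tag@sha256:...), so the digest, not the tag, is what gets verified; the "returns image reference sha256:3e35..." value is a different hash, plausibly the image ID of the resolved config rather than the manifest digest. A hand-rolled sketch splitting such a reference (containerd has a proper reference parser; this ignores edge cases like registries with ports):

ref = ("quay.io/cilium/cilium:v1.12.5"
       "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")

name_tag, _, digest = ref.partition("@")
name, _, tag = name_tag.rpartition(":")
print(name)    # quay.io/cilium/cilium
print(tag)     # v1.12.5
print(digest)  # sha256:06ce2b0a...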
Feb 13 02:22:29.669163 env[1543]: time="2024-02-13T02:22:29.669064940Z" level=info msg="shim disconnected" id=a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8 Feb 13 02:22:29.669163 env[1543]: time="2024-02-13T02:22:29.669137559Z" level=warning msg="cleaning up after shim disconnected" id=a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8 namespace=k8s.io Feb 13 02:22:29.669163 env[1543]: time="2024-02-13T02:22:29.669162490Z" level=info msg="cleaning up dead shim" Feb 13 02:22:29.679238 env[1543]: time="2024-02-13T02:22:29.679140427Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:22:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3210 runtime=io.containerd.runc.v2\n" Feb 13 02:22:30.115384 env[1543]: time="2024-02-13T02:22:30.115333366Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:30.115985 env[1543]: time="2024-02-13T02:22:30.115942733Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:30.116614 env[1543]: time="2024-02-13T02:22:30.116571802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 02:22:30.116900 env[1543]: time="2024-02-13T02:22:30.116856141Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 02:22:30.118063 env[1543]: time="2024-02-13T02:22:30.118018268Z" level=info msg="CreateContainer within sandbox \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 02:22:30.121942 env[1543]: time="2024-02-13T02:22:30.121895765Z" level=info msg="CreateContainer within sandbox \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\"" Feb 13 02:22:30.122192 env[1543]: time="2024-02-13T02:22:30.122159415Z" level=info msg="StartContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\"" Feb 13 02:22:30.143760 env[1543]: time="2024-02-13T02:22:30.143730924Z" level=info msg="StartContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" returns successfully" Feb 13 02:22:30.379086 env[1543]: time="2024-02-13T02:22:30.378858513Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 02:22:30.391304 env[1543]: time="2024-02-13T02:22:30.391186388Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\"" Feb 13 02:22:30.392164 
env[1543]: time="2024-02-13T02:22:30.392075631Z" level=info msg="StartContainer for \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\"" Feb 13 02:22:30.419162 kubelet[2695]: I0213 02:22:30.419131 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-h4kvt" podStartSLOduration=-9.223372025435688e+09 pod.CreationTimestamp="2024-02-13 02:22:19 +0000 UTC" firstStartedPulling="2024-02-13 02:22:21.015834511 +0000 UTC m=+15.807279270" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:30.418844729 +0000 UTC m=+25.210289516" watchObservedRunningTime="2024-02-13 02:22:30.419087861 +0000 UTC m=+25.210532629" Feb 13 02:22:30.454705 env[1543]: time="2024-02-13T02:22:30.454669061Z" level=info msg="StartContainer for \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\" returns successfully" Feb 13 02:22:30.461330 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 02:22:30.461530 systemd[1]: Stopped systemd-sysctl.service. Feb 13 02:22:30.461632 systemd[1]: Stopping systemd-sysctl.service... Feb 13 02:22:30.462722 systemd[1]: Starting systemd-sysctl.service... Feb 13 02:22:30.467269 systemd[1]: Finished systemd-sysctl.service. Feb 13 02:22:30.600045 env[1543]: time="2024-02-13T02:22:30.600015304Z" level=info msg="shim disconnected" id=d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf Feb 13 02:22:30.600045 env[1543]: time="2024-02-13T02:22:30.600043049Z" level=warning msg="cleaning up after shim disconnected" id=d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf namespace=k8s.io Feb 13 02:22:30.600045 env[1543]: time="2024-02-13T02:22:30.600049684Z" level=info msg="cleaning up dead shim" Feb 13 02:22:30.609682 env[1543]: time="2024-02-13T02:22:30.609649120Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:22:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3323 runtime=io.containerd.runc.v2\n" Feb 13 02:22:31.382103 env[1543]: time="2024-02-13T02:22:31.382079557Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 02:22:31.387889 env[1543]: time="2024-02-13T02:22:31.387844840Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\"" Feb 13 02:22:31.388147 env[1543]: time="2024-02-13T02:22:31.388107762Z" level=info msg="StartContainer for \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\"" Feb 13 02:22:31.388989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949533171.mount: Deactivated successfully. 
Feb 13 02:22:31.414256 env[1543]: time="2024-02-13T02:22:31.414203948Z" level=info msg="StartContainer for \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\" returns successfully" Feb 13 02:22:31.422751 env[1543]: time="2024-02-13T02:22:31.422694409Z" level=info msg="shim disconnected" id=040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f Feb 13 02:22:31.422751 env[1543]: time="2024-02-13T02:22:31.422722254Z" level=warning msg="cleaning up after shim disconnected" id=040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f namespace=k8s.io Feb 13 02:22:31.422751 env[1543]: time="2024-02-13T02:22:31.422728491Z" level=info msg="cleaning up dead shim" Feb 13 02:22:31.426032 env[1543]: time="2024-02-13T02:22:31.426016558Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:22:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3378 runtime=io.containerd.runc.v2\n" Feb 13 02:22:31.602344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f-rootfs.mount: Deactivated successfully. Feb 13 02:22:32.383650 env[1543]: time="2024-02-13T02:22:32.383623420Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 02:22:32.389125 env[1543]: time="2024-02-13T02:22:32.389099630Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\"" Feb 13 02:22:32.389394 env[1543]: time="2024-02-13T02:22:32.389377026Z" level=info msg="StartContainer for \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\"" Feb 13 02:22:32.416692 env[1543]: time="2024-02-13T02:22:32.416659103Z" level=info msg="StartContainer for \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\" returns successfully" Feb 13 02:22:32.430951 env[1543]: time="2024-02-13T02:22:32.430904399Z" level=info msg="shim disconnected" id=f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de Feb 13 02:22:32.430951 env[1543]: time="2024-02-13T02:22:32.430951740Z" level=warning msg="cleaning up after shim disconnected" id=f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de namespace=k8s.io Feb 13 02:22:32.431163 env[1543]: time="2024-02-13T02:22:32.430964027Z" level=info msg="cleaning up dead shim" Feb 13 02:22:32.436871 env[1543]: time="2024-02-13T02:22:32.436817366Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:22:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3433 runtime=io.containerd.runc.v2\n" Feb 13 02:22:32.606574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de-rootfs.mount: Deactivated successfully. 
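The CreateContainer / StartContainer / "shim disconnected" triples above walk the cilium-6czxd pod through its run-to-completion init chain inside sandbox 47e962...: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, with the long-running cilium-agent following below. A sketch that recovers that order from journal text (the regex is fitted to these containerd messages, and it assumes one entry per line):

import re

SANDBOX = "47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01"  # cilium-6czxd
NAME = re.compile(r"ContainerMetadata{Name:([A-Za-z0-9-]+),")

def container_chain(journal_lines):
    seen = []
    for line in journal_lines:
        if "CreateContainer" in line and SANDBOX in line:
            for name in NAME.findall(line):
                if name not in seen:
                    seen.append(name)   # preserve first-seen order
    return seen

# Fed this journal it yields:
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']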
Feb 13 02:22:33.396866 env[1543]: time="2024-02-13T02:22:33.396770157Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 02:22:33.411930 env[1543]: time="2024-02-13T02:22:33.411802388Z" level=info msg="CreateContainer within sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\"" Feb 13 02:22:33.412374 env[1543]: time="2024-02-13T02:22:33.412362100Z" level=info msg="StartContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\"" Feb 13 02:22:33.433328 env[1543]: time="2024-02-13T02:22:33.433276049Z" level=info msg="StartContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" returns successfully" Feb 13 02:22:33.486457 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 02:22:33.527898 kubelet[2695]: I0213 02:22:33.527885 2695 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 02:22:33.538495 kubelet[2695]: I0213 02:22:33.538464 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:33.538643 kubelet[2695]: I0213 02:22:33.538624 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:22:33.594681 kubelet[2695]: I0213 02:22:33.594658 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e533553-6470-42f6-b41f-bbe33d5a9567-config-volume\") pod \"coredns-787d4945fb-csrbk\" (UID: \"9e533553-6470-42f6-b41f-bbe33d5a9567\") " pod="kube-system/coredns-787d4945fb-csrbk" Feb 13 02:22:33.594681 kubelet[2695]: I0213 02:22:33.594684 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5415c7e-8682-47a2-b3dd-d3a075ca61c0-config-volume\") pod \"coredns-787d4945fb-d2ngw\" (UID: \"f5415c7e-8682-47a2-b3dd-d3a075ca61c0\") " pod="kube-system/coredns-787d4945fb-d2ngw" Feb 13 02:22:33.594810 kubelet[2695]: I0213 02:22:33.594752 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74zx\" (UniqueName: \"kubernetes.io/projected/9e533553-6470-42f6-b41f-bbe33d5a9567-kube-api-access-c74zx\") pod \"coredns-787d4945fb-csrbk\" (UID: \"9e533553-6470-42f6-b41f-bbe33d5a9567\") " pod="kube-system/coredns-787d4945fb-csrbk" Feb 13 02:22:33.594810 kubelet[2695]: I0213 02:22:33.594781 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqcr7\" (UniqueName: \"kubernetes.io/projected/f5415c7e-8682-47a2-b3dd-d3a075ca61c0-kube-api-access-pqcr7\") pod \"coredns-787d4945fb-d2ngw\" (UID: \"f5415c7e-8682-47a2-b3dd-d3a075ca61c0\") " pod="kube-system/coredns-787d4945fb-d2ngw" Feb 13 02:22:33.618454 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 02:22:33.843865 env[1543]: time="2024-02-13T02:22:33.843746734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-csrbk,Uid:9e533553-6470-42f6-b41f-bbe33d5a9567,Namespace:kube-system,Attempt:0,}" Feb 13 02:22:33.845745 env[1543]: time="2024-02-13T02:22:33.844633597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-d2ngw,Uid:f5415c7e-8682-47a2-b3dd-d3a075ca61c0,Namespace:kube-system,Attempt:0,}" Feb 13 02:22:34.418133 kubelet[2695]: I0213 02:22:34.418115 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6czxd" podStartSLOduration=-9.22337202143668e+09 pod.CreationTimestamp="2024-02-13 02:22:19 +0000 UTC" firstStartedPulling="2024-02-13 02:22:20.543312488 +0000 UTC m=+15.334757287" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:34.417862581 +0000 UTC m=+29.209307338" watchObservedRunningTime="2024-02-13 02:22:34.418095288 +0000 UTC m=+29.209540043" Feb 13 02:22:35.209707 systemd-networkd[1392]: cilium_host: Link UP Feb 13 02:22:35.209868 systemd-networkd[1392]: cilium_net: Link UP Feb 13 02:22:35.209873 systemd-networkd[1392]: cilium_net: Gained carrier Feb 13 02:22:35.210063 systemd-networkd[1392]: cilium_host: Gained carrier Feb 13 02:22:35.218039 systemd-networkd[1392]: cilium_host: Gained IPv6LL Feb 13 02:22:35.218461 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 02:22:35.274230 systemd-networkd[1392]: cilium_vxlan: Link UP Feb 13 02:22:35.274233 systemd-networkd[1392]: cilium_vxlan: Gained carrier Feb 13 02:22:35.409530 kernel: NET: Registered PF_ALG protocol family Feb 13 02:22:35.940855 systemd-networkd[1392]: lxc_health: Link UP Feb 13 02:22:35.967220 systemd-networkd[1392]: lxc_health: Gained carrier Feb 13 02:22:35.967502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 02:22:36.021595 systemd-networkd[1392]: cilium_net: Gained IPv6LL Feb 13 02:22:36.367166 systemd-networkd[1392]: lxc9c0cbc79c265: Link UP Feb 13 02:22:36.398522 kernel: eth0: renamed from tmp3f173 Feb 13 02:22:36.416530 kernel: eth0: renamed from tmp44adf Feb 13 02:22:36.435455 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9c0cbc79c265: link becomes ready Feb 13 02:22:36.435503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 02:22:36.449493 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9935956d242b: link becomes ready Feb 13 02:22:36.449508 systemd-networkd[1392]: lxc9935956d242b: Link UP Feb 13 02:22:36.450104 systemd-networkd[1392]: lxc9c0cbc79c265: Gained carrier Feb 13 02:22:36.450249 systemd-networkd[1392]: lxc9935956d242b: Gained carrier Feb 13 02:22:36.533594 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Feb 13 02:22:37.493598 systemd-networkd[1392]: lxc_health: Gained IPv6LL Feb 13 02:22:37.749567 systemd-networkd[1392]: lxc9935956d242b: Gained IPv6LL Feb 13 02:22:37.813536 systemd-networkd[1392]: lxc9c0cbc79c265: Gained IPv6LL Feb 13 02:22:38.714677 env[1543]: time="2024-02-13T02:22:38.714636504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 02:22:38.714677 env[1543]: time="2024-02-13T02:22:38.714663684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 02:22:38.714941 env[1543]: time="2024-02-13T02:22:38.714679720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 02:22:38.714941 env[1543]: time="2024-02-13T02:22:38.714761967Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f1736209d557638ac7d7ebfe624bd0973b7d63cf41fc27f3f6e864f61a39cb7 pid=4118 runtime=io.containerd.runc.v2 Feb 13 02:22:38.714941 env[1543]: time="2024-02-13T02:22:38.714877262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 02:22:38.714941 env[1543]: time="2024-02-13T02:22:38.714897514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 02:22:38.714941 env[1543]: time="2024-02-13T02:22:38.714905626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 02:22:38.715054 env[1543]: time="2024-02-13T02:22:38.715016691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44adf280e0b067fca98bb3588a2f337747921d2abc1100927756ed2548c4f518 pid=4121 runtime=io.containerd.runc.v2 Feb 13 02:22:38.746011 env[1543]: time="2024-02-13T02:22:38.745978537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-csrbk,Uid:9e533553-6470-42f6-b41f-bbe33d5a9567,Namespace:kube-system,Attempt:0,} returns sandbox id \"44adf280e0b067fca98bb3588a2f337747921d2abc1100927756ed2548c4f518\"" Feb 13 02:22:38.746119 env[1543]: time="2024-02-13T02:22:38.745979303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-d2ngw,Uid:f5415c7e-8682-47a2-b3dd-d3a075ca61c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f1736209d557638ac7d7ebfe624bd0973b7d63cf41fc27f3f6e864f61a39cb7\"" Feb 13 02:22:38.748099 env[1543]: time="2024-02-13T02:22:38.748073592Z" level=info msg="CreateContainer within sandbox \"3f1736209d557638ac7d7ebfe624bd0973b7d63cf41fc27f3f6e864f61a39cb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 02:22:38.748286 env[1543]: time="2024-02-13T02:22:38.748270430Z" level=info msg="CreateContainer within sandbox \"44adf280e0b067fca98bb3588a2f337747921d2abc1100927756ed2548c4f518\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 02:22:38.753370 env[1543]: time="2024-02-13T02:22:38.753323121Z" level=info msg="CreateContainer within sandbox \"3f1736209d557638ac7d7ebfe624bd0973b7d63cf41fc27f3f6e864f61a39cb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c14ed9780796d25a1ffd0239685f4fd38a64759a63bb1d4da1f609c245b47736\"" Feb 13 02:22:38.753577 env[1543]: time="2024-02-13T02:22:38.753532151Z" level=info msg="StartContainer for \"c14ed9780796d25a1ffd0239685f4fd38a64759a63bb1d4da1f609c245b47736\"" Feb 13 02:22:38.754213 env[1543]: time="2024-02-13T02:22:38.754168907Z" level=info msg="CreateContainer within sandbox \"44adf280e0b067fca98bb3588a2f337747921d2abc1100927756ed2548c4f518\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1ba530596764b8bbb8f228d7d817bb4c47eaa815e374292c8e14628ef2e1cd8\"" Feb 13 02:22:38.754334 env[1543]: time="2024-02-13T02:22:38.754320446Z" level=info msg="StartContainer for \"b1ba530596764b8bbb8f228d7d817bb4c47eaa815e374292c8e14628ef2e1cd8\"" Feb 13 02:22:38.790455 env[1543]: time="2024-02-13T02:22:38.790417079Z" level=info msg="StartContainer for 
\"c14ed9780796d25a1ffd0239685f4fd38a64759a63bb1d4da1f609c245b47736\" returns successfully" Feb 13 02:22:38.790565 env[1543]: time="2024-02-13T02:22:38.790549514Z" level=info msg="StartContainer for \"b1ba530596764b8bbb8f228d7d817bb4c47eaa815e374292c8e14628ef2e1cd8\" returns successfully" Feb 13 02:22:39.418617 kubelet[2695]: I0213 02:22:39.418578 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-d2ngw" podStartSLOduration=20.418527768 pod.CreationTimestamp="2024-02-13 02:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:39.418179954 +0000 UTC m=+34.209624741" watchObservedRunningTime="2024-02-13 02:22:39.418527768 +0000 UTC m=+34.209972544" Feb 13 02:22:39.434320 kubelet[2695]: I0213 02:22:39.434298 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-csrbk" podStartSLOduration=20.434268715 pod.CreationTimestamp="2024-02-13 02:22:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:22:39.433989714 +0000 UTC m=+34.225434471" watchObservedRunningTime="2024-02-13 02:22:39.434268715 +0000 UTC m=+34.225713472" Feb 13 02:22:42.857630 kubelet[2695]: I0213 02:22:42.857503 2695 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 02:23:17.073275 systemd[1]: Started sshd@6-136.144.54.113:22-109.123.237.173:58510.service. Feb 13 02:23:18.034804 sshd[4336]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:23:19.355732 sshd[4336]: Failed password for root from 109.123.237.173 port 58510 ssh2 Feb 13 02:23:20.575736 sshd[4336]: Received disconnect from 109.123.237.173 port 58510:11: Bye Bye [preauth] Feb 13 02:23:20.575736 sshd[4336]: Disconnected from authenticating user root 109.123.237.173 port 58510 [preauth] Feb 13 02:23:20.578240 systemd[1]: sshd@6-136.144.54.113:22-109.123.237.173:58510.service: Deactivated successfully. Feb 13 02:24:27.712580 systemd[1]: Started sshd@7-136.144.54.113:22-109.123.237.173:58590.service. Feb 13 02:24:28.658366 sshd[4357]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:24:31.259748 sshd[4357]: Failed password for root from 109.123.237.173 port 58590 ssh2 Feb 13 02:24:33.559502 sshd[4357]: Received disconnect from 109.123.237.173 port 58590:11: Bye Bye [preauth] Feb 13 02:24:33.559502 sshd[4357]: Disconnected from authenticating user root 109.123.237.173 port 58590 [preauth] Feb 13 02:24:33.560172 systemd[1]: sshd@7-136.144.54.113:22-109.123.237.173:58590.service: Deactivated successfully. Feb 13 02:25:36.052190 systemd[1]: Started sshd@8-136.144.54.113:22-109.123.237.173:58672.service. Feb 13 02:25:36.980274 sshd[4367]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:25:39.250508 sshd[4367]: Failed password for root from 109.123.237.173 port 58672 ssh2 Feb 13 02:25:39.514717 sshd[4367]: Received disconnect from 109.123.237.173 port 58672:11: Bye Bye [preauth] Feb 13 02:25:39.514717 sshd[4367]: Disconnected from authenticating user root 109.123.237.173 port 58672 [preauth] Feb 13 02:25:39.517060 systemd[1]: sshd@8-136.144.54.113:22-109.123.237.173:58672.service: Deactivated successfully. 
Feb 13 02:26:45.796781 systemd[1]: Started sshd@9-136.144.54.113:22-109.123.237.173:58750.service. Feb 13 02:26:46.724821 sshd[4382]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:26:48.267892 sshd[4382]: Failed password for root from 109.123.237.173 port 58750 ssh2 Feb 13 02:26:49.252331 sshd[4382]: Received disconnect from 109.123.237.173 port 58750:11: Bye Bye [preauth] Feb 13 02:26:49.252331 sshd[4382]: Disconnected from authenticating user root 109.123.237.173 port 58750 [preauth] Feb 13 02:26:49.255024 systemd[1]: sshd@9-136.144.54.113:22-109.123.237.173:58750.service: Deactivated successfully. Feb 13 02:27:48.278923 systemd[1]: Started sshd@10-136.144.54.113:22-139.178.68.195:50632.service. Feb 13 02:27:48.316202 sshd[4393]: Accepted publickey for core from 139.178.68.195 port 50632 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:27:48.319393 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:27:48.331002 systemd-logind[1529]: New session 8 of user core. Feb 13 02:27:48.334152 systemd[1]: Started session-8.scope. Feb 13 02:27:48.495717 sshd[4393]: pam_unix(sshd:session): session closed for user core Feb 13 02:27:48.497577 systemd[1]: sshd@10-136.144.54.113:22-139.178.68.195:50632.service: Deactivated successfully. Feb 13 02:27:48.498418 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Feb 13 02:27:48.498517 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 02:27:48.499353 systemd-logind[1529]: Removed session 8. Feb 13 02:27:53.502082 systemd[1]: Started sshd@11-136.144.54.113:22-139.178.68.195:50640.service. Feb 13 02:27:53.538168 sshd[4425]: Accepted publickey for core from 139.178.68.195 port 50640 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:27:53.539107 sshd[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:27:53.542726 systemd-logind[1529]: New session 9 of user core. Feb 13 02:27:53.543546 systemd[1]: Started session-9.scope. Feb 13 02:27:53.634074 sshd[4425]: pam_unix(sshd:session): session closed for user core Feb 13 02:27:53.635786 systemd[1]: sshd@11-136.144.54.113:22-139.178.68.195:50640.service: Deactivated successfully. Feb 13 02:27:53.636460 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit. Feb 13 02:27:53.636512 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 02:27:53.637228 systemd-logind[1529]: Removed session 9. Feb 13 02:27:54.277796 systemd[1]: Started sshd@12-136.144.54.113:22-109.123.237.173:58826.service. Feb 13 02:27:55.217608 sshd[4453]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=109.123.237.173 user=root Feb 13 02:27:57.236921 sshd[4453]: Failed password for root from 109.123.237.173 port 58826 ssh2 Feb 13 02:27:57.745352 sshd[4453]: Received disconnect from 109.123.237.173 port 58826:11: Bye Bye [preauth] Feb 13 02:27:57.745352 sshd[4453]: Disconnected from authenticating user root 109.123.237.173 port 58826 [preauth] Feb 13 02:27:57.747843 systemd[1]: sshd@12-136.144.54.113:22-109.123.237.173:58826.service: Deactivated successfully. Feb 13 02:27:58.640860 systemd[1]: Started sshd@13-136.144.54.113:22-139.178.68.195:51460.service. 
Feb 13 02:27:58.676873 sshd[4458]: Accepted publickey for core from 139.178.68.195 port 51460 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:27:58.677835 sshd[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:27:58.681472 systemd-logind[1529]: New session 10 of user core. Feb 13 02:27:58.682245 systemd[1]: Started session-10.scope. Feb 13 02:27:58.769669 sshd[4458]: pam_unix(sshd:session): session closed for user core Feb 13 02:27:58.771185 systemd[1]: sshd@13-136.144.54.113:22-139.178.68.195:51460.service: Deactivated successfully. Feb 13 02:27:58.771842 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 02:27:58.771886 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Feb 13 02:27:58.772353 systemd-logind[1529]: Removed session 10. Feb 13 02:28:03.777661 systemd[1]: Started sshd@14-136.144.54.113:22-139.178.68.195:51464.service. Feb 13 02:28:03.854546 sshd[4486]: Accepted publickey for core from 139.178.68.195 port 51464 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:03.857831 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:03.868746 systemd-logind[1529]: New session 11 of user core. Feb 13 02:28:03.871318 systemd[1]: Started session-11.scope. Feb 13 02:28:04.018654 sshd[4486]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:04.020429 systemd[1]: Started sshd@15-136.144.54.113:22-139.178.68.195:51470.service. Feb 13 02:28:04.020712 systemd[1]: sshd@14-136.144.54.113:22-139.178.68.195:51464.service: Deactivated successfully. Feb 13 02:28:04.021303 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Feb 13 02:28:04.021322 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 02:28:04.021925 systemd-logind[1529]: Removed session 11. Feb 13 02:28:04.057701 sshd[4512]: Accepted publickey for core from 139.178.68.195 port 51470 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:04.061265 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:04.072352 systemd-logind[1529]: New session 12 of user core. Feb 13 02:28:04.074822 systemd[1]: Started session-12.scope. Feb 13 02:28:04.595215 sshd[4512]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:04.596796 systemd[1]: Started sshd@16-136.144.54.113:22-139.178.68.195:51472.service. Feb 13 02:28:04.597081 systemd[1]: sshd@15-136.144.54.113:22-139.178.68.195:51470.service: Deactivated successfully. Feb 13 02:28:04.597761 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Feb 13 02:28:04.597768 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 02:28:04.598402 systemd-logind[1529]: Removed session 12. Feb 13 02:28:04.634887 sshd[4535]: Accepted publickey for core from 139.178.68.195 port 51472 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:04.638160 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:04.648920 systemd-logind[1529]: New session 13 of user core. Feb 13 02:28:04.651385 systemd[1]: Started session-13.scope. Feb 13 02:28:04.795876 sshd[4535]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:04.797453 systemd[1]: sshd@16-136.144.54.113:22-139.178.68.195:51472.service: Deactivated successfully. Feb 13 02:28:04.798202 systemd[1]: session-13.scope: Deactivated successfully. 
Feb 13 02:28:04.798208 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Feb 13 02:28:04.798993 systemd-logind[1529]: Removed session 13. Feb 13 02:28:09.803707 systemd[1]: Started sshd@17-136.144.54.113:22-139.178.68.195:49620.service. Feb 13 02:28:09.840214 sshd[4567]: Accepted publickey for core from 139.178.68.195 port 49620 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:09.841488 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:09.845504 systemd-logind[1529]: New session 14 of user core. Feb 13 02:28:09.846678 systemd[1]: Started session-14.scope. Feb 13 02:28:09.934970 sshd[4567]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:09.936345 systemd[1]: sshd@17-136.144.54.113:22-139.178.68.195:49620.service: Deactivated successfully. Feb 13 02:28:09.936957 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 02:28:09.936972 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Feb 13 02:28:09.937411 systemd-logind[1529]: Removed session 14. Feb 13 02:28:14.942390 systemd[1]: Started sshd@18-136.144.54.113:22-139.178.68.195:49632.service. Feb 13 02:28:14.978867 sshd[4593]: Accepted publickey for core from 139.178.68.195 port 49632 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:14.979846 sshd[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:14.983520 systemd-logind[1529]: New session 15 of user core. Feb 13 02:28:14.984557 systemd[1]: Started session-15.scope. Feb 13 02:28:15.083142 sshd[4593]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:15.088994 systemd[1]: sshd@18-136.144.54.113:22-139.178.68.195:49632.service: Deactivated successfully. Feb 13 02:28:15.091656 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Feb 13 02:28:15.091963 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 02:28:15.094329 systemd-logind[1529]: Removed session 15. Feb 13 02:28:20.090463 systemd[1]: Started sshd@19-136.144.54.113:22-139.178.68.195:47454.service. Feb 13 02:28:20.130493 sshd[4618]: Accepted publickey for core from 139.178.68.195 port 47454 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:20.131380 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:20.134797 systemd-logind[1529]: New session 16 of user core. Feb 13 02:28:20.135779 systemd[1]: Started session-16.scope. Feb 13 02:28:20.227814 sshd[4618]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:20.229544 systemd[1]: Started sshd@20-136.144.54.113:22-139.178.68.195:47460.service. Feb 13 02:28:20.229890 systemd[1]: sshd@19-136.144.54.113:22-139.178.68.195:47454.service: Deactivated successfully. Feb 13 02:28:20.230381 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Feb 13 02:28:20.230431 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 02:28:20.230925 systemd-logind[1529]: Removed session 16. Feb 13 02:28:20.266936 sshd[4643]: Accepted publickey for core from 139.178.68.195 port 47460 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:20.270160 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:20.280698 systemd-logind[1529]: New session 17 of user core. Feb 13 02:28:20.283821 systemd[1]: Started session-17.scope. 
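Sessions 8 through 16 above are each opened and closed within a fraction of a second to a few seconds, which reads like a scripted check-in rather than interactive use. A sketch pairing systemd-logind's "New session N" and "Removed session N" lines to measure the spans (assumes one journal entry per line and a single year of logs):

import re
from datetime import datetime

STAMP = re.compile(r"^(\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+)")
NEW = re.compile(r"New session (\d+) of user (\w+)")
GONE = re.compile(r"Removed session (\d+)")

def session_spans(journal_lines, year=2024):
    opened, spans = {}, {}
    for line in journal_lines:
        ts = STAMP.match(line)
        if not ts:
            continue
        t = datetime.strptime(f"{year} {ts.group(1)}", "%Y %b %d %H:%M:%S.%f")
        if m := NEW.search(line):
            opened[m.group(1)] = t
        elif (m := GONE.search(line)) and m.group(1) in opened:
            spans[m.group(1)] = (t - opened.pop(m.group(1))).total_seconds()
    return spans  # e.g. session 8 -> ~0.17 s, session 9 -> ~0.09 s here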
Feb 13 02:28:21.514830 sshd[4643]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:21.516579 systemd[1]: Started sshd@21-136.144.54.113:22-139.178.68.195:47476.service. Feb 13 02:28:21.516845 systemd[1]: sshd@20-136.144.54.113:22-139.178.68.195:47460.service: Deactivated successfully. Feb 13 02:28:21.517390 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Feb 13 02:28:21.517418 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 02:28:21.518011 systemd-logind[1529]: Removed session 17. Feb 13 02:28:21.553544 sshd[4670]: Accepted publickey for core from 139.178.68.195 port 47476 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:21.554608 sshd[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:21.558804 systemd-logind[1529]: New session 18 of user core. Feb 13 02:28:21.559716 systemd[1]: Started session-18.scope. Feb 13 02:28:22.436062 sshd[4670]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:22.443891 systemd[1]: Started sshd@22-136.144.54.113:22-139.178.68.195:47492.service. Feb 13 02:28:22.446411 systemd[1]: sshd@21-136.144.54.113:22-139.178.68.195:47476.service: Deactivated successfully. Feb 13 02:28:22.449052 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 02:28:22.449107 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit. Feb 13 02:28:22.451261 systemd-logind[1529]: Removed session 18. Feb 13 02:28:22.495286 sshd[4713]: Accepted publickey for core from 139.178.68.195 port 47492 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:22.496346 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:22.499491 systemd-logind[1529]: New session 19 of user core. Feb 13 02:28:22.500228 systemd[1]: Started session-19.scope. Feb 13 02:28:22.684545 sshd[4713]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:22.686267 systemd[1]: Started sshd@23-136.144.54.113:22-139.178.68.195:47500.service. Feb 13 02:28:22.686534 systemd[1]: sshd@22-136.144.54.113:22-139.178.68.195:47492.service: Deactivated successfully. Feb 13 02:28:22.687117 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit. Feb 13 02:28:22.687148 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 02:28:22.687679 systemd-logind[1529]: Removed session 19. Feb 13 02:28:22.722314 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 47500 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:22.723133 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:22.725964 systemd-logind[1529]: New session 20 of user core. Feb 13 02:28:22.726470 systemd[1]: Started session-20.scope. Feb 13 02:28:22.871662 sshd[4780]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:22.873314 systemd[1]: sshd@23-136.144.54.113:22-139.178.68.195:47500.service: Deactivated successfully. Feb 13 02:28:22.874038 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit. Feb 13 02:28:22.874049 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 02:28:22.874591 systemd-logind[1529]: Removed session 20. Feb 13 02:28:27.879128 systemd[1]: Started sshd@24-136.144.54.113:22-139.178.68.195:35596.service. 
Feb 13 02:28:27.915455 sshd[4835]: Accepted publickey for core from 139.178.68.195 port 35596 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:27.916545 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:27.920308 systemd-logind[1529]: New session 21 of user core. Feb 13 02:28:27.921105 systemd[1]: Started session-21.scope. Feb 13 02:28:28.013456 sshd[4835]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:28.014939 systemd[1]: sshd@24-136.144.54.113:22-139.178.68.195:35596.service: Deactivated successfully. Feb 13 02:28:28.015522 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 02:28:28.015526 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit. Feb 13 02:28:28.016200 systemd-logind[1529]: Removed session 21. Feb 13 02:28:33.015528 systemd[1]: Started sshd@25-136.144.54.113:22-139.178.68.195:35598.service. Feb 13 02:28:33.052663 sshd[4863]: Accepted publickey for core from 139.178.68.195 port 35598 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:33.053613 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:33.056962 systemd-logind[1529]: New session 22 of user core. Feb 13 02:28:33.057656 systemd[1]: Started session-22.scope. Feb 13 02:28:33.145746 sshd[4863]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:33.147208 systemd[1]: sshd@25-136.144.54.113:22-139.178.68.195:35598.service: Deactivated successfully. Feb 13 02:28:33.147863 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit. Feb 13 02:28:33.147874 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 02:28:33.148363 systemd-logind[1529]: Removed session 22. Feb 13 02:28:38.152806 systemd[1]: Started sshd@26-136.144.54.113:22-139.178.68.195:44970.service. Feb 13 02:28:38.189331 sshd[4885]: Accepted publickey for core from 139.178.68.195 port 44970 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:38.190457 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:38.194280 systemd-logind[1529]: New session 23 of user core. Feb 13 02:28:38.195172 systemd[1]: Started session-23.scope. Feb 13 02:28:38.285155 sshd[4885]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:38.286865 systemd[1]: Started sshd@27-136.144.54.113:22-139.178.68.195:44982.service. Feb 13 02:28:38.287162 systemd[1]: sshd@26-136.144.54.113:22-139.178.68.195:44970.service: Deactivated successfully. Feb 13 02:28:38.287735 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 02:28:38.287743 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit. Feb 13 02:28:38.288291 systemd-logind[1529]: Removed session 23. Feb 13 02:28:38.322978 sshd[4906]: Accepted publickey for core from 139.178.68.195 port 44982 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:38.323749 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:38.326664 systemd-logind[1529]: New session 24 of user core. Feb 13 02:28:38.327221 systemd[1]: Started session-24.scope. 
Feb 13 02:28:39.917846 env[1543]: time="2024-02-13T02:28:39.917822571Z" level=info msg="StopContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" with timeout 30 (s)" Feb 13 02:28:39.918110 env[1543]: time="2024-02-13T02:28:39.918028199Z" level=info msg="Stop container \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" with signal terminated" Feb 13 02:28:39.928671 env[1543]: time="2024-02-13T02:28:39.928624486Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 02:28:39.931354 env[1543]: time="2024-02-13T02:28:39.931338330Z" level=info msg="StopContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" with timeout 1 (s)" Feb 13 02:28:39.931447 env[1543]: time="2024-02-13T02:28:39.931434271Z" level=info msg="Stop container \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" with signal terminated" Feb 13 02:28:39.931485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9-rootfs.mount: Deactivated successfully. Feb 13 02:28:39.934305 systemd-networkd[1392]: lxc_health: Link DOWN Feb 13 02:28:39.934308 systemd-networkd[1392]: lxc_health: Lost carrier Feb 13 02:28:39.948502 env[1543]: time="2024-02-13T02:28:39.948380181Z" level=info msg="shim disconnected" id=21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9 Feb 13 02:28:39.948874 env[1543]: time="2024-02-13T02:28:39.948517533Z" level=warning msg="cleaning up after shim disconnected" id=21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9 namespace=k8s.io Feb 13 02:28:39.948874 env[1543]: time="2024-02-13T02:28:39.948553597Z" level=info msg="cleaning up dead shim" Feb 13 02:28:39.965175 env[1543]: time="2024-02-13T02:28:39.965065390Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4974 runtime=io.containerd.runc.v2\n" Feb 13 02:28:39.967230 env[1543]: time="2024-02-13T02:28:39.967114521Z" level=info msg="StopContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" returns successfully" Feb 13 02:28:39.968336 env[1543]: time="2024-02-13T02:28:39.968230924Z" level=info msg="StopPodSandbox for \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\"" Feb 13 02:28:39.968595 env[1543]: time="2024-02-13T02:28:39.968388772Z" level=info msg="Container to stop \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:39.974472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c-shm.mount: Deactivated successfully. 
Feb 13 02:28:40.009226 env[1543]: time="2024-02-13T02:28:40.009165541Z" level=info msg="shim disconnected" id=63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c Feb 13 02:28:40.009400 env[1543]: time="2024-02-13T02:28:40.009231098Z" level=warning msg="cleaning up after shim disconnected" id=63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c namespace=k8s.io Feb 13 02:28:40.009400 env[1543]: time="2024-02-13T02:28:40.009250160Z" level=info msg="cleaning up dead shim" Feb 13 02:28:40.009613 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c-rootfs.mount: Deactivated successfully. Feb 13 02:28:40.015475 env[1543]: time="2024-02-13T02:28:40.015417993Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5018 runtime=io.containerd.runc.v2\n" Feb 13 02:28:40.015819 env[1543]: time="2024-02-13T02:28:40.015791846Z" level=info msg="TearDown network for sandbox \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\" successfully" Feb 13 02:28:40.015889 env[1543]: time="2024-02-13T02:28:40.015823483Z" level=info msg="StopPodSandbox for \"63f16a1a528354bf2924dbaff7d964e2d28aa0c30f33dde013ee10fd295cca6c\" returns successfully" Feb 13 02:28:40.015920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd-rootfs.mount: Deactivated successfully. Feb 13 02:28:40.016076 env[1543]: time="2024-02-13T02:28:40.015975260Z" level=info msg="shim disconnected" id=def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd Feb 13 02:28:40.016076 env[1543]: time="2024-02-13T02:28:40.016020269Z" level=warning msg="cleaning up after shim disconnected" id=def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd namespace=k8s.io Feb 13 02:28:40.016076 env[1543]: time="2024-02-13T02:28:40.016036147Z" level=info msg="cleaning up dead shim" Feb 13 02:28:40.022636 env[1543]: time="2024-02-13T02:28:40.022578863Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5035 runtime=io.containerd.runc.v2\n" Feb 13 02:28:40.023598 env[1543]: time="2024-02-13T02:28:40.023554552Z" level=info msg="StopContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" returns successfully" Feb 13 02:28:40.023862 env[1543]: time="2024-02-13T02:28:40.023838551Z" level=info msg="StopPodSandbox for \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\"" Feb 13 02:28:40.023916 env[1543]: time="2024-02-13T02:28:40.023893006Z" level=info msg="Container to stop \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:40.023916 env[1543]: time="2024-02-13T02:28:40.023908830Z" level=info msg="Container to stop \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:40.023998 env[1543]: time="2024-02-13T02:28:40.023919771Z" level=info msg="Container to stop \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:40.023998 env[1543]: time="2024-02-13T02:28:40.023930275Z" level=info msg="Container to stop \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" must be 
in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:40.023998 env[1543]: time="2024-02-13T02:28:40.023940153Z" level=info msg="Container to stop \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 02:28:40.040460 env[1543]: time="2024-02-13T02:28:40.040404684Z" level=info msg="shim disconnected" id=47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01 Feb 13 02:28:40.040614 env[1543]: time="2024-02-13T02:28:40.040467084Z" level=warning msg="cleaning up after shim disconnected" id=47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01 namespace=k8s.io Feb 13 02:28:40.040614 env[1543]: time="2024-02-13T02:28:40.040480874Z" level=info msg="cleaning up dead shim" Feb 13 02:28:40.046441 env[1543]: time="2024-02-13T02:28:40.046391071Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5066 runtime=io.containerd.runc.v2\n" Feb 13 02:28:40.046686 env[1543]: time="2024-02-13T02:28:40.046638094Z" level=info msg="TearDown network for sandbox \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" successfully" Feb 13 02:28:40.046686 env[1543]: time="2024-02-13T02:28:40.046659454Z" level=info msg="StopPodSandbox for \"47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01\" returns successfully" Feb 13 02:28:40.113253 kubelet[2695]: I0213 02:28:40.113147 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cni-path\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.113253 kubelet[2695]: I0213 02:28:40.113264 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54s2s\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-kube-api-access-54s2s\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113326 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-lib-modules\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113277 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cni-path" (OuterVolumeSpecName: "cni-path") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113388 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-etc-cni-netd\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113471 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ae5097-a7c9-4039-9bc0-d616d239369b-clustermesh-secrets\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113510 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.114343 kubelet[2695]: I0213 02:28:40.113535 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-xtables-lock\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.115005 kubelet[2695]: I0213 02:28:40.113578 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.115005 kubelet[2695]: I0213 02:28:40.113522 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.115005 kubelet[2695]: I0213 02:28:40.113682 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-config-path\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.115005 kubelet[2695]: I0213 02:28:40.113752 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-kernel\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.115005 kubelet[2695]: I0213 02:28:40.113812 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-run\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.115566 kubelet[2695]: I0213 02:28:40.113863 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.115566 kubelet[2695]: I0213 02:28:40.113894 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-bpf-maps\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.115566 kubelet[2695]: I0213 02:28:40.114014 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c052796-7445-4971-88b3-002c9a666486-cilium-config-path\") pod \"6c052796-7445-4971-88b3-002c9a666486\" (UID: \"6c052796-7445-4971-88b3-002c9a666486\") " Feb 13 02:28:40.115566 kubelet[2695]: I0213 02:28:40.113937 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.115566 kubelet[2695]: W0213 02:28:40.114071 2695 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/31ae5097-a7c9-4039-9bc0-d616d239369b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 02:28:40.115566 kubelet[2695]: I0213 02:28:40.113990 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.116172 kubelet[2695]: I0213 02:28:40.114112 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-net\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.116172 kubelet[2695]: I0213 02:28:40.114224 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlkjj\" (UniqueName: \"kubernetes.io/projected/6c052796-7445-4971-88b3-002c9a666486-kube-api-access-tlkjj\") pod \"6c052796-7445-4971-88b3-002c9a666486\" (UID: \"6c052796-7445-4971-88b3-002c9a666486\") " Feb 13 02:28:40.116172 kubelet[2695]: I0213 02:28:40.114201 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.116172 kubelet[2695]: I0213 02:28:40.114318 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-cgroup\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.116172 kubelet[2695]: I0213 02:28:40.114390 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.116172 kubelet[2695]: W0213 02:28:40.114418 2695 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6c052796-7445-4971-88b3-002c9a666486/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114421 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-hubble-tls\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114610 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-hostproc\") pod \"31ae5097-a7c9-4039-9bc0-d616d239369b\" (UID: \"31ae5097-a7c9-4039-9bc0-d616d239369b\") " Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114695 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-hostproc" (OuterVolumeSpecName: "hostproc") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114762 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-run\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114829 2695 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-bpf-maps\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114896 2695 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.116877 kubelet[2695]: I0213 02:28:40.114957 2695 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-host-proc-sys-net\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.117589 kubelet[2695]: I0213 02:28:40.115017 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-cgroup\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.117589 kubelet[2695]: I0213 02:28:40.115077 2695 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-lib-modules\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.117589 kubelet[2695]: I0213 02:28:40.115134 2695 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-etc-cni-netd\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.117589 kubelet[2695]: I0213 02:28:40.115188 2695 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-cni-path\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.117589 kubelet[2695]: I0213 02:28:40.115223 2695 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-xtables-lock\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.119797 kubelet[2695]: I0213 02:28:40.119706 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 02:28:40.120185 kubelet[2695]: I0213 02:28:40.120076 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c052796-7445-4971-88b3-002c9a666486-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6c052796-7445-4971-88b3-002c9a666486" (UID: "6c052796-7445-4971-88b3-002c9a666486"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 02:28:40.120399 kubelet[2695]: I0213 02:28:40.120221 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-kube-api-access-54s2s" (OuterVolumeSpecName: "kube-api-access-54s2s") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "kube-api-access-54s2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 02:28:40.120674 kubelet[2695]: I0213 02:28:40.120575 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c052796-7445-4971-88b3-002c9a666486-kube-api-access-tlkjj" (OuterVolumeSpecName: "kube-api-access-tlkjj") pod "6c052796-7445-4971-88b3-002c9a666486" (UID: "6c052796-7445-4971-88b3-002c9a666486"). InnerVolumeSpecName "kube-api-access-tlkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 02:28:40.120925 kubelet[2695]: I0213 02:28:40.120868 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 02:28:40.120925 kubelet[2695]: I0213 02:28:40.120860 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ae5097-a7c9-4039-9bc0-d616d239369b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31ae5097-a7c9-4039-9bc0-d616d239369b" (UID: "31ae5097-a7c9-4039-9bc0-d616d239369b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215526 2695 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31ae5097-a7c9-4039-9bc0-d616d239369b-clustermesh-secrets\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215596 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31ae5097-a7c9-4039-9bc0-d616d239369b-cilium-config-path\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215632 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c052796-7445-4971-88b3-002c9a666486-cilium-config-path\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215669 2695 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tlkjj\" (UniqueName: \"kubernetes.io/projected/6c052796-7445-4971-88b3-002c9a666486-kube-api-access-tlkjj\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215704 2695 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-hubble-tls\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.215743 kubelet[2695]: I0213 02:28:40.215736 2695 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31ae5097-a7c9-4039-9bc0-d616d239369b-hostproc\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.216578 kubelet[2695]: I0213 02:28:40.215767 2695 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-54s2s\" (UniqueName: \"kubernetes.io/projected/31ae5097-a7c9-4039-9bc0-d616d239369b-kube-api-access-54s2s\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\"" Feb 13 02:28:40.393682 kubelet[2695]: I0213 02:28:40.393613 2695 scope.go:115] "RemoveContainer" containerID="def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd" Feb 13 02:28:40.396258 env[1543]: time="2024-02-13T02:28:40.396136879Z" level=info msg="RemoveContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\"" Feb 13 02:28:40.399957 env[1543]: time="2024-02-13T02:28:40.399942987Z" level=info msg="RemoveContainer for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" returns successfully" Feb 13 02:28:40.400099 kubelet[2695]: I0213 02:28:40.400066 2695 scope.go:115] "RemoveContainer" containerID="f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de" Feb 13 02:28:40.400501 env[1543]: time="2024-02-13T02:28:40.400465758Z" level=info msg="RemoveContainer for \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\"" Feb 13 02:28:40.401494 env[1543]: time="2024-02-13T02:28:40.401481240Z" level=info msg="RemoveContainer for \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\" returns successfully" Feb 13 02:28:40.401538 kubelet[2695]: I0213 02:28:40.401532 2695 scope.go:115] "RemoveContainer" containerID="040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f" Feb 13 02:28:40.401993 env[1543]: time="2024-02-13T02:28:40.401979336Z" level=info msg="RemoveContainer for \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\"" Feb 13 
02:28:40.402993 env[1543]: time="2024-02-13T02:28:40.402980982Z" level=info msg="RemoveContainer for \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\" returns successfully" Feb 13 02:28:40.403037 kubelet[2695]: I0213 02:28:40.403031 2695 scope.go:115] "RemoveContainer" containerID="d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf" Feb 13 02:28:40.403419 env[1543]: time="2024-02-13T02:28:40.403403717Z" level=info msg="RemoveContainer for \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\"" Feb 13 02:28:40.404603 env[1543]: time="2024-02-13T02:28:40.404590514Z" level=info msg="RemoveContainer for \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\" returns successfully" Feb 13 02:28:40.404651 kubelet[2695]: I0213 02:28:40.404644 2695 scope.go:115] "RemoveContainer" containerID="a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8" Feb 13 02:28:40.405072 env[1543]: time="2024-02-13T02:28:40.405032042Z" level=info msg="RemoveContainer for \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\"" Feb 13 02:28:40.406077 env[1543]: time="2024-02-13T02:28:40.406041809Z" level=info msg="RemoveContainer for \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\" returns successfully" Feb 13 02:28:40.406122 kubelet[2695]: I0213 02:28:40.406111 2695 scope.go:115] "RemoveContainer" containerID="def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd" Feb 13 02:28:40.406253 env[1543]: time="2024-02-13T02:28:40.406186695Z" level=error msg="ContainerStatus for \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\": not found" Feb 13 02:28:40.406307 kubelet[2695]: E0213 02:28:40.406279 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\": not found" containerID="def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd" Feb 13 02:28:40.406307 kubelet[2695]: I0213 02:28:40.406303 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd} err="failed to get container status \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"def49969f02a3e033b7819b3f90773c81a9bcb0fbe2c0293041faab188e72dcd\": not found" Feb 13 02:28:40.406361 kubelet[2695]: I0213 02:28:40.406311 2695 scope.go:115] "RemoveContainer" containerID="f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de" Feb 13 02:28:40.406450 env[1543]: time="2024-02-13T02:28:40.406410141Z" level=error msg="ContainerStatus for \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\": not found" Feb 13 02:28:40.406515 kubelet[2695]: E0213 02:28:40.406506 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\": not found" 
containerID="f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de" Feb 13 02:28:40.406557 kubelet[2695]: I0213 02:28:40.406527 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de} err="failed to get container status \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\": rpc error: code = NotFound desc = an error occurred when try to find container \"f414e529e00e24fd4ba709ad5abd9d671ebe759804200c0d743f6725eff3e8de\": not found" Feb 13 02:28:40.406557 kubelet[2695]: I0213 02:28:40.406536 2695 scope.go:115] "RemoveContainer" containerID="040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f" Feb 13 02:28:40.406656 env[1543]: time="2024-02-13T02:28:40.406621571Z" level=error msg="ContainerStatus for \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\": not found" Feb 13 02:28:40.406724 kubelet[2695]: E0213 02:28:40.406716 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\": not found" containerID="040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f" Feb 13 02:28:40.406766 kubelet[2695]: I0213 02:28:40.406737 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f} err="failed to get container status \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\": rpc error: code = NotFound desc = an error occurred when try to find container \"040ce3673b20a9210087c01a712cb764c2c520933bd8bd190d2f251c2ae0d18f\": not found" Feb 13 02:28:40.406766 kubelet[2695]: I0213 02:28:40.406745 2695 scope.go:115] "RemoveContainer" containerID="d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf" Feb 13 02:28:40.406897 env[1543]: time="2024-02-13T02:28:40.406864153Z" level=error msg="ContainerStatus for \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\": not found" Feb 13 02:28:40.406955 kubelet[2695]: E0213 02:28:40.406947 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\": not found" containerID="d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf" Feb 13 02:28:40.406994 kubelet[2695]: I0213 02:28:40.406965 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf} err="failed to get container status \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6777b6dd0c136cbba3fb51856131108b6344299852d9329f8222508faa68caf\": not found" Feb 13 02:28:40.406994 kubelet[2695]: I0213 02:28:40.406971 2695 scope.go:115] "RemoveContainer" containerID="a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8" Feb 13 
02:28:40.407102 env[1543]: time="2024-02-13T02:28:40.407070336Z" level=error msg="ContainerStatus for \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\": not found" Feb 13 02:28:40.407172 kubelet[2695]: E0213 02:28:40.407162 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\": not found" containerID="a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8" Feb 13 02:28:40.407219 kubelet[2695]: I0213 02:28:40.407180 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8} err="failed to get container status \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9c5ccee99c80885ccd2056a442f2fbe4ec420f88129b156889c7bf4ca820da8\": not found" Feb 13 02:28:40.407219 kubelet[2695]: I0213 02:28:40.407186 2695 scope.go:115] "RemoveContainer" containerID="21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9" Feb 13 02:28:40.407788 env[1543]: time="2024-02-13T02:28:40.407771794Z" level=info msg="RemoveContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\"" Feb 13 02:28:40.409033 env[1543]: time="2024-02-13T02:28:40.409020521Z" level=info msg="RemoveContainer for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" returns successfully" Feb 13 02:28:40.409103 kubelet[2695]: I0213 02:28:40.409093 2695 scope.go:115] "RemoveContainer" containerID="21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9" Feb 13 02:28:40.409238 env[1543]: time="2024-02-13T02:28:40.409198657Z" level=error msg="ContainerStatus for \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\": not found" Feb 13 02:28:40.409291 kubelet[2695]: E0213 02:28:40.409285 2695 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\": not found" containerID="21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9" Feb 13 02:28:40.409316 kubelet[2695]: I0213 02:28:40.409300 2695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9} err="failed to get container status \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"21f1604a62c76ac8d9868b7a6637edcabe19bd01795ecd557ca9badf174143f9\": not found" Feb 13 02:28:40.446704 kubelet[2695]: E0213 02:28:40.446602 2695 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 02:28:40.922468 systemd[1]: 
var-lib-kubelet-pods-6c052796\x2d7445\x2d4971\x2d88b3\x2d002c9a666486-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlkjj.mount: Deactivated successfully. Feb 13 02:28:40.922551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01-rootfs.mount: Deactivated successfully. Feb 13 02:28:40.922601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47e962844bd904a6475ae8bddfaa05237cb1b1e05dc1853d071d71a1dbb78b01-shm.mount: Deactivated successfully. Feb 13 02:28:40.922649 systemd[1]: var-lib-kubelet-pods-31ae5097\x2da7c9\x2d4039\x2d9bc0\x2dd616d239369b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d54s2s.mount: Deactivated successfully. Feb 13 02:28:40.922702 systemd[1]: var-lib-kubelet-pods-31ae5097\x2da7c9\x2d4039\x2d9bc0\x2dd616d239369b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 02:28:40.922748 systemd[1]: var-lib-kubelet-pods-31ae5097\x2da7c9\x2d4039\x2d9bc0\x2dd616d239369b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 02:28:41.311810 kubelet[2695]: I0213 02:28:41.311781 2695 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=31ae5097-a7c9-4039-9bc0-d616d239369b path="/var/lib/kubelet/pods/31ae5097-a7c9-4039-9bc0-d616d239369b/volumes" Feb 13 02:28:41.312815 kubelet[2695]: I0213 02:28:41.312759 2695 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6c052796-7445-4971-88b3-002c9a666486 path="/var/lib/kubelet/pods/6c052796-7445-4971-88b3-002c9a666486/volumes" Feb 13 02:28:41.870355 sshd[4906]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:41.872241 systemd[1]: Started sshd@28-136.144.54.113:22-139.178.68.195:44996.service. Feb 13 02:28:41.872556 systemd[1]: sshd@27-136.144.54.113:22-139.178.68.195:44982.service: Deactivated successfully. Feb 13 02:28:41.873210 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit. Feb 13 02:28:41.873258 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 02:28:41.873775 systemd-logind[1529]: Removed session 24. Feb 13 02:28:41.909057 sshd[5086]: Accepted publickey for core from 139.178.68.195 port 44996 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:41.910136 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:41.913530 systemd-logind[1529]: New session 25 of user core. Feb 13 02:28:41.914393 systemd[1]: Started session-25.scope. Feb 13 02:28:42.332741 sshd[5086]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:42.334527 systemd[1]: Started sshd@29-136.144.54.113:22-139.178.68.195:44998.service. Feb 13 02:28:42.334861 systemd[1]: sshd@28-136.144.54.113:22-139.178.68.195:44996.service: Deactivated successfully. Feb 13 02:28:42.335361 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit. Feb 13 02:28:42.335400 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 02:28:42.335848 systemd-logind[1529]: Removed session 25. 
Feb 13 02:28:42.340171 kubelet[2695]: I0213 02:28:42.340148 2695 topology_manager.go:210] "Topology Admit Handler" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340187 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="mount-bpf-fs" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340193 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="clean-cilium-state" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340197 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="cilium-agent" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340202 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="mount-cgroup" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340205 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6c052796-7445-4971-88b3-002c9a666486" containerName="cilium-operator" Feb 13 02:28:42.340481 kubelet[2695]: E0213 02:28:42.340210 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="apply-sysctl-overwrites" Feb 13 02:28:42.340481 kubelet[2695]: I0213 02:28:42.340229 2695 memory_manager.go:346] "RemoveStaleState removing state" podUID="31ae5097-a7c9-4039-9bc0-d616d239369b" containerName="cilium-agent" Feb 13 02:28:42.340481 kubelet[2695]: I0213 02:28:42.340234 2695 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c052796-7445-4971-88b3-002c9a666486" containerName="cilium-operator" Feb 13 02:28:42.372035 sshd[5110]: Accepted publickey for core from 139.178.68.195 port 44998 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:42.372835 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:42.375443 systemd-logind[1529]: New session 26 of user core. Feb 13 02:28:42.375922 systemd[1]: Started session-26.scope. 
Feb 13 02:28:42.431939 kubelet[2695]: I0213 02:28:42.431889 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-bpf-maps\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432169 kubelet[2695]: I0213 02:28:42.432067 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-xtables-lock\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432302 kubelet[2695]: I0213 02:28:42.432183 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-ipsec-secrets\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432424 kubelet[2695]: I0213 02:28:42.432326 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-kernel\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432595 kubelet[2695]: I0213 02:28:42.432533 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-lib-modules\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432725 kubelet[2695]: I0213 02:28:42.432613 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-clustermesh-secrets\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432725 kubelet[2695]: I0213 02:28:42.432688 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbxzc\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-kube-api-access-jbxzc\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.432961 kubelet[2695]: I0213 02:28:42.432920 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cni-path\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433077 kubelet[2695]: I0213 02:28:42.433004 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-etc-cni-netd\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433077 kubelet[2695]: I0213 02:28:42.433057 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-hostproc\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433260 kubelet[2695]: I0213 02:28:42.433160 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-cgroup\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433351 kubelet[2695]: I0213 02:28:42.433265 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-net\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433351 kubelet[2695]: I0213 02:28:42.433323 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-run\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433556 kubelet[2695]: I0213 02:28:42.433373 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-config-path\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.433556 kubelet[2695]: I0213 02:28:42.433423 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-hubble-tls\") pod \"cilium-wvg5b\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") " pod="kube-system/cilium-wvg5b" Feb 13 02:28:42.515389 sshd[5110]: pam_unix(sshd:session): session closed for user core Feb 13 02:28:42.517473 systemd[1]: Started sshd@30-136.144.54.113:22-139.178.68.195:45010.service. Feb 13 02:28:42.517861 systemd[1]: sshd@29-136.144.54.113:22-139.178.68.195:44998.service: Deactivated successfully. Feb 13 02:28:42.518422 systemd-logind[1529]: Session 26 logged out. Waiting for processes to exit. Feb 13 02:28:42.518473 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 02:28:42.519121 systemd-logind[1529]: Removed session 26. Feb 13 02:28:42.554913 sshd[5136]: Accepted publickey for core from 139.178.68.195 port 45010 ssh2: RSA SHA256:SI20tmcLhMJrRXFSJfFiMLkgQ5/JIIloz0aulBy/J9I Feb 13 02:28:42.555826 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 02:28:42.558936 systemd-logind[1529]: New session 27 of user core. Feb 13 02:28:42.559791 systemd[1]: Started session-27.scope. Feb 13 02:28:42.643170 env[1543]: time="2024-02-13T02:28:42.643001517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvg5b,Uid:0213df12-6cdd-4798-9e6b-cedc12f4be31,Namespace:kube-system,Attempt:0,}" Feb 13 02:28:42.653538 env[1543]: time="2024-02-13T02:28:42.653430732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 02:28:42.653538 env[1543]: time="2024-02-13T02:28:42.653487743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 02:28:42.653538 env[1543]: time="2024-02-13T02:28:42.653503586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 02:28:42.653733 env[1543]: time="2024-02-13T02:28:42.653699001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1 pid=5166 runtime=io.containerd.runc.v2
Feb 13 02:28:42.675819 env[1543]: time="2024-02-13T02:28:42.675790174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvg5b,Uid:0213df12-6cdd-4798-9e6b-cedc12f4be31,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\""
Feb 13 02:28:42.677236 env[1543]: time="2024-02-13T02:28:42.677211098Z" level=info msg="CreateContainer within sandbox \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 02:28:42.682728 env[1543]: time="2024-02-13T02:28:42.682704154Z" level=info msg="CreateContainer within sandbox \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\""
Feb 13 02:28:42.683007 env[1543]: time="2024-02-13T02:28:42.682991615Z" level=info msg="StartContainer for \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\""
Feb 13 02:28:42.702654 env[1543]: time="2024-02-13T02:28:42.702629421Z" level=info msg="StartContainer for \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\" returns successfully"
Feb 13 02:28:42.719175 env[1543]: time="2024-02-13T02:28:42.719147582Z" level=info msg="shim disconnected" id=c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d
Feb 13 02:28:42.719175 env[1543]: time="2024-02-13T02:28:42.719173922Z" level=warning msg="cleaning up after shim disconnected" id=c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d namespace=k8s.io
Feb 13 02:28:42.719292 env[1543]: time="2024-02-13T02:28:42.719179234Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:42.722490 env[1543]: time="2024-02-13T02:28:42.722475518Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5252 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:43.411004 env[1543]: time="2024-02-13T02:28:43.410866925Z" level=info msg="StopPodSandbox for \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\""
Feb 13 02:28:43.411333 env[1543]: time="2024-02-13T02:28:43.411018369Z" level=info msg="Container to stop \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 02:28:43.426718 env[1543]: time="2024-02-13T02:28:43.426681188Z" level=info msg="shim disconnected" id=1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1
Feb 13 02:28:43.426718 env[1543]: time="2024-02-13T02:28:43.426718522Z" level=warning msg="cleaning up after shim disconnected" id=1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1 namespace=k8s.io
Feb 13 02:28:43.426904 env[1543]: time="2024-02-13T02:28:43.426728189Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:43.430219 env[1543]: time="2024-02-13T02:28:43.430166966Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5283 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:43.430343 env[1543]: time="2024-02-13T02:28:43.430329959Z" level=info msg="TearDown network for sandbox \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\" successfully"
Feb 13 02:28:43.430369 env[1543]: time="2024-02-13T02:28:43.430344747Z" level=info msg="StopPodSandbox for \"1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1\" returns successfully"
Feb 13 02:28:43.540331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1-rootfs.mount: Deactivated successfully.
Feb 13 02:28:43.540440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aa2db938d7a0aa42ad835084499ad101adb3ab78e9da676cba13b87124462e1-shm.mount: Deactivated successfully.
Feb 13 02:28:43.543756 kubelet[2695]: I0213 02:28:43.543726 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-xtables-lock\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543759 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543778 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cni-path\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543794 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-etc-cni-netd\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543803 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-hostproc\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543814 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-bpf-maps\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.543922 kubelet[2695]: I0213 02:28:43.543830 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-lib-modules\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544085 kubelet[2695]: I0213 02:28:43.543830 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544085 kubelet[2695]: I0213 02:28:43.543841 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-hostproc" (OuterVolumeSpecName: "hostproc") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544085 kubelet[2695]: I0213 02:28:43.543848 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cni-path" (OuterVolumeSpecName: "cni-path") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544085 kubelet[2695]: I0213 02:28:43.543853 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544085 kubelet[2695]: I0213 02:28:43.543852 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbxzc\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-kube-api-access-jbxzc\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543855 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543875 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-run\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543897 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-ipsec-secrets\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543914 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-kernel\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543925 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544253 kubelet[2695]: I0213 02:28:43.543934 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-clustermesh-secrets\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544420 kubelet[2695]: I0213 02:28:43.543947 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544420 kubelet[2695]: I0213 02:28:43.543955 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-net\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544420 kubelet[2695]: I0213 02:28:43.543964 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544420 kubelet[2695]: I0213 02:28:43.543974 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-hubble-tls\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544420 kubelet[2695]: I0213 02:28:43.543991 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-cgroup\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544010 2695 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-config-path\") pod \"0213df12-6cdd-4798-9e6b-cedc12f4be31\" (UID: \"0213df12-6cdd-4798-9e6b-cedc12f4be31\") "
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544037 2695 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-xtables-lock\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544045 2695 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cni-path\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544051 2695 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-etc-cni-netd\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544036 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544056 2695 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-hostproc\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544615 kubelet[2695]: I0213 02:28:43.544076 2695 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-bpf-maps\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544824 kubelet[2695]: I0213 02:28:43.544081 2695 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-lib-modules\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544824 kubelet[2695]: I0213 02:28:43.544091 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-run\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544824 kubelet[2695]: I0213 02:28:43.544102 2695 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544824 kubelet[2695]: I0213 02:28:43.544111 2695 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-host-proc-sys-net\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.544824 kubelet[2695]: W0213 02:28:43.544181 2695 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0213df12-6cdd-4798-9e6b-cedc12f4be31/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 13 02:28:43.545855 kubelet[2695]: I0213 02:28:43.545837 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-kube-api-access-jbxzc" (OuterVolumeSpecName: "kube-api-access-jbxzc") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "kube-api-access-jbxzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 02:28:43.545855 kubelet[2695]: I0213 02:28:43.545846 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 02:28:43.545939 kubelet[2695]: I0213 02:28:43.545923 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 02:28:43.546177 kubelet[2695]: I0213 02:28:43.546159 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 02:28:43.546177 kubelet[2695]: I0213 02:28:43.546161 2695 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0213df12-6cdd-4798-9e6b-cedc12f4be31" (UID: "0213df12-6cdd-4798-9e6b-cedc12f4be31"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 02:28:43.546656 systemd[1]: var-lib-kubelet-pods-0213df12\x2d6cdd\x2d4798\x2d9e6b\x2dcedc12f4be31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbxzc.mount: Deactivated successfully.
Feb 13 02:28:43.546737 systemd[1]: var-lib-kubelet-pods-0213df12\x2d6cdd\x2d4798\x2d9e6b\x2dcedc12f4be31-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 02:28:43.546794 systemd[1]: var-lib-kubelet-pods-0213df12\x2d6cdd\x2d4798\x2d9e6b\x2dcedc12f4be31-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 13 02:28:43.548331 systemd[1]: var-lib-kubelet-pods-0213df12\x2d6cdd\x2d4798\x2d9e6b\x2dcedc12f4be31-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 02:28:43.590726 kubelet[2695]: I0213 02:28:43.590716 2695 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-4f4948c732" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 02:28:43.590672204 +0000 UTC m=+398.382116958 LastTransitionTime:2024-02-13 02:28:43.590672204 +0000 UTC m=+398.382116958 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 13 02:28:43.645169 kubelet[2695]: I0213 02:28:43.645054 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.645169 kubelet[2695]: I0213 02:28:43.645136 2695 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0213df12-6cdd-4798-9e6b-cedc12f4be31-clustermesh-secrets\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.645169 kubelet[2695]: I0213 02:28:43.645172 2695 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-hubble-tls\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.645744 kubelet[2695]: I0213 02:28:43.645206 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-cgroup\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.645744 kubelet[2695]: I0213 02:28:43.645243 2695 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0213df12-6cdd-4798-9e6b-cedc12f4be31-cilium-config-path\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:43.645744 kubelet[2695]: I0213 02:28:43.645275 2695 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jbxzc\" (UniqueName: \"kubernetes.io/projected/0213df12-6cdd-4798-9e6b-cedc12f4be31-kube-api-access-jbxzc\") on node \"ci-3510.3.2-a-4f4948c732\" DevicePath \"\""
Feb 13 02:28:44.416792 kubelet[2695]: I0213 02:28:44.416730 2695 scope.go:115] "RemoveContainer" containerID="c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d"
Feb 13 02:28:44.419877 env[1543]: time="2024-02-13T02:28:44.419736384Z" level=info msg="RemoveContainer for \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\""
Feb 13 02:28:44.422625 env[1543]: time="2024-02-13T02:28:44.422587849Z" level=info msg="RemoveContainer for \"c0668da640550614d732b28f6ba19191ad984b171b3bc90736cf0be40f8da51d\" returns successfully"
Feb 13 02:28:44.444928 kubelet[2695]: I0213 02:28:44.444893 2695 topology_manager.go:210] "Topology Admit Handler"
Feb 13 02:28:44.445120 kubelet[2695]: E0213 02:28:44.445103 2695 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0213df12-6cdd-4798-9e6b-cedc12f4be31" containerName="mount-cgroup"
Feb 13 02:28:44.445205 kubelet[2695]: I0213 02:28:44.445138 2695 memory_manager.go:346] "RemoveStaleState removing state" podUID="0213df12-6cdd-4798-9e6b-cedc12f4be31" containerName="mount-cgroup"
Feb 13 02:28:44.552859 kubelet[2695]: I0213 02:28:44.552745 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54bc72cf-2a85-435e-8f70-f932c4b1b03d-hubble-tls\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.552943 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-etc-cni-netd\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.553141 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-cni-path\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.553316 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-cilium-run\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.553520 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slk2g\" (UniqueName: \"kubernetes.io/projected/54bc72cf-2a85-435e-8f70-f932c4b1b03d-kube-api-access-slk2g\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.553620 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-host-proc-sys-kernel\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.553801 kubelet[2695]: I0213 02:28:44.553751 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-lib-modules\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.553849 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-hostproc\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.553909 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-cilium-cgroup\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.554036 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54bc72cf-2a85-435e-8f70-f932c4b1b03d-cilium-config-path\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.554139 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-xtables-lock\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.554221 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-host-proc-sys-net\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.554466 kubelet[2695]: I0213 02:28:44.554288 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54bc72cf-2a85-435e-8f70-f932c4b1b03d-clustermesh-secrets\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.555071 kubelet[2695]: I0213 02:28:44.554469 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54bc72cf-2a85-435e-8f70-f932c4b1b03d-bpf-maps\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.555071 kubelet[2695]: I0213 02:28:44.554605 2695 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54bc72cf-2a85-435e-8f70-f932c4b1b03d-cilium-ipsec-secrets\") pod \"cilium-msfbf\" (UID: \"54bc72cf-2a85-435e-8f70-f932c4b1b03d\") " pod="kube-system/cilium-msfbf"
Feb 13 02:28:44.748155 env[1543]: time="2024-02-13T02:28:44.748118696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msfbf,Uid:54bc72cf-2a85-435e-8f70-f932c4b1b03d,Namespace:kube-system,Attempt:0,}"
Feb 13 02:28:44.756072 env[1543]: time="2024-02-13T02:28:44.755947000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 02:28:44.756072 env[1543]: time="2024-02-13T02:28:44.755990916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 02:28:44.756072 env[1543]: time="2024-02-13T02:28:44.756004486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 02:28:44.756247 env[1543]: time="2024-02-13T02:28:44.756170794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977 pid=5312 runtime=io.containerd.runc.v2
Feb 13 02:28:44.796052 env[1543]: time="2024-02-13T02:28:44.795923572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-msfbf,Uid:54bc72cf-2a85-435e-8f70-f932c4b1b03d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\""
Feb 13 02:28:44.800131 env[1543]: time="2024-02-13T02:28:44.800070894Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 02:28:44.810587 env[1543]: time="2024-02-13T02:28:44.810482584Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d111d03515d37c6eff029c9989108ec7f56aa079905943873cf31975a457c1f1\""
Feb 13 02:28:44.811317 env[1543]: time="2024-02-13T02:28:44.811260460Z" level=info msg="StartContainer for \"d111d03515d37c6eff029c9989108ec7f56aa079905943873cf31975a457c1f1\""
Feb 13 02:28:44.883974 env[1543]: time="2024-02-13T02:28:44.883858989Z" level=info msg="StartContainer for \"d111d03515d37c6eff029c9989108ec7f56aa079905943873cf31975a457c1f1\" returns successfully"
Feb 13 02:28:44.935838 env[1543]: time="2024-02-13T02:28:44.935708005Z" level=info msg="shim disconnected" id=d111d03515d37c6eff029c9989108ec7f56aa079905943873cf31975a457c1f1
Feb 13 02:28:44.935838 env[1543]: time="2024-02-13T02:28:44.935804186Z" level=warning msg="cleaning up after shim disconnected" id=d111d03515d37c6eff029c9989108ec7f56aa079905943873cf31975a457c1f1 namespace=k8s.io
Feb 13 02:28:44.935838 env[1543]: time="2024-02-13T02:28:44.935830932Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:44.948621 env[1543]: time="2024-02-13T02:28:44.948548082Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5394 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:45.314920 kubelet[2695]: I0213 02:28:45.314821 2695 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0213df12-6cdd-4798-9e6b-cedc12f4be31 path="/var/lib/kubelet/pods/0213df12-6cdd-4798-9e6b-cedc12f4be31/volumes"
Feb 13 02:28:45.429846 env[1543]: time="2024-02-13T02:28:45.429747593Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 02:28:45.437562 env[1543]: time="2024-02-13T02:28:45.437469227Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6862a298f5100d922338234ed9b5a48908c9538330ce8f1874da1516790ef170\""
Feb 13 02:28:45.437947 env[1543]: time="2024-02-13T02:28:45.437891931Z" level=info msg="StartContainer for \"6862a298f5100d922338234ed9b5a48908c9538330ce8f1874da1516790ef170\""
Feb 13 02:28:45.447569 kubelet[2695]: E0213 02:28:45.447553 2695 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 02:28:45.458726 env[1543]: time="2024-02-13T02:28:45.458674142Z" level=info msg="StartContainer for \"6862a298f5100d922338234ed9b5a48908c9538330ce8f1874da1516790ef170\" returns successfully"
Feb 13 02:28:45.470274 env[1543]: time="2024-02-13T02:28:45.470245202Z" level=info msg="shim disconnected" id=6862a298f5100d922338234ed9b5a48908c9538330ce8f1874da1516790ef170
Feb 13 02:28:45.470376 env[1543]: time="2024-02-13T02:28:45.470274880Z" level=warning msg="cleaning up after shim disconnected" id=6862a298f5100d922338234ed9b5a48908c9538330ce8f1874da1516790ef170 namespace=k8s.io
Feb 13 02:28:45.470376 env[1543]: time="2024-02-13T02:28:45.470282615Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:45.473981 env[1543]: time="2024-02-13T02:28:45.473934584Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5455 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:46.433849 env[1543]: time="2024-02-13T02:28:46.433783304Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 02:28:46.438755 env[1543]: time="2024-02-13T02:28:46.438725026Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd\""
Feb 13 02:28:46.439030 env[1543]: time="2024-02-13T02:28:46.439012908Z" level=info msg="StartContainer for \"556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd\""
Feb 13 02:28:46.463622 env[1543]: time="2024-02-13T02:28:46.463595451Z" level=info msg="StartContainer for \"556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd\" returns successfully"
Feb 13 02:28:46.474226 env[1543]: time="2024-02-13T02:28:46.474163358Z" level=info msg="shim disconnected" id=556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd
Feb 13 02:28:46.474226 env[1543]: time="2024-02-13T02:28:46.474194629Z" level=warning msg="cleaning up after shim disconnected" id=556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd namespace=k8s.io
Feb 13 02:28:46.474226 env[1543]: time="2024-02-13T02:28:46.474201477Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:46.477843 env[1543]: time="2024-02-13T02:28:46.477826485Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5511 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:46.666786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-556474e7e3df8ff583131e764eb4f8aa2a6ecb0c3ce55e699102b69b47ffc1bd-rootfs.mount: Deactivated successfully.
Feb 13 02:28:47.436935 env[1543]: time="2024-02-13T02:28:47.436900419Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 02:28:47.441975 env[1543]: time="2024-02-13T02:28:47.441940576Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9\""
Feb 13 02:28:47.442330 env[1543]: time="2024-02-13T02:28:47.442299123Z" level=info msg="StartContainer for \"85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9\""
Feb 13 02:28:47.444911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421905953.mount: Deactivated successfully.
Feb 13 02:28:47.464806 env[1543]: time="2024-02-13T02:28:47.464754175Z" level=info msg="StartContainer for \"85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9\" returns successfully"
Feb 13 02:28:47.473395 env[1543]: time="2024-02-13T02:28:47.473369843Z" level=info msg="shim disconnected" id=85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9
Feb 13 02:28:47.473395 env[1543]: time="2024-02-13T02:28:47.473394885Z" level=warning msg="cleaning up after shim disconnected" id=85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9 namespace=k8s.io
Feb 13 02:28:47.473560 env[1543]: time="2024-02-13T02:28:47.473400715Z" level=info msg="cleaning up dead shim"
Feb 13 02:28:47.477136 env[1543]: time="2024-02-13T02:28:47.477115990Z" level=warning msg="cleanup warnings time=\"2024-02-13T02:28:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5564 runtime=io.containerd.runc.v2\n"
Feb 13 02:28:47.667544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85f484b22cf9515aa45c5630a046f7906518ca6558bd426381dd77cc13be64a9-rootfs.mount: Deactivated successfully.
Feb 13 02:28:48.307978 kubelet[2695]: E0213 02:28:48.307846 2695 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-csrbk" podUID=9e533553-6470-42f6-b41f-bbe33d5a9567
Feb 13 02:28:48.439627 env[1543]: time="2024-02-13T02:28:48.439568247Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 02:28:48.445406 env[1543]: time="2024-02-13T02:28:48.445373661Z" level=info msg="CreateContainer within sandbox \"6ca3fd6a7fdfecfe32db985c67b12df9d11f049a0bab605f5e1b0ef7006aa977\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d5091c59a77f74a21f80316c7ac576602a23a1a1b4bd43abab8e4cb1a9425cfa\""
Feb 13 02:28:48.445803 env[1543]: time="2024-02-13T02:28:48.445774462Z" level=info msg="StartContainer for \"d5091c59a77f74a21f80316c7ac576602a23a1a1b4bd43abab8e4cb1a9425cfa\""
Feb 13 02:28:48.469311 env[1543]: time="2024-02-13T02:28:48.469287495Z" level=info msg="StartContainer for \"d5091c59a77f74a21f80316c7ac576602a23a1a1b4bd43abab8e4cb1a9425cfa\" returns successfully"
Feb 13 02:28:48.612457 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 02:28:49.487358 kubelet[2695]: I0213 02:28:49.487179 2695 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-msfbf" podStartSLOduration=5.487087316 pod.CreationTimestamp="2024-02-13 02:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 02:28:49.486910252 +0000 UTC m=+404.278355053" watchObservedRunningTime="2024-02-13 02:28:49.487087316 +0000 UTC m=+404.278532117"
Feb 13 02:28:50.307244 kubelet[2695]: E0213 02:28:50.307194 2695 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-csrbk" podUID=9e533553-6470-42f6-b41f-bbe33d5a9567
Feb 13 02:28:51.629508 systemd-networkd[1392]: lxc_health: Link UP
Feb 13 02:28:51.648345 systemd-networkd[1392]: lxc_health: Gained carrier
Feb 13 02:28:51.648505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 13 02:28:52.789595 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Feb 13 02:28:57.168302 sshd[5136]: pam_unix(sshd:session): session closed for user core
Feb 13 02:28:57.169816 systemd[1]: sshd@30-136.144.54.113:22-139.178.68.195:45010.service: Deactivated successfully.
Feb 13 02:28:57.170423 systemd-logind[1529]: Session 27 logged out. Waiting for processes to exit.
Feb 13 02:28:57.170463 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 02:28:57.170936 systemd-logind[1529]: Removed session 27.