Feb 9 09:54:41.552577 kernel: microcode: microcode updated early to revision 0xf4, date = 2022-07-31 Feb 9 09:54:41.552590 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 9 09:54:41.552597 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:54:41.552602 kernel: BIOS-provided physical RAM map: Feb 9 09:54:41.552605 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 9 09:54:41.552609 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 9 09:54:41.552614 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 9 09:54:41.552618 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 9 09:54:41.552622 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 9 09:54:41.552626 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000062034fff] usable Feb 9 09:54:41.552630 kernel: BIOS-e820: [mem 0x0000000062035000-0x0000000062035fff] ACPI NVS Feb 9 09:54:41.552634 kernel: BIOS-e820: [mem 0x0000000062036000-0x0000000062036fff] reserved Feb 9 09:54:41.552637 kernel: BIOS-e820: [mem 0x0000000062037000-0x000000006c0c4fff] usable Feb 9 09:54:41.552641 kernel: BIOS-e820: [mem 0x000000006c0c5000-0x000000006d1a7fff] reserved Feb 9 09:54:41.552646 kernel: BIOS-e820: [mem 0x000000006d1a8000-0x000000006d330fff] usable Feb 9 09:54:41.552651 kernel: BIOS-e820: [mem 0x000000006d331000-0x000000006d762fff] ACPI NVS Feb 9 09:54:41.552656 kernel: BIOS-e820: [mem 0x000000006d763000-0x000000006fffefff] reserved Feb 9 09:54:41.552660 kernel: BIOS-e820: [mem 0x000000006ffff000-0x000000006fffffff] usable Feb 9 09:54:41.552664 kernel: BIOS-e820: [mem 0x0000000070000000-0x000000007b7fffff] reserved Feb 9 09:54:41.552668 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 9 09:54:41.552672 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 9 09:54:41.552676 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 9 09:54:41.552680 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 9 09:54:41.552684 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 9 09:54:41.552688 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000008837fffff] usable Feb 9 09:54:41.552693 kernel: NX (Execute Disable) protection: active Feb 9 09:54:41.552698 kernel: SMBIOS 3.2.1 present. 
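[Editor's note: the BIOS-e820 entries above are the firmware's physical RAM map, in the fixed format "BIOS-e820: [mem 0xSTART-0xEND] type". As a hedged illustration (not part of the log), a minimal Python sketch that parses lines in that format from dmesg text and totals the usable ranges:]

    import re

    # Matches the firmware map lines above, e.g.
    # "BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable"
    E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all ranges the firmware marked 'usable'."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
        return total

[The dominant usable range on this machine is 0x100000000-0x8837fffff, i.e. roughly 30 GiB above the 4 GiB boundary.]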
Feb 9 09:54:41.552702 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 1.5 11/17/2020 Feb 9 09:54:41.552706 kernel: tsc: Detected 3400.000 MHz processor Feb 9 09:54:41.552710 kernel: tsc: Detected 3399.906 MHz TSC Feb 9 09:54:41.552714 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 09:54:41.552719 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 09:54:41.552724 kernel: last_pfn = 0x883800 max_arch_pfn = 0x400000000 Feb 9 09:54:41.552728 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 09:54:41.552732 kernel: last_pfn = 0x70000 max_arch_pfn = 0x400000000 Feb 9 09:54:41.552737 kernel: Using GB pages for direct mapping Feb 9 09:54:41.552742 kernel: ACPI: Early table checksum verification disabled Feb 9 09:54:41.552746 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 9 09:54:41.552750 kernel: ACPI: XSDT 0x000000006D6440C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 9 09:54:41.552755 kernel: ACPI: FACP 0x000000006D680620 000114 (v06 01072009 AMI 00010013) Feb 9 09:54:41.552761 kernel: ACPI: DSDT 0x000000006D644268 03C3B7 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 9 09:54:41.552765 kernel: ACPI: FACS 0x000000006D762F80 000040 Feb 9 09:54:41.552771 kernel: ACPI: APIC 0x000000006D680738 00012C (v04 01072009 AMI 00010013) Feb 9 09:54:41.552776 kernel: ACPI: FPDT 0x000000006D680868 000044 (v01 01072009 AMI 00010013) Feb 9 09:54:41.552780 kernel: ACPI: FIDT 0x000000006D6808B0 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 9 09:54:41.552785 kernel: ACPI: MCFG 0x000000006D680950 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 9 09:54:41.552789 kernel: ACPI: SPMI 0x000000006D680990 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 9 09:54:41.552794 kernel: ACPI: SSDT 0x000000006D6809D8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 9 09:54:41.552799 kernel: ACPI: SSDT 0x000000006D6824F8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 9 09:54:41.552804 kernel: ACPI: SSDT 0x000000006D6856C0 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 9 09:54:41.552809 kernel: ACPI: HPET 0x000000006D6879F0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:54:41.552813 kernel: ACPI: SSDT 0x000000006D687A28 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 9 09:54:41.552818 kernel: ACPI: SSDT 0x000000006D6889D8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 9 09:54:41.552823 kernel: ACPI: UEFI 0x000000006D6892D0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:54:41.552827 kernel: ACPI: LPIT 0x000000006D689318 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:54:41.552832 kernel: ACPI: SSDT 0x000000006D6893B0 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 9 09:54:41.552837 kernel: ACPI: SSDT 0x000000006D68BB90 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 9 09:54:41.552841 kernel: ACPI: DBGP 0x000000006D68D078 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:54:41.552847 kernel: ACPI: DBG2 0x000000006D68D0B0 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 9 09:54:41.552851 kernel: ACPI: SSDT 0x000000006D68D108 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 9 09:54:41.552856 kernel: ACPI: DMAR 0x000000006D68EC70 0000A8 (v01 INTEL EDK2 00000002 01000013) Feb 9 09:54:41.552860 kernel: ACPI: SSDT 0x000000006D68ED18 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 9 09:54:41.552865 kernel: ACPI: TPM2 0x000000006D68EE60 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 9 09:54:41.552870 kernel: ACPI: SSDT 
0x000000006D68EE98 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 9 09:54:41.552875 kernel: ACPI: WSMT 0x000000006D68FC28 000028 (v01 \xfca 01072009 AMI 00010013) Feb 9 09:54:41.552879 kernel: ACPI: EINJ 0x000000006D68FC50 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 9 09:54:41.552885 kernel: ACPI: ERST 0x000000006D68FD80 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 9 09:54:41.552889 kernel: ACPI: BERT 0x000000006D68FFB0 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 9 09:54:41.552894 kernel: ACPI: HEST 0x000000006D68FFE0 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 9 09:54:41.552899 kernel: ACPI: SSDT 0x000000006D690260 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 9 09:54:41.552903 kernel: ACPI: Reserving FACP table memory at [mem 0x6d680620-0x6d680733] Feb 9 09:54:41.552908 kernel: ACPI: Reserving DSDT table memory at [mem 0x6d644268-0x6d68061e] Feb 9 09:54:41.552912 kernel: ACPI: Reserving FACS table memory at [mem 0x6d762f80-0x6d762fbf] Feb 9 09:54:41.552917 kernel: ACPI: Reserving APIC table memory at [mem 0x6d680738-0x6d680863] Feb 9 09:54:41.552922 kernel: ACPI: Reserving FPDT table memory at [mem 0x6d680868-0x6d6808ab] Feb 9 09:54:41.552927 kernel: ACPI: Reserving FIDT table memory at [mem 0x6d6808b0-0x6d68094b] Feb 9 09:54:41.552932 kernel: ACPI: Reserving MCFG table memory at [mem 0x6d680950-0x6d68098b] Feb 9 09:54:41.552936 kernel: ACPI: Reserving SPMI table memory at [mem 0x6d680990-0x6d6809d0] Feb 9 09:54:41.552941 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6809d8-0x6d6824f3] Feb 9 09:54:41.552946 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6824f8-0x6d6856bd] Feb 9 09:54:41.552950 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6856c0-0x6d6879ea] Feb 9 09:54:41.552955 kernel: ACPI: Reserving HPET table memory at [mem 0x6d6879f0-0x6d687a27] Feb 9 09:54:41.552959 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d687a28-0x6d6889d5] Feb 9 09:54:41.552964 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6889d8-0x6d6892ce] Feb 9 09:54:41.552969 kernel: ACPI: Reserving UEFI table memory at [mem 0x6d6892d0-0x6d689311] Feb 9 09:54:41.552974 kernel: ACPI: Reserving LPIT table memory at [mem 0x6d689318-0x6d6893ab] Feb 9 09:54:41.552978 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d6893b0-0x6d68bb8d] Feb 9 09:54:41.552983 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68bb90-0x6d68d071] Feb 9 09:54:41.552987 kernel: ACPI: Reserving DBGP table memory at [mem 0x6d68d078-0x6d68d0ab] Feb 9 09:54:41.552992 kernel: ACPI: Reserving DBG2 table memory at [mem 0x6d68d0b0-0x6d68d103] Feb 9 09:54:41.552997 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68d108-0x6d68ec6e] Feb 9 09:54:41.553001 kernel: ACPI: Reserving DMAR table memory at [mem 0x6d68ec70-0x6d68ed17] Feb 9 09:54:41.553006 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ed18-0x6d68ee5b] Feb 9 09:54:41.553011 kernel: ACPI: Reserving TPM2 table memory at [mem 0x6d68ee60-0x6d68ee93] Feb 9 09:54:41.553016 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d68ee98-0x6d68fc26] Feb 9 09:54:41.553020 kernel: ACPI: Reserving WSMT table memory at [mem 0x6d68fc28-0x6d68fc4f] Feb 9 09:54:41.553025 kernel: ACPI: Reserving EINJ table memory at [mem 0x6d68fc50-0x6d68fd7f] Feb 9 09:54:41.553030 kernel: ACPI: Reserving ERST table memory at [mem 0x6d68fd80-0x6d68ffaf] Feb 9 09:54:41.553034 kernel: ACPI: Reserving BERT table memory at [mem 0x6d68ffb0-0x6d68ffdf] Feb 9 09:54:41.553039 kernel: ACPI: Reserving HEST table memory at [mem 
0x6d68ffe0-0x6d69025b] Feb 9 09:54:41.553043 kernel: ACPI: Reserving SSDT table memory at [mem 0x6d690260-0x6d6903c1] Feb 9 09:54:41.553048 kernel: No NUMA configuration found Feb 9 09:54:41.553053 kernel: Faking a node at [mem 0x0000000000000000-0x00000008837fffff] Feb 9 09:54:41.553058 kernel: NODE_DATA(0) allocated [mem 0x8837fa000-0x8837fffff] Feb 9 09:54:41.553063 kernel: Zone ranges: Feb 9 09:54:41.553068 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 09:54:41.553072 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 09:54:41.553077 kernel: Normal [mem 0x0000000100000000-0x00000008837fffff] Feb 9 09:54:41.553081 kernel: Movable zone start for each node Feb 9 09:54:41.553086 kernel: Early memory node ranges Feb 9 09:54:41.553091 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 9 09:54:41.553095 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 9 09:54:41.553101 kernel: node 0: [mem 0x0000000040400000-0x0000000062034fff] Feb 9 09:54:41.553105 kernel: node 0: [mem 0x0000000062037000-0x000000006c0c4fff] Feb 9 09:54:41.553110 kernel: node 0: [mem 0x000000006d1a8000-0x000000006d330fff] Feb 9 09:54:41.553114 kernel: node 0: [mem 0x000000006ffff000-0x000000006fffffff] Feb 9 09:54:41.553119 kernel: node 0: [mem 0x0000000100000000-0x00000008837fffff] Feb 9 09:54:41.553124 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000008837fffff] Feb 9 09:54:41.553133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 09:54:41.553138 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 9 09:54:41.553143 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 9 09:54:41.553148 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 9 09:54:41.553154 kernel: On node 0, zone DMA32: 4323 pages in unavailable ranges Feb 9 09:54:41.553159 kernel: On node 0, zone DMA32: 11470 pages in unavailable ranges Feb 9 09:54:41.553164 kernel: On node 0, zone Normal: 18432 pages in unavailable ranges Feb 9 09:54:41.553169 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 9 09:54:41.553174 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 9 09:54:41.553179 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 9 09:54:41.553185 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 9 09:54:41.553190 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 9 09:54:41.553194 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 9 09:54:41.553199 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 9 09:54:41.553204 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 9 09:54:41.553209 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 9 09:54:41.553214 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 9 09:54:41.553219 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 9 09:54:41.553224 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 9 09:54:41.553230 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 9 09:54:41.553234 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 9 09:54:41.553239 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 9 09:54:41.553244 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 9 09:54:41.553249 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Feb 9 09:54:41.553254 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 9 09:54:41.553262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Feb 9 09:54:41.553267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 09:54:41.553272 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 09:54:41.553278 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 09:54:41.553283 kernel: TSC deadline timer available Feb 9 09:54:41.553288 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 9 09:54:41.553293 kernel: [mem 0x7b800000-0xdfffffff] available for PCI devices Feb 9 09:54:41.553298 kernel: Booting paravirtualized kernel on bare hardware Feb 9 09:54:41.553303 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 09:54:41.553321 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 9 09:54:41.553326 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 9 09:54:41.553331 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 9 09:54:41.553336 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 9 09:54:41.553341 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8190323 Feb 9 09:54:41.553346 kernel: Policy zone: Normal Feb 9 09:54:41.553352 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 9 09:54:41.553357 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 09:54:41.553361 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 9 09:54:41.553366 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 9 09:54:41.553371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 09:54:41.553377 kernel: Memory: 32555728K/33281940K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 725952K reserved, 0K cma-reserved) Feb 9 09:54:41.553382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 9 09:54:41.553387 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 09:54:41.553392 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 09:54:41.553397 kernel: rcu: Hierarchical RCU implementation. Feb 9 09:54:41.553401 kernel: rcu: RCU event tracing is enabled. Feb 9 09:54:41.553406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 9 09:54:41.553411 kernel: Rude variant of Tasks RCU enabled. Feb 9 09:54:41.553416 kernel: Tracing variant of Tasks RCU enabled. Feb 9 09:54:41.553422 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
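[Editor's note: a rough cross-check of the allocator figures above, assuming 4 KiB base pages on x86-64. "Total pages: 8190323" should land just under the "Memory: 32555728K/33281940K available" line in the same stretch of the log, the gap being firmware-reserved holes from the e820 map:]

    # 8190323 pages x 4 KiB per page, expressed in GiB
    total_pages = 8_190_323
    print(f"{total_pages * 4096 / 2**30:.2f} GiB")  # ~31.24 GiB on a 32 GiB machine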
Feb 9 09:54:41.553427 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 9 09:54:41.553432 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 9 09:54:41.553437 kernel: random: crng init done Feb 9 09:54:41.553441 kernel: Console: colour dummy device 80x25 Feb 9 09:54:41.553446 kernel: printk: console [tty0] enabled Feb 9 09:54:41.553451 kernel: printk: console [ttyS1] enabled Feb 9 09:54:41.553456 kernel: ACPI: Core revision 20210730 Feb 9 09:54:41.553461 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Feb 9 09:54:41.553466 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 09:54:41.553471 kernel: DMAR: Host address width 39 Feb 9 09:54:41.553476 kernel: DMAR: DRHD base: 0x000000fed90000 flags: 0x0 Feb 9 09:54:41.553481 kernel: DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 19e2ff0505e Feb 9 09:54:41.553486 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 9 09:54:41.553491 kernel: DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 9 09:54:41.553496 kernel: DMAR: RMRR base: 0x0000006e011000 end: 0x0000006e25afff Feb 9 09:54:41.553500 kernel: DMAR: RMRR base: 0x00000079000000 end: 0x0000007b7fffff Feb 9 09:54:41.553505 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1 Feb 9 09:54:41.553511 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 9 09:54:41.553516 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 9 09:54:41.553520 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 9 09:54:41.553525 kernel: x2apic enabled Feb 9 09:54:41.553530 kernel: Switched APIC routing to cluster x2apic. Feb 9 09:54:41.553535 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 9 09:54:41.553540 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 9 09:54:41.553545 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 9 09:54:41.553550 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 9 09:54:41.553555 kernel: process: using mwait in idle threads Feb 9 09:54:41.553560 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 09:54:41.553565 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 09:54:41.553570 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 09:54:41.553575 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:54:41.553580 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 9 09:54:41.553584 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 09:54:41.553589 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 9 09:54:41.553594 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 9 09:54:41.553600 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 9 09:54:41.553605 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 9 09:54:41.553609 kernel: TAA: Mitigation: TSX disabled Feb 9 09:54:41.553614 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 9 09:54:41.553619 kernel: SRBDS: Mitigation: Microcode Feb 9 09:54:41.553624 kernel: GDS: Vulnerable: No microcode Feb 9 09:54:41.553629 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 09:54:41.553634 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 09:54:41.553638 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 09:54:41.553644 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 09:54:41.553649 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 09:54:41.553654 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 09:54:41.553659 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 09:54:41.553663 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 09:54:41.553668 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 9 09:54:41.553673 kernel: Freeing SMP alternatives memory: 32K Feb 9 09:54:41.553678 kernel: pid_max: default: 32768 minimum: 301 Feb 9 09:54:41.553683 kernel: LSM: Security Framework initializing Feb 9 09:54:41.553688 kernel: SELinux: Initializing. Feb 9 09:54:41.553693 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:54:41.553698 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 09:54:41.553703 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 9 09:54:41.553708 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 9 09:54:41.553713 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 9 09:54:41.553717 kernel: ... version: 4 Feb 9 09:54:41.553722 kernel: ... bit width: 48 Feb 9 09:54:41.553727 kernel: ... generic registers: 4 Feb 9 09:54:41.553733 kernel: ... value mask: 0000ffffffffffff Feb 9 09:54:41.553738 kernel: ... max period: 00007fffffffffff Feb 9 09:54:41.553742 kernel: ... fixed-purpose events: 3 Feb 9 09:54:41.553747 kernel: ... event mask: 000000070000000f Feb 9 09:54:41.553752 kernel: signal: max sigframe size: 2032 Feb 9 09:54:41.553757 kernel: rcu: Hierarchical SRCU implementation. Feb 9 09:54:41.553762 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 9 09:54:41.553766 kernel: smp: Bringing up secondary CPUs ... Feb 9 09:54:41.553771 kernel: x86: Booting SMP configuration: Feb 9 09:54:41.553777 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 9 09:54:41.553782 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
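[Editor's note: the mitigation lines above (Spectre V1/V2, TAA, MMIO Stale Data, SRBDS, GDS) are also exported at runtime through sysfs. A minimal sketch, assuming the standard Linux vulnerabilities directory; file names and contents vary by kernel version:]

    from pathlib import Path

    # One file per mitigation reported above (spectre_v1, spectre_v2,
    # tsx_async_abort, mmio_stale_data, srbds, ...); contents mirror dmesg.
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")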
Feb 9 09:54:41.553787 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 9 09:54:41.553792 kernel: smp: Brought up 1 node, 16 CPUs Feb 9 09:54:41.553797 kernel: smpboot: Max logical packages: 1 Feb 9 09:54:41.553801 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 9 09:54:41.553806 kernel: devtmpfs: initialized Feb 9 09:54:41.553811 kernel: x86/mm: Memory block size: 128MB Feb 9 09:54:41.553816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x62035000-0x62035fff] (4096 bytes) Feb 9 09:54:41.553822 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x6d331000-0x6d762fff] (4399104 bytes) Feb 9 09:54:41.553827 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 09:54:41.553832 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 9 09:54:41.553837 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 09:54:41.553841 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 09:54:41.553846 kernel: audit: initializing netlink subsys (disabled) Feb 9 09:54:41.553851 kernel: audit: type=2000 audit(1707472476.110:1): state=initialized audit_enabled=0 res=1 Feb 9 09:54:41.553856 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 09:54:41.553862 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 09:54:41.553867 kernel: cpuidle: using governor menu Feb 9 09:54:41.553871 kernel: ACPI: bus type PCI registered Feb 9 09:54:41.553876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 09:54:41.553881 kernel: dca service started, version 1.12.1 Feb 9 09:54:41.553886 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 9 09:54:41.553891 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 9 09:54:41.553896 kernel: PCI: Using configuration type 1 for base access Feb 9 09:54:41.553901 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 9 09:54:41.553906 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
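[Editor's note: the MMCONFIG lines above reserve the ECAM window at 0xe0000000 for PCI config access. From userspace the same config space is exposed per device in sysfs; a sketch reading the vendor/device ID of the host bridge, which the enumeration later in this log reports as 8086:3e31:]

    from pathlib import Path

    # First 4 bytes of PCI config space are the little-endian vendor and
    # device IDs; /sys exposes the whole config region as a binary file.
    cfg = Path("/sys/bus/pci/devices/0000:00:00.0/config").read_bytes()
    vendor = int.from_bytes(cfg[0:2], "little")
    device = int.from_bytes(cfg[2:4], "little")
    print(f"{vendor:04x}:{device:04x}")  # expect 8086:3e31 per this log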
Feb 9 09:54:41.553911 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 09:54:41.553916 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 09:54:41.553921 kernel: ACPI: Added _OSI(Module Device) Feb 9 09:54:41.553925 kernel: ACPI: Added _OSI(Processor Device) Feb 9 09:54:41.553930 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 09:54:41.553935 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 09:54:41.553940 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 09:54:41.553945 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 09:54:41.553950 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 09:54:41.553955 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 9 09:54:41.553960 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.553965 kernel: ACPI: SSDT 0xFFFF95BB40215600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 9 09:54:41.553970 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 9 09:54:41.553975 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.553980 kernel: ACPI: SSDT 0xFFFF95BB41CEB000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 9 09:54:41.553984 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.553989 kernel: ACPI: SSDT 0xFFFF95BB41C5C000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 9 09:54:41.553994 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.553999 kernel: ACPI: SSDT 0xFFFF95BB41C59800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 9 09:54:41.554004 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.554009 kernel: ACPI: SSDT 0xFFFF95BB40148000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 9 09:54:41.554014 kernel: ACPI: Dynamic OEM Table Load: Feb 9 09:54:41.554018 kernel: ACPI: SSDT 0xFFFF95BB41CE9C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 9 09:54:41.554023 kernel: ACPI: Interpreter enabled Feb 9 09:54:41.554028 kernel: ACPI: PM: (supports S0 S5) Feb 9 09:54:41.554033 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 09:54:41.554038 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 9 09:54:41.554043 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 9 09:54:41.554048 kernel: HEST: Table parsing has been initialized. Feb 9 09:54:41.554053 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
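[Editor's note: the ACPI tables enumerated during early boot (DSDT, the various SSDTs, DMAR, TPM2, and the dynamically loaded PmRef SSDTs above) can be listed after boot from sysfs. A sketch, assuming the standard /sys/firmware/acpi layout; reading table contents requires root:]

    import os

    # One file per table the kernel found, matching the "ACPI: Reserving
    # ... table memory" lines earlier in this log.
    TABLES = "/sys/firmware/acpi/tables"

    for name in sorted(os.listdir(TABLES)):
        path = os.path.join(TABLES, name)
        if os.path.isfile(path):
            print(name, os.path.getsize(path), "bytes")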
Feb 9 09:54:41.554058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 09:54:41.554063 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 9 09:54:41.554068 kernel: ACPI: PM: Power Resource [USBC] Feb 9 09:54:41.554073 kernel: ACPI: PM: Power Resource [V0PR] Feb 9 09:54:41.554077 kernel: ACPI: PM: Power Resource [V1PR] Feb 9 09:54:41.554082 kernel: ACPI: PM: Power Resource [V2PR] Feb 9 09:54:41.554088 kernel: ACPI: PM: Power Resource [WRST] Feb 9 09:54:41.554093 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 9 09:54:41.554097 kernel: ACPI: PM: Power Resource [FN00] Feb 9 09:54:41.554102 kernel: ACPI: PM: Power Resource [FN01] Feb 9 09:54:41.554107 kernel: ACPI: PM: Power Resource [FN02] Feb 9 09:54:41.554112 kernel: ACPI: PM: Power Resource [FN03] Feb 9 09:54:41.554116 kernel: ACPI: PM: Power Resource [FN04] Feb 9 09:54:41.554121 kernel: ACPI: PM: Power Resource [PIN] Feb 9 09:54:41.554126 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 9 09:54:41.554189 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 09:54:41.554236 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 9 09:54:41.554295 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 9 09:54:41.554302 kernel: PCI host bridge to bus 0000:00 Feb 9 09:54:41.554349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 09:54:41.554387 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 09:54:41.554424 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 09:54:41.554462 kernel: pci_bus 0000:00: root bus resource [mem 0x7b800000-0xdfffffff window] Feb 9 09:54:41.554499 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 9 09:54:41.554534 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 9 09:54:41.554584 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 9 09:54:41.554632 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 9 09:54:41.554675 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.554725 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Feb 9 09:54:41.554767 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.554812 kernel: pci 0000:00:02.0: [8086:3e9a] type 00 class 0x038000 Feb 9 09:54:41.554855 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x7c000000-0x7cffffff 64bit] Feb 9 09:54:41.554896 kernel: pci 0000:00:02.0: reg 0x18: [mem 0x80000000-0x8fffffff 64bit pref] Feb 9 09:54:41.554937 kernel: pci 0000:00:02.0: reg 0x20: [io 0x6000-0x603f] Feb 9 09:54:41.554988 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 9 09:54:41.555032 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x7e51f000-0x7e51ffff 64bit] Feb 9 09:54:41.555076 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 9 09:54:41.555118 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x7e51e000-0x7e51efff 64bit] Feb 9 09:54:41.555162 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 9 09:54:41.555203 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x7e500000-0x7e50ffff 64bit] Feb 9 09:54:41.555247 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 9 09:54:41.555293 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 9 09:54:41.555336 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x7e512000-0x7e513fff 64bit] Feb 9 09:54:41.555376 kernel: pci 
0000:00:14.2: reg 0x18: [mem 0x7e51d000-0x7e51dfff 64bit] Feb 9 09:54:41.555421 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 9 09:54:41.555462 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:54:41.555505 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 9 09:54:41.555549 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:54:41.555595 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 9 09:54:41.555637 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x7e51a000-0x7e51afff 64bit] Feb 9 09:54:41.555679 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 9 09:54:41.555731 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 9 09:54:41.555775 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x7e519000-0x7e519fff 64bit] Feb 9 09:54:41.555817 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 9 09:54:41.555862 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 9 09:54:41.555903 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x7e518000-0x7e518fff 64bit] Feb 9 09:54:41.555945 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 9 09:54:41.555989 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 9 09:54:41.556031 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x7e510000-0x7e511fff] Feb 9 09:54:41.556071 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x7e517000-0x7e5170ff] Feb 9 09:54:41.556114 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6090-0x6097] Feb 9 09:54:41.556155 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6080-0x6083] Feb 9 09:54:41.556195 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6060-0x607f] Feb 9 09:54:41.556236 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x7e516000-0x7e5167ff] Feb 9 09:54:41.556279 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 9 09:54:41.556326 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 9 09:54:41.556369 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.556418 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 9 09:54:41.556459 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.556507 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 9 09:54:41.556552 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.556597 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 9 09:54:41.556640 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.556685 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Feb 9 09:54:41.556727 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.556772 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 9 09:54:41.556816 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 9 09:54:41.556863 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 9 09:54:41.556908 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 9 09:54:41.556951 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x7e514000-0x7e5140ff 64bit] Feb 9 09:54:41.556992 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 9 09:54:41.557039 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 9 09:54:41.557081 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 9 09:54:41.557125 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 09:54:41.557174 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Feb 9 09:54:41.557218 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 
64bit pref] Feb 9 09:54:41.557263 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x7e200000-0x7e2fffff pref] Feb 9 09:54:41.557306 kernel: pci 0000:02:00.0: PME# supported from D3cold Feb 9 09:54:41.557349 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 09:54:41.557391 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 09:54:41.557441 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Feb 9 09:54:41.557484 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Feb 9 09:54:41.557529 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x7e100000-0x7e1fffff pref] Feb 9 09:54:41.557571 kernel: pci 0000:02:00.1: PME# supported from D3cold Feb 9 09:54:41.557614 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 9 09:54:41.557656 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 9 09:54:41.557698 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 09:54:41.557741 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 9 09:54:41.557783 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:54:41.557825 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 09:54:41.557871 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 9 09:54:41.557966 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x7e400000-0x7e47ffff] Feb 9 09:54:41.558009 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Feb 9 09:54:41.558053 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x7e480000-0x7e483fff] Feb 9 09:54:41.558095 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.558140 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 09:54:41.558181 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 09:54:41.558223 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 09:54:41.558272 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Feb 9 09:54:41.558336 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x7e300000-0x7e37ffff] Feb 9 09:54:41.558379 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Feb 9 09:54:41.558421 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x7e380000-0x7e383fff] Feb 9 09:54:41.558465 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Feb 9 09:54:41.558505 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 09:54:41.558547 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 09:54:41.558587 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 09:54:41.558628 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 09:54:41.558674 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Feb 9 09:54:41.558716 kernel: pci 0000:07:00.0: enabling Extended Tags Feb 9 09:54:41.558758 kernel: pci 0000:07:00.0: supports D1 D2 Feb 9 09:54:41.558802 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 09:54:41.558843 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 09:54:41.558883 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 09:54:41.558924 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.558971 kernel: pci_bus 0000:08: extended config space not accessible Feb 9 09:54:41.559021 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Feb 9 09:54:41.559066 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x7d000000-0x7dffffff] Feb 9 09:54:41.559113 kernel: pci 0000:08:00.0: 
reg 0x14: [mem 0x7e000000-0x7e01ffff] Feb 9 09:54:41.559156 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Feb 9 09:54:41.559202 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 09:54:41.559245 kernel: pci 0000:08:00.0: supports D1 D2 Feb 9 09:54:41.559336 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 09:54:41.559379 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 09:54:41.559422 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 09:54:41.559467 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.559474 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 9 09:54:41.559480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 9 09:54:41.559485 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 9 09:54:41.559490 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 9 09:54:41.559495 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 9 09:54:41.559501 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 9 09:54:41.559506 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 9 09:54:41.559511 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 9 09:54:41.559518 kernel: iommu: Default domain type: Translated Feb 9 09:54:41.559523 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 09:54:41.559568 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Feb 9 09:54:41.559612 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 09:54:41.559656 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Feb 9 09:54:41.559664 kernel: vgaarb: loaded Feb 9 09:54:41.559669 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 09:54:41.559674 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 09:54:41.559679 kernel: PTP clock support registered Feb 9 09:54:41.559686 kernel: PCI: Using ACPI for IRQ routing Feb 9 09:54:41.559691 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 09:54:41.559696 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 9 09:54:41.559701 kernel: e820: reserve RAM buffer [mem 0x62035000-0x63ffffff] Feb 9 09:54:41.559706 kernel: e820: reserve RAM buffer [mem 0x6c0c5000-0x6fffffff] Feb 9 09:54:41.559712 kernel: e820: reserve RAM buffer [mem 0x6d331000-0x6fffffff] Feb 9 09:54:41.559717 kernel: e820: reserve RAM buffer [mem 0x883800000-0x883ffffff] Feb 9 09:54:41.559722 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 09:54:41.559727 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Feb 9 09:54:41.559733 kernel: clocksource: Switched to clocksource tsc-early Feb 9 09:54:41.559738 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 09:54:41.559743 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 09:54:41.559749 kernel: pnp: PnP ACPI init Feb 9 09:54:41.559792 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 9 09:54:41.559834 kernel: pnp 00:02: [dma 0 disabled] Feb 9 09:54:41.559875 kernel: pnp 00:03: [dma 0 disabled] Feb 9 09:54:41.559917 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 9 09:54:41.559955 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 9 09:54:41.559997 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 9 09:54:41.560037 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 9 09:54:41.560075 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 9 09:54:41.560114 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 9 09:54:41.560150 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 9 09:54:41.560189 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 9 09:54:41.560225 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 9 09:54:41.560264 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 9 09:54:41.560341 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Feb 9 09:54:41.560380 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 9 09:54:41.560417 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 9 09:54:41.560457 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 9 09:54:41.560494 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 9 09:54:41.560530 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 9 09:54:41.560567 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 9 09:54:41.560604 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 9 09:54:41.560643 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 9 09:54:41.560651 kernel: pnp: PnP ACPI: found 10 devices Feb 9 09:54:41.560656 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 09:54:41.560663 kernel: NET: Registered PF_INET protocol family Feb 9 09:54:41.560668 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:54:41.560673 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 09:54:41.560679 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 
9 09:54:41.560684 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 09:54:41.560689 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 09:54:41.560694 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 9 09:54:41.560699 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 09:54:41.560706 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 9 09:54:41.560711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 09:54:41.560716 kernel: NET: Registered PF_XDP protocol family Feb 9 09:54:41.560757 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x7b800000-0x7b800fff 64bit] Feb 9 09:54:41.560798 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x7b801000-0x7b801fff 64bit] Feb 9 09:54:41.560839 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x7b802000-0x7b802fff 64bit] Feb 9 09:54:41.560881 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 09:54:41.560924 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 09:54:41.560967 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 09:54:41.561013 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 9 09:54:41.561055 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 9 09:54:41.561097 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Feb 9 09:54:41.561140 kernel: pci 0000:00:01.1: bridge window [mem 0x7e100000-0x7e2fffff] Feb 9 09:54:41.561182 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:54:41.561225 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Feb 9 09:54:41.561268 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Feb 9 09:54:41.561356 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 9 09:54:41.561397 kernel: pci 0000:00:1b.4: bridge window [mem 0x7e400000-0x7e4fffff] Feb 9 09:54:41.561439 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Feb 9 09:54:41.561480 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 9 09:54:41.561521 kernel: pci 0000:00:1b.5: bridge window [mem 0x7e300000-0x7e3fffff] Feb 9 09:54:41.561562 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Feb 9 09:54:41.561606 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Feb 9 09:54:41.561649 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Feb 9 09:54:41.561691 kernel: pci 0000:07:00.0: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.561734 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Feb 9 09:54:41.561776 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Feb 9 09:54:41.561818 kernel: pci 0000:00:1c.1: bridge window [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.561856 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 9 09:54:41.561892 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 09:54:41.561932 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 09:54:41.561967 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 09:54:41.562003 kernel: pci_bus 0000:00: resource 7 [mem 0x7b800000-0xdfffffff window] Feb 9 09:54:41.562039 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 9 09:54:41.562081 kernel: pci_bus 0000:02: resource 1 [mem 0x7e100000-0x7e2fffff] Feb 9 09:54:41.562120 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 9 09:54:41.562163 
kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Feb 9 09:54:41.562203 kernel: pci_bus 0000:04: resource 1 [mem 0x7e400000-0x7e4fffff] Feb 9 09:54:41.562247 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 9 09:54:41.562325 kernel: pci_bus 0000:05: resource 1 [mem 0x7e300000-0x7e3fffff] Feb 9 09:54:41.562367 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 9 09:54:41.562406 kernel: pci_bus 0000:07: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.562446 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Feb 9 09:54:41.562486 kernel: pci_bus 0000:08: resource 1 [mem 0x7d000000-0x7e0fffff] Feb 9 09:54:41.562494 kernel: PCI: CLS 64 bytes, default 64 Feb 9 09:54:41.562500 kernel: DMAR: No ATSR found Feb 9 09:54:41.562505 kernel: DMAR: No SATC found Feb 9 09:54:41.562510 kernel: DMAR: IOMMU feature fl1gp_support inconsistent Feb 9 09:54:41.562515 kernel: DMAR: IOMMU feature pgsel_inv inconsistent Feb 9 09:54:41.562521 kernel: DMAR: IOMMU feature nwfs inconsistent Feb 9 09:54:41.562526 kernel: DMAR: IOMMU feature pasid inconsistent Feb 9 09:54:41.562531 kernel: DMAR: IOMMU feature eafs inconsistent Feb 9 09:54:41.562536 kernel: DMAR: IOMMU feature prs inconsistent Feb 9 09:54:41.562542 kernel: DMAR: IOMMU feature nest inconsistent Feb 9 09:54:41.562547 kernel: DMAR: IOMMU feature mts inconsistent Feb 9 09:54:41.562552 kernel: DMAR: IOMMU feature sc_support inconsistent Feb 9 09:54:41.562558 kernel: DMAR: IOMMU feature dev_iotlb_support inconsistent Feb 9 09:54:41.562563 kernel: DMAR: dmar0: Using Queued invalidation Feb 9 09:54:41.562568 kernel: DMAR: dmar1: Using Queued invalidation Feb 9 09:54:41.562610 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 9 09:54:41.562652 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 9 09:54:41.562696 kernel: pci 0000:00:01.1: Adding to iommu group 1 Feb 9 09:54:41.562738 kernel: pci 0000:00:02.0: Adding to iommu group 2 Feb 9 09:54:41.562780 kernel: pci 0000:00:08.0: Adding to iommu group 3 Feb 9 09:54:41.562821 kernel: pci 0000:00:12.0: Adding to iommu group 4 Feb 9 09:54:41.562862 kernel: pci 0000:00:14.0: Adding to iommu group 5 Feb 9 09:54:41.562903 kernel: pci 0000:00:14.2: Adding to iommu group 5 Feb 9 09:54:41.562943 kernel: pci 0000:00:15.0: Adding to iommu group 6 Feb 9 09:54:41.562984 kernel: pci 0000:00:15.1: Adding to iommu group 6 Feb 9 09:54:41.563024 kernel: pci 0000:00:16.0: Adding to iommu group 7 Feb 9 09:54:41.563067 kernel: pci 0000:00:16.1: Adding to iommu group 7 Feb 9 09:54:41.563108 kernel: pci 0000:00:16.4: Adding to iommu group 7 Feb 9 09:54:41.563148 kernel: pci 0000:00:17.0: Adding to iommu group 8 Feb 9 09:54:41.563190 kernel: pci 0000:00:1b.0: Adding to iommu group 9 Feb 9 09:54:41.563230 kernel: pci 0000:00:1b.4: Adding to iommu group 10 Feb 9 09:54:41.563293 kernel: pci 0000:00:1b.5: Adding to iommu group 11 Feb 9 09:54:41.563353 kernel: pci 0000:00:1c.0: Adding to iommu group 12 Feb 9 09:54:41.563395 kernel: pci 0000:00:1c.1: Adding to iommu group 13 Feb 9 09:54:41.563436 kernel: pci 0000:00:1e.0: Adding to iommu group 14 Feb 9 09:54:41.563478 kernel: pci 0000:00:1f.0: Adding to iommu group 15 Feb 9 09:54:41.563518 kernel: pci 0000:00:1f.4: Adding to iommu group 15 Feb 9 09:54:41.563559 kernel: pci 0000:00:1f.5: Adding to iommu group 15 Feb 9 09:54:41.563601 kernel: pci 0000:02:00.0: Adding to iommu group 1 Feb 9 09:54:41.563643 kernel: pci 0000:02:00.1: Adding to iommu group 1 Feb 9 09:54:41.563685 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 9 09:54:41.563729 kernel: pci 
0000:05:00.0: Adding to iommu group 17 Feb 9 09:54:41.563775 kernel: pci 0000:07:00.0: Adding to iommu group 18 Feb 9 09:54:41.563819 kernel: pci 0000:08:00.0: Adding to iommu group 18 Feb 9 09:54:41.563826 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 9 09:54:41.563832 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 09:54:41.563837 kernel: software IO TLB: mapped [mem 0x00000000680c5000-0x000000006c0c5000] (64MB) Feb 9 09:54:41.563842 kernel: RAPL PMU: API unit is 2^-32 Joules, 4 fixed counters, 655360 ms ovfl timer Feb 9 09:54:41.563848 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 9 09:54:41.563853 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 9 09:54:41.563859 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 9 09:54:41.563864 kernel: RAPL PMU: hw unit of domain pp1-gpu 2^-14 Joules Feb 9 09:54:41.563909 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 9 09:54:41.563917 kernel: Initialise system trusted keyrings Feb 9 09:54:41.563922 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 9 09:54:41.563927 kernel: Key type asymmetric registered Feb 9 09:54:41.563932 kernel: Asymmetric key parser 'x509' registered Feb 9 09:54:41.563937 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 09:54:41.563944 kernel: io scheduler mq-deadline registered Feb 9 09:54:41.563949 kernel: io scheduler kyber registered Feb 9 09:54:41.563954 kernel: io scheduler bfq registered Feb 9 09:54:41.563995 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 122 Feb 9 09:54:41.564037 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 123 Feb 9 09:54:41.564077 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 124 Feb 9 09:54:41.564119 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 125 Feb 9 09:54:41.564160 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 126 Feb 9 09:54:41.564203 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 127 Feb 9 09:54:41.564245 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 128 Feb 9 09:54:41.564336 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 9 09:54:41.564344 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 9 09:54:41.564349 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 9 09:54:41.564354 kernel: pstore: Registered erst as persistent store backend Feb 9 09:54:41.564359 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 09:54:41.564365 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 09:54:41.564371 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 09:54:41.564376 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 09:54:41.564422 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 9 09:54:41.564429 kernel: i8042: PNP: No PS/2 controller found. 
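[Editor's note: the "Adding to iommu group N" lines above have a direct sysfs mirror: each group directory under /sys/kernel/iommu_groups holds symlinks to its member PCI devices. A minimal sketch listing them:]

    from pathlib import Path

    GROUPS = Path("/sys/kernel/iommu_groups")

    # Group directories are named by integer; each has a "devices" subdir
    # whose entries are the PCI addresses seen in the log lines above.
    for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
        devices = [d.name for d in (group / "devices").iterdir()]
        print(f"group {group.name}: {', '.join(devices)}")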
Feb 9 09:54:41.564467 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 9 09:54:41.564505 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 9 09:54:41.564542 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-09T09:54:40 UTC (1707472480) Feb 9 09:54:41.564580 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 9 09:54:41.564589 kernel: fail to initialize ptp_kvm Feb 9 09:54:41.564594 kernel: intel_pstate: Intel P-state driver initializing Feb 9 09:54:41.564599 kernel: intel_pstate: Disabling energy efficiency optimization Feb 9 09:54:41.564604 kernel: intel_pstate: HWP enabled Feb 9 09:54:41.564610 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 9 09:54:41.564615 kernel: vesafb: scrolling: redraw Feb 9 09:54:41.564620 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 9 09:54:41.564625 kernel: vesafb: framebuffer at 0x7d000000, mapped to 0x00000000750106bb, using 768k, total 768k Feb 9 09:54:41.564632 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 09:54:41.564637 kernel: fb0: VESA VGA frame buffer device Feb 9 09:54:41.564642 kernel: NET: Registered PF_INET6 protocol family Feb 9 09:54:41.564647 kernel: Segment Routing with IPv6 Feb 9 09:54:41.564652 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 09:54:41.564658 kernel: NET: Registered PF_PACKET protocol family Feb 9 09:54:41.564663 kernel: Key type dns_resolver registered Feb 9 09:54:41.564668 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 9 09:54:41.564673 kernel: microcode: Microcode Update Driver: v2.2. Feb 9 09:54:41.564679 kernel: IPI shorthand broadcast: enabled Feb 9 09:54:41.564685 kernel: sched_clock: Marking stable (1838758262, 1353745915)->(4616682955, -1424178778) Feb 9 09:54:41.564690 kernel: registered taskstats version 1 Feb 9 09:54:41.564695 kernel: Loading compiled-in X.509 certificates Feb 9 09:54:41.564700 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 9 09:54:41.564705 kernel: Key type .fscrypt registered Feb 9 09:54:41.564711 kernel: Key type fscrypt-provisioning registered Feb 9 09:54:41.564716 kernel: pstore: Using crash dump compression: deflate Feb 9 09:54:41.564721 kernel: ima: Allocated hash algorithm: sha1 Feb 9 09:54:41.564727 kernel: ima: No architecture policies found Feb 9 09:54:41.564732 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 09:54:41.564738 kernel: Write protecting the kernel read-only data: 28672k Feb 9 09:54:41.564743 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 09:54:41.564748 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 09:54:41.564753 kernel: Run /init as init process Feb 9 09:54:41.564758 kernel: with arguments: Feb 9 09:54:41.564763 kernel: /init Feb 9 09:54:41.564769 kernel: with environment: Feb 9 09:54:41.564774 kernel: HOME=/ Feb 9 09:54:41.564779 kernel: TERM=linux Feb 9 09:54:41.564785 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 09:54:41.564791 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:54:41.564798 systemd[1]: Detected architecture x86-64. Feb 9 09:54:41.564803 systemd[1]: Running in initrd. 
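[Editor's note: the microcode lines above (early update to revision 0xf4, then "sig=0x906ed, pf=0x2, revision=0xf4") can be cross-checked per logical CPU after boot, since /proc/cpuinfo carries one "microcode" field per processor. A sketch:]

    # Collect the distinct microcode revisions across all logical CPUs.
    with open("/proc/cpuinfo") as f:
        revs = {line.split(":")[1].strip()
                for line in f if line.startswith("microcode")}
    print(revs)  # expect {'0xf4'} on this machine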
Feb 9 09:54:41.564809 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:54:41.564814 systemd[1]: Hostname set to .
Feb 9 09:54:41.564820 systemd[1]: Initializing machine ID from random generator.
Feb 9 09:54:41.564825 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:54:41.564831 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:54:41.564836 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:54:41.564841 systemd[1]: Reached target paths.target.
Feb 9 09:54:41.564846 systemd[1]: Reached target slices.target.
Feb 9 09:54:41.564852 systemd[1]: Reached target swap.target.
Feb 9 09:54:41.564857 systemd[1]: Reached target timers.target.
Feb 9 09:54:41.564863 systemd[1]: Listening on iscsid.socket.
Feb 9 09:54:41.564869 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:54:41.564874 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:54:41.564880 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:54:41.564885 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:54:41.564891 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:54:41.564896 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:54:41.564901 kernel: tsc: Refined TSC clocksource calibration: 3408.046 MHz
Feb 9 09:54:41.564907 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fff667c0, max_idle_ns: 440795358023 ns
Feb 9 09:54:41.564913 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:54:41.564918 kernel: clocksource: Switched to clocksource tsc
Feb 9 09:54:41.564923 systemd[1]: Reached target sockets.target.
Feb 9 09:54:41.564929 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:54:41.564934 systemd[1]: Finished network-cleanup.service.
Feb 9 09:54:41.564939 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:54:41.564945 systemd[1]: Starting systemd-journald.service...
Feb 9 09:54:41.564950 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:54:41.564958 systemd-journald[268]: Journal started
Feb 9 09:54:41.564983 systemd-journald[268]: Runtime Journal (/run/log/journal/08f9db2abacc4ab7b86bd96691a3d25c) is 8.0M, max 636.8M, 628.8M free.
Feb 9 09:54:41.566971 systemd-modules-load[269]: Inserted module 'overlay'
Feb 9 09:54:41.572000 audit: BPF prog-id=6 op=LOAD
Feb 9 09:54:41.591265 kernel: audit: type=1334 audit(1707472481.572:2): prog-id=6 op=LOAD
Feb 9 09:54:41.591280 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:54:41.639286 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:54:41.639301 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:54:41.670305 kernel: Bridge firewalling registered
Feb 9 09:54:41.670320 systemd[1]: Started systemd-journald.service.
Feb 9 09:54:41.684479 systemd-modules-load[269]: Inserted module 'br_netfilter'
Feb 9 09:54:41.732299 kernel: audit: type=1130 audit(1707472481.691:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.690299 systemd-resolved[271]: Positive Trust Anchors:
Feb 9 09:54:41.807288 kernel: SCSI subsystem initialized
Feb 9 09:54:41.807304 kernel: audit: type=1130 audit(1707472481.743:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.807312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:54:41.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.690305 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:54:41.907393 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:54:41.907428 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:54:41.907435 kernel: audit: type=1130 audit(1707472481.863:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.690325 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:54:41.980519 kernel: audit: type=1130 audit(1707472481.915:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.691869 systemd-resolved[271]: Defaulting to hostname 'linux'.
Feb 9 09:54:41.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.692496 systemd[1]: Started systemd-resolved.service.
Feb 9 09:54:42.088200 kernel: audit: type=1130 audit(1707472481.989:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.088212 kernel: audit: type=1130 audit(1707472482.042:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:41.744452 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:54:41.865000 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:54:41.908040 systemd-modules-load[269]: Inserted module 'dm_multipath'
Feb 9 09:54:41.915577 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:54:41.989628 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:54:42.042568 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:54:42.096871 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:54:42.116804 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:54:42.117095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:54:42.119866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:54:42.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.120662 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:54:42.169316 kernel: audit: type=1130 audit(1707472482.118:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.182617 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:54:42.246357 kernel: audit: type=1130 audit(1707472482.182:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.238889 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:54:42.260355 dracut-cmdline[293]: dracut-dracut-053
Feb 9 09:54:42.260355 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 9 09:54:42.260355 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 9 09:54:42.328334 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:54:42.328348 kernel: iscsi: registered transport (tcp)
Feb 9 09:54:42.376514 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:54:42.376531 kernel: QLogic iSCSI HBA Driver
Feb 9 09:54:42.393315 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:54:42.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:42.401969 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:54:42.457328 kernel: raid6: avx2x4 gen() 48010 MB/s
Feb 9 09:54:42.492328 kernel: raid6: avx2x4 xor() 21481 MB/s
Feb 9 09:54:42.527327 kernel: raid6: avx2x2 gen() 54867 MB/s
Feb 9 09:54:42.562329 kernel: raid6: avx2x2 xor() 32753 MB/s
Feb 9 09:54:42.597329 kernel: raid6: avx2x1 gen() 46167 MB/s
Feb 9 09:54:42.632327 kernel: raid6: avx2x1 xor() 28486 MB/s
Feb 9 09:54:42.666316 kernel: raid6: sse2x4 gen() 21758 MB/s
Feb 9 09:54:42.700326 kernel: raid6: sse2x4 xor() 11964 MB/s
Feb 9 09:54:42.734292 kernel: raid6: sse2x2 gen() 22098 MB/s
Feb 9 09:54:42.768328 kernel: raid6: sse2x2 xor() 13708 MB/s
Feb 9 09:54:42.802330 kernel: raid6: sse2x1 gen() 18657 MB/s
Feb 9 09:54:42.853752 kernel: raid6: sse2x1 xor() 9093 MB/s
Feb 9 09:54:42.853767 kernel: raid6: using algorithm avx2x2 gen() 54867 MB/s
Feb 9 09:54:42.853775 kernel: raid6: .... xor() 32753 MB/s, rmw enabled
Feb 9 09:54:42.871774 kernel: raid6: using avx2x2 recovery algorithm
Feb 9 09:54:42.917275 kernel: xor: automatically using best checksumming function avx
Feb 9 09:54:42.997295 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 09:54:43.002459 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:54:43.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:43.012000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:54:43.012000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:54:43.013224 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:54:43.021588 systemd-udevd[472]: Using default interface naming scheme 'v252'.
Feb 9 09:54:43.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:43.027400 systemd[1]: Started systemd-udevd.service.
Feb 9 09:54:43.067507 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Feb 9 09:54:43.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:43.042790 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:54:43.066743 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:54:43.076914 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:54:43.125202 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:54:43.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:43.155270 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:54:43.174327 kernel: ACPI: bus type USB registered
Feb 9 09:54:43.174369 kernel: usbcore: registered new interface driver usbfs
Feb 9 09:54:43.174382 kernel: usbcore: registered new interface driver hub
Feb 9 09:54:43.174394 kernel: usbcore: registered new device driver usb
Feb 9 09:54:43.244269 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 09:54:43.244299 kernel: libata version 3.00 loaded.
Feb 9 09:54:43.244307 kernel: AES CTR mode by8 optimization enabled
Feb 9 09:54:43.294410 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Feb 9 09:54:43.294435 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Feb 9 09:54:43.331397 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 9 09:54:43.331490 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Feb 9 09:54:43.331543 kernel: mlx5_core 0000:02:00.0: firmware version: 14.29.2002
Feb 9 09:54:43.341311 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Feb 9 09:54:43.341383 kernel: pps pps0: new PPS source ptp0
Feb 9 09:54:43.341453 kernel: igb 0000:04:00.0: added PHC on eth0
Feb 9 09:54:43.341508 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 9 09:54:43.341559 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:4c
Feb 9 09:54:43.341608 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Feb 9 09:54:43.341656 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 9 09:54:43.367735 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 9 09:54:43.379265 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Feb 9 09:54:43.379348 kernel: pps pps1: new PPS source ptp1
Feb 9 09:54:43.379419 kernel: igb 0000:05:00.0: added PHC on eth1
Feb 9 09:54:43.379486 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Feb 9 09:54:43.379550 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:24:4d
Feb 9 09:54:43.379609 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Feb 9 09:54:43.379669 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Feb 9 09:54:43.431214 kernel: igb 0000:05:00.0 eno2: renamed from eth1
Feb 9 09:54:43.431293 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Feb 9 09:54:43.494665 kernel: ahci 0000:00:17.0: version 3.0
Feb 9 09:54:43.494763 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Feb 9 09:54:43.507164 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Feb 9 09:54:43.520343 kernel: hub 1-0:1.0: USB hub found
Feb 9 09:54:43.520443 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Feb 9 09:54:43.561976 kernel: scsi host0: ahci
Feb 9 09:54:43.562013 kernel: hub 1-0:1.0: 16 ports detected
Feb 9 09:54:43.576775 kernel: scsi host1: ahci
Feb 9 09:54:43.593315 kernel: hub 2-0:1.0: USB hub found
Feb 9 09:54:43.593414 kernel: scsi host2: ahci
Feb 9 09:54:43.593434 kernel: igb 0000:04:00.0 eno1: renamed from eth0
Feb 9 09:54:43.618275 kernel: hub 2-0:1.0: 10 ports detected
Feb 9 09:54:43.629265 kernel: scsi host3: ahci
Feb 9 09:54:43.629300 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 9 09:54:43.652294 kernel: usb: port power management may be unreliable
Feb 9 09:54:43.652311 kernel: scsi host4: ahci
Feb 9 09:54:43.652328 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 09:54:43.822324 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Feb 9 09:54:43.822350 kernel: scsi host5: ahci
Feb 9 09:54:43.862062 kernel: scsi host6: ahci
Feb 9 09:54:43.862139 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 9 09:54:43.862197 kernel: scsi host7: ahci
Feb 9 09:54:43.894297 kernel: mlx5_core 0000:02:00.1: firmware version: 14.29.2002
Feb 9 09:54:43.894370 kernel: ata1: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516100 irq 140
Feb 9 09:54:43.926628 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Feb 9 09:54:43.926700 kernel: ata2: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516180 irq 140
Feb 9 09:54:43.981088 kernel: ata3: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516200 irq 140
Feb 9 09:54:43.981106 kernel: hub 1-14:1.0: USB hub found
Feb 9 09:54:43.981178 kernel: ata4: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516280 irq 140
Feb 9 09:54:44.027444 kernel: hub 1-14:1.0: 4 ports detected
Feb 9 09:54:44.027532 kernel: ata5: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516300 irq 140
Feb 9 09:54:44.066306 kernel: ata6: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516380 irq 140
Feb 9 09:54:44.066337 kernel: ata7: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516400 irq 140
Feb 9 09:54:44.100928 kernel: ata8: SATA max UDMA/133 abar m2048@0x7e516000 port 0x7e516480 irq 140
Feb 9 09:54:44.228295 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Feb 9 09:54:44.264366 kernel: port_module: 9 callbacks suppressed
Feb 9 09:54:44.264387 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
Feb 9 09:54:44.298296 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 09:54:44.337449 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Feb 9 09:54:44.425302 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.425362 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 9 09:54:44.442265 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.457290 kernel: ata8: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.457306 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:54:44.472263 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.503264 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Feb 9 09:54:44.503361 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Feb 9 09:54:44.539264 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.554265 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 9 09:54:44.572316 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Feb 9 09:54:44.589330 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 9 09:54:44.638320 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 9 09:54:44.638335 kernel: ata1.00: Features: NCQ-prio
Feb 9 09:54:44.638343 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Feb 9 09:54:44.668835 kernel: ata2.00: Features: NCQ-prio
Feb 9 09:54:44.688328 kernel: ata1.00: configured for UDMA/133
Feb 9 09:54:44.688368 kernel: ata2.00: configured for UDMA/133
Feb 9 09:54:44.688376 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 9 09:54:44.720323 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Feb 9 09:54:44.758325 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
Feb 9 09:54:44.758418 kernel: usbcore: registered new interface driver usbhid
Feb 9 09:54:44.787251 kernel: usbhid: USB HID core driver
Feb 9 09:54:44.821267 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Feb 9 09:54:44.840879 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 09:54:44.840900 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:44.855454 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Feb 9 09:54:44.855536 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 9 09:54:44.855603 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Feb 9 09:54:44.855694 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Feb 9 09:54:44.855750 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Feb 9 09:54:44.855803 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Feb 9 09:54:44.855856 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 9 09:54:44.855913 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:44.857311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:54:44.857326 kernel: GPT:9289727 != 937703087
Feb 9 09:54:44.857335 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:54:44.857342 kernel: GPT:9289727 != 937703087
Feb 9 09:54:44.857348 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:54:44.857354 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 09:54:44.858267 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:44.858281 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Feb 9 09:54:44.906509 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Feb 9 09:54:44.906621 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 09:54:44.906683 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Feb 9 09:54:44.921419 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 09:54:44.935741 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Feb 9 09:54:44.954327 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Feb 9 09:54:45.211305 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 9 09:54:45.249203 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 09:54:45.265269 kernel: ata1.00: Enabling discard_zeroes_data
Feb 9 09:54:45.265307 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 09:54:45.316141 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:54:45.343508 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (677)
Feb 9 09:54:45.336753 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:54:45.353390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:54:45.379729 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:54:45.407192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:54:45.417069 systemd[1]: Starting disk-uuid.service...
Feb 9 09:54:45.455361 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:45.455374 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 09:54:45.455426 disk-uuid[694]: Primary Header is updated.
Feb 9 09:54:45.455426 disk-uuid[694]: Secondary Entries is updated.
Feb 9 09:54:45.455426 disk-uuid[694]: Secondary Header is updated.
Feb 9 09:54:45.524335 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:45.524348 kernel: GPT:disk_guids don't match.
Feb 9 09:54:45.524356 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:54:45.524362 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 09:54:45.524368 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:45.561266 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 09:54:46.513251 kernel: ata2.00: Enabling discard_zeroes_data
Feb 9 09:54:46.531754 disk-uuid[695]: The operation has completed successfully.
Feb 9 09:54:46.540359 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Feb 9 09:54:46.572652 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:54:46.666059 kernel: audit: type=1130 audit(1707472486.580:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.666074 kernel: audit: type=1131 audit(1707472486.580:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.572710 systemd[1]: Finished disk-uuid.service.
Feb 9 09:54:46.694357 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 09:54:46.580975 systemd[1]: Starting verity-setup.service...
Feb 9 09:54:46.724883 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:54:46.734285 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:54:46.740532 systemd[1]: Finished verity-setup.service.
Feb 9 09:54:46.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.804325 kernel: audit: type=1130 audit(1707472486.758:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.855266 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:54:46.855473 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:54:46.863563 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:54:46.863955 systemd[1]: Starting ignition-setup.service...
Feb 9 09:54:46.970383 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 09:54:46.970399 kernel: BTRFS info (device sdb6): using free space tree
Feb 9 09:54:46.970406 kernel: BTRFS info (device sdb6): has skinny extents
Feb 9 09:54:46.970413 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 9 09:54:46.962679 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:54:46.978679 systemd[1]: Finished ignition-setup.service.
Feb 9 09:54:46.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:46.995811 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:54:47.057391 kernel: audit: type=1130 audit(1707472486.994:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.049508 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:54:47.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.115000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:54:47.117121 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:54:47.150316 kernel: audit: type=1130 audit(1707472487.064:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.150334 kernel: audit: type=1334 audit(1707472487.115:24): prog-id=9 op=LOAD
Feb 9 09:54:47.140890 ignition[869]: Ignition 2.14.0
Feb 9 09:54:47.150987 systemd-networkd[882]: lo: Link UP
Feb 9 09:54:47.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.140894 ignition[869]: Stage: fetch-offline
Feb 9 09:54:47.228370 kernel: audit: type=1130 audit(1707472487.162:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.150989 systemd-networkd[882]: lo: Gained carrier
Feb 9 09:54:47.140920 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:47.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.151317 systemd-networkd[882]: Enumeration completed
Feb 9 09:54:47.373216 kernel: audit: type=1130 audit(1707472487.242:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.373232 kernel: audit: type=1130 audit(1707472487.301:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.373240 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Feb 9 09:54:47.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.140934 ignition[869]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 09:54:47.151396 systemd[1]: Started systemd-networkd.service.
Feb 9 09:54:47.404551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready
Feb 9 09:54:47.151141 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 09:54:47.152158 systemd-networkd[882]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:47.151205 ignition[869]: parsed url from cmdline: ""
Feb 9 09:54:47.163381 systemd[1]: Reached target network.target.
Feb 9 09:54:47.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.447345 iscsid[905]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:47.447345 iscsid[905]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 09:54:47.447345 iscsid[905]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:54:47.447345 iscsid[905]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:54:47.447345 iscsid[905]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:54:47.447345 iscsid[905]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:54:47.447345 iscsid[905]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:54:47.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.151207 ignition[869]: no config URL provided
Feb 9 09:54:47.615376 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Feb 9 09:54:47.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:47.167582 unknown[869]: fetched base config from "system"
Feb 9 09:54:47.151210 ignition[869]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:54:47.167586 unknown[869]: fetched user config from "system"
Feb 9 09:54:47.151238 ignition[869]: parsing config with SHA512: fcbdacf763ce524d5d232d56d208b5cb9817839a856e05fbd8aa9ddffaa92792c6442078503ccf1687d81e3554d0af6a2ddeec1a738d081f9c20fafc086b6c52
Feb 9 09:54:47.221826 systemd[1]: Starting iscsiuio.service...
Feb 9 09:54:47.167958 ignition[869]: fetch-offline: fetch-offline passed
Feb 9 09:54:47.235595 systemd[1]: Started iscsiuio.service.
Feb 9 09:54:47.167961 ignition[869]: POST message to Packet Timeline
Feb 9 09:54:47.242681 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:54:47.167966 ignition[869]: POST Status error: resource requires networking
Feb 9 09:54:47.301530 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 09:54:47.167996 ignition[869]: Ignition finished successfully
Feb 9 09:54:47.301987 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:54:47.377432 ignition[896]: Ignition 2.14.0
Feb 9 09:54:47.375163 systemd-networkd[882]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:47.377436 ignition[896]: Stage: kargs
Feb 9 09:54:47.387802 systemd[1]: Starting iscsid.service...
Feb 9 09:54:47.377491 ignition[896]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:47.411500 systemd[1]: Started iscsid.service.
Feb 9 09:54:47.377501 ignition[896]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 09:54:47.439933 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:54:47.378848 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 09:54:47.454564 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:54:47.380612 ignition[896]: kargs: kargs passed
Feb 9 09:54:47.466537 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:54:47.380615 ignition[896]: POST message to Packet Timeline
Feb 9 09:54:47.507469 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:54:47.380628 ignition[896]: GET https://metadata.packet.net/metadata: attempt #1
Feb 9 09:54:47.534557 systemd[1]: Reached target remote-fs.target.
Feb 9 09:54:47.382407 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59113->[::1]:53: read: connection refused
Feb 9 09:54:47.552606 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:54:47.582823 ignition[896]: GET https://metadata.packet.net/metadata: attempt #2
Feb 9 09:54:47.569626 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:54:47.583203 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51253->[::1]:53: read: connection refused
Feb 9 09:54:47.609889 systemd-networkd[882]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:47.638676 systemd-networkd[882]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:54:47.666951 systemd-networkd[882]: enp2s0f1np1: Link UP
Feb 9 09:54:47.667095 systemd-networkd[882]: enp2s0f1np1: Gained carrier
Feb 9 09:54:47.680587 systemd-networkd[882]: enp2s0f0np0: Link UP
Feb 9 09:54:47.680781 systemd-networkd[882]: eno2: Link UP
Feb 9 09:54:47.680957 systemd-networkd[882]: eno1: Link UP
Feb 9 09:54:47.984253 ignition[896]: GET https://metadata.packet.net/metadata: attempt #3
Feb 9 09:54:47.985462 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41697->[::1]:53: read: connection refused
Feb 9 09:54:48.410066 systemd-networkd[882]: enp2s0f0np0: Gained carrier
Feb 9 09:54:48.418535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready
Feb 9 09:54:48.439581 systemd-networkd[882]: enp2s0f0np0: DHCPv4 address 86.109.11.101/31, gateway 86.109.11.100 acquired from 145.40.83.140
Feb 9 09:54:48.707768 systemd-networkd[882]: enp2s0f1np1: Gained IPv6LL
Feb 9 09:54:48.785755 ignition[896]: GET https://metadata.packet.net/metadata: attempt #4
Feb 9 09:54:48.787206 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56351->[::1]:53: read: connection refused
Feb 9 09:54:49.987738 systemd-networkd[882]: enp2s0f0np0: Gained IPv6LL
Feb 9 09:54:50.388511 ignition[896]: GET https://metadata.packet.net/metadata: attempt #5
Feb 9 09:54:50.389742 ignition[896]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43734->[::1]:53: read: connection refused
Feb 9 09:54:53.593213 ignition[896]: GET https://metadata.packet.net/metadata: attempt #6
Feb 9 09:54:53.634332 ignition[896]: GET result: OK
Feb 9 09:54:53.880466 ignition[896]: Ignition finished successfully
Feb 9 09:54:53.885211 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:54:53.969434 kernel: kauditd_printk_skb: 3 callbacks suppressed
Feb 9 09:54:53.969451 kernel: audit: type=1130 audit(1707472493.895:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:53.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:53.905678 ignition[926]: Ignition 2.14.0
Feb 9 09:54:53.898667 systemd[1]: Starting ignition-disks.service...
Feb 9 09:54:53.905681 ignition[926]: Stage: disks
Feb 9 09:54:53.905736 ignition[926]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:53.905745 ignition[926]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 09:54:53.907053 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 09:54:53.908346 ignition[926]: disks: disks passed
Feb 9 09:54:53.908349 ignition[926]: POST message to Packet Timeline
Feb 9 09:54:53.908359 ignition[926]: GET https://metadata.packet.net/metadata: attempt #1
Feb 9 09:54:53.931380 ignition[926]: GET result: OK
Feb 9 09:54:54.133629 ignition[926]: Ignition finished successfully
Feb 9 09:54:54.136817 systemd[1]: Finished ignition-disks.service.
Feb 9 09:54:54.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.149805 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:54:54.224534 kernel: audit: type=1130 audit(1707472494.148:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.210474 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:54:54.210511 systemd[1]: Reached target local-fs.target.
Feb 9 09:54:54.233498 systemd[1]: Reached target sysinit.target.
Feb 9 09:54:54.247487 systemd[1]: Reached target basic.target.
Feb 9 09:54:54.261152 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:54:54.283759 systemd-fsck[941]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 9 09:54:54.294834 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:54:54.381530 kernel: audit: type=1130 audit(1707472494.302:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.381544 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:54:54.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.305230 systemd[1]: Mounting sysroot.mount...
Feb 9 09:54:54.388934 systemd[1]: Mounted sysroot.mount.
Feb 9 09:54:54.408834 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:54:54.419608 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:54:54.436530 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 09:54:54.451172 systemd[1]: Starting flatcar-static-network.service...
Feb 9 09:54:54.465497 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:54:54.465586 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:54:54.484343 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:54:54.507792 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:54.645975 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (950)
Feb 9 09:54:54.646071 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 09:54:54.646080 kernel: BTRFS info (device sdb6): using free space tree
Feb 9 09:54:54.646087 kernel: BTRFS info (device sdb6): has skinny extents
Feb 9 09:54:54.646094 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 9 09:54:54.646160 coreos-metadata[949]: Feb 09 09:54:54.572 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 09:54:54.646160 coreos-metadata[949]: Feb 09 09:54:54.601 INFO Fetch successful
Feb 9 09:54:54.771386 kernel: audit: type=1130 audit(1707472494.654:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.771472 kernel: audit: type=1130 audit(1707472494.716:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.518639 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:54:54.892001 kernel: audit: type=1130 audit(1707472494.779:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.892087 kernel: audit: type=1131 audit(1707472494.779:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.892129 coreos-metadata[948]: Feb 09 09:54:54.572 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Feb 9 09:54:54.892129 coreos-metadata[948]: Feb 09 09:54:54.593 INFO Fetch successful
Feb 9 09:54:54.892129 coreos-metadata[948]: Feb 09 09:54:54.611 INFO wrote hostname ci-3510.3.2-a-98b619e81b to /sysroot/etc/hostname
Feb 9 09:54:54.557026 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:54:54.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.980459 initrd-setup-root[957]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:54:55.019453 kernel: audit: type=1130 audit(1707472494.953:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.655608 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 09:54:55.028533 initrd-setup-root[965]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:54:54.716593 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Feb 9 09:54:55.048504 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:54:54.716631 systemd[1]: Finished flatcar-static-network.service.
Feb 9 09:54:55.066467 ignition[1023]: INFO : Ignition 2.14.0
Feb 9 09:54:55.066467 ignition[1023]: INFO : Stage: mount
Feb 9 09:54:55.066467 ignition[1023]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:55.066467 ignition[1023]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 09:54:55.066467 ignition[1023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 09:54:55.066467 ignition[1023]: INFO : mount: mount passed
Feb 9 09:54:55.066467 ignition[1023]: INFO : POST message to Packet Timeline
Feb 9 09:54:55.066467 ignition[1023]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 09:54:55.066467 ignition[1023]: INFO : GET result: OK
Feb 9 09:54:55.155626 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:54:54.779534 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:54.900863 systemd[1]: Starting ignition-mount.service...
Feb 9 09:54:54.913874 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:54:54.929498 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:54.929536 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:54:55.208568 ignition[1023]: INFO : Ignition finished successfully
Feb 9 09:54:55.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:54.937413 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:54:55.289387 kernel: audit: type=1130 audit(1707472495.215:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:55.200142 systemd[1]: Finished ignition-mount.service.
Feb 9 09:54:55.218653 systemd[1]: Starting ignition-files.service...
Feb 9 09:54:55.381350 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1040)
Feb 9 09:54:55.381361 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 09:54:55.381368 kernel: BTRFS info (device sdb6): using free space tree
Feb 9 09:54:55.381375 kernel: BTRFS info (device sdb6): has skinny extents
Feb 9 09:54:55.381381 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Feb 9 09:54:55.284042 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:54:55.416802 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:54:55.441418 ignition[1059]: INFO : Ignition 2.14.0
Feb 9 09:54:55.441418 ignition[1059]: INFO : Stage: files
Feb 9 09:54:55.441418 ignition[1059]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:54:55.441418 ignition[1059]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Feb 9 09:54:55.441418 ignition[1059]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Feb 9 09:54:55.441418 ignition[1059]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:54:55.441418 ignition[1059]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:54:55.441418 ignition[1059]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:54:55.444797 unknown[1059]: wrote ssh authorized keys file for user: core
Feb 9 09:54:55.542529 ignition[1059]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:54:55.542529 ignition[1059]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:54:55.542529 ignition[1059]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:54:55.542529 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 09:54:55.542529 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 09:54:55.704604 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 09:54:55.767853 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 09:54:55.784607 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 09:54:55.784607 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 09:54:56.250676 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 09:54:56.327981 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 09:54:56.327981 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 09:54:56.371500 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 09:54:56.371500 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 09:54:56.723124 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 09:54:56.772666 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 09:54:56.796533 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 09:54:56.796533 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:56.796533 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 09:54:56.846453 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 09:54:57.015939 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 09:54:57.015939 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:54:57.056498 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:57.056498 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 09:54:57.090511 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 09:54:57.430912 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 09:54:57.456517 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:54:57.456517 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:57.456517 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 09:54:57.514494 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 09:54:57.614028 ignition[1059]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 09:54:57.614028 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 09:54:57.655492 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:57.655492 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:54:57.655492 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:54:57.655492 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 09:54:58.050101 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 09:54:58.081700 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 09:54:58.081700 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:54:58.132478 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1067)
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3070509271"
Feb 9 09:54:58.132491 ignition[1059]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3070509271": device or resource busy
Feb 9 09:54:58.132491 ignition[1059]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3070509271", trying btrfs: device or resource busy
Feb 9 09:54:58.132491 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3070509271"
Feb 9 09:54:58.442529 kernel: audit: type=1130 audit(1707472498.357:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.330604 systemd[1]: Finished ignition-files.service.
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3070509271"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3070509271"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3070509271"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(15): [started] processing unit "packet-phone-home.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(15): [finished] processing unit "packet-phone-home.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:54:58.457466 ignition[1059]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:54:58.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.364338 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1e): [started] setting preset to enabled for "packet-phone-home.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1e): [finished] setting preset to enabled for "packet-phone-home.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:54:58.862609 ignition[1059]: INFO : files: files passed
Feb 9 09:54:58.862609 ignition[1059]: INFO : POST message to Packet Timeline
Feb 9 09:54:58.862609 ignition[1059]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Feb 9 09:54:58.862609 ignition[1059]: INFO : GET result: OK
Feb 9 09:54:58.862609 ignition[1059]: INFO : Ignition finished successfully
Feb 9 09:54:59.393466 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 9 09:54:59.393500 kernel: audit: type=1131 audit(1707472499.092:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.393516 kernel: audit: type=1131 audit(1707472499.195:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.393529 kernel: audit: type=1131 audit(1707472499.262:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.393664 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:54:58.424524 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:54:59.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.424836 systemd[1]: Starting ignition-quench.service...
Feb 9 09:54:59.558448 kernel: audit: type=1131 audit(1707472499.423:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.558465 kernel: audit: type=1131 audit(1707472499.500:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.449569 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:54:59.628229 kernel: audit: type=1131 audit(1707472499.567:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:59.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:54:58.457622 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:54:58.457667 systemd[1]: Finished ignition-quench.service.
Feb 9 09:54:59.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.492615 systemd[1]: Reached target ignition-complete.target. Feb 9 09:54:59.731521 kernel: audit: type=1131 audit(1707472499.654:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.520398 systemd[1]: Starting initrd-parse-etc.service... Feb 9 09:54:59.748499 ignition[1109]: INFO : Ignition 2.14.0 Feb 9 09:54:59.748499 ignition[1109]: INFO : Stage: umount Feb 9 09:54:59.748499 ignition[1109]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 09:54:59.748499 ignition[1109]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 9 09:54:59.748499 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 9 09:54:59.748499 ignition[1109]: INFO : umount: umount passed Feb 9 09:54:59.748499 ignition[1109]: INFO : POST message to Packet Timeline Feb 9 09:54:59.748499 ignition[1109]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 9 09:54:59.748499 ignition[1109]: INFO : GET result: OK Feb 9 09:55:00.039507 kernel: audit: type=1131 audit(1707472499.775:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.039581 kernel: audit: type=1131 audit(1707472499.845:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.039635 kernel: audit: type=1131 audit(1707472499.914:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.566612 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Feb 9 09:55:00.039000 audit: BPF prog-id=6 op=UNLOAD Feb 9 09:55:00.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.055672 ignition[1109]: INFO : Ignition finished successfully Feb 9 09:55:00.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.566692 systemd[1]: Finished initrd-parse-etc.service. Feb 9 09:55:00.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.572596 systemd[1]: Reached target initrd-fs.target. Feb 9 09:55:00.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.598573 systemd[1]: Reached target initrd.target. Feb 9 09:54:58.619737 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 09:55:00.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.621975 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 09:55:00.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.633624 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 09:55:00.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.677400 systemd[1]: Starting initrd-cleanup.service... Feb 9 09:54:58.707780 systemd[1]: Stopped target network.target. Feb 9 09:55:00.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.731587 systemd[1]: Stopped target nss-lookup.target. Feb 9 09:55:00.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.757756 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 09:54:58.784906 systemd[1]: Stopped target timers.target. Feb 9 09:54:58.802969 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 09:54:58.803370 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 09:55:00.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.826134 systemd[1]: Stopped target initrd.target. 
Feb 9 09:55:00.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.851995 systemd[1]: Stopped target basic.target. Feb 9 09:55:00.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.869999 systemd[1]: Stopped target ignition-complete.target. Feb 9 09:55:00.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.897982 systemd[1]: Stopped target ignition-diskful.target. Feb 9 09:55:00.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:00.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:58.918862 systemd[1]: Stopped target initrd-root-device.target. Feb 9 09:54:58.941863 systemd[1]: Stopped target remote-fs.target. Feb 9 09:54:58.963857 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 09:54:58.986861 systemd[1]: Stopped target sysinit.target. Feb 9 09:54:59.009851 systemd[1]: Stopped target local-fs.target. Feb 9 09:54:59.030861 systemd[1]: Stopped target local-fs-pre.target. Feb 9 09:54:59.051870 systemd[1]: Stopped target swap.target. Feb 9 09:54:59.071752 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 09:54:59.072114 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 09:54:59.094087 systemd[1]: Stopped target cryptsetup.target. Feb 9 09:54:59.182580 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 09:54:59.182656 systemd[1]: Stopped dracut-initqueue.service. Feb 9 09:54:59.195769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 09:54:59.195840 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 09:54:59.262619 systemd[1]: Stopped target paths.target. Feb 9 09:54:59.327521 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 09:54:59.331532 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 09:54:59.334629 systemd[1]: Stopped target slices.target. Feb 9 09:54:59.350651 systemd[1]: Stopped target sockets.target. Feb 9 09:55:00.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:54:59.366636 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 09:54:59.366693 systemd[1]: Closed iscsid.socket. Feb 9 09:54:59.379664 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 09:54:59.379727 systemd[1]: Closed iscsiuio.socket. Feb 9 09:54:59.400608 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 09:54:59.400755 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 09:55:00.568851 iscsid[905]: iscsid shutting down. Feb 9 09:54:59.424924 systemd[1]: ignition-files.service: Deactivated successfully. 
Feb 9 09:54:59.425281 systemd[1]: Stopped ignition-files.service. Feb 9 09:54:59.500587 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 09:54:59.500664 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 09:54:59.568203 systemd[1]: Stopping ignition-mount.service... Feb 9 09:54:59.635541 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 09:54:59.635634 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 09:54:59.655067 systemd[1]: Stopping sysroot-boot.service... Feb 9 09:54:59.722669 systemd[1]: Stopping systemd-networkd.service... Feb 9 09:54:59.731503 systemd-networkd[882]: enp2s0f0np0: DHCPv6 lease lost Feb 9 09:54:59.738657 systemd[1]: Stopping systemd-resolved.service... Feb 9 09:54:59.740415 systemd-networkd[882]: enp2s0f1np1: DHCPv6 lease lost Feb 9 09:54:59.748682 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 09:55:00.567000 audit: BPF prog-id=9 op=UNLOAD Feb 9 09:54:59.748747 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 09:54:59.776642 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 09:54:59.776730 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 09:54:59.847098 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 09:55:00.569268 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 9 09:54:59.847615 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 09:54:59.847658 systemd[1]: Stopped systemd-resolved.service. Feb 9 09:54:59.914748 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 09:54:59.914791 systemd[1]: Stopped systemd-networkd.service. Feb 9 09:54:59.983693 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 09:54:59.983731 systemd[1]: Stopped ignition-mount.service. Feb 9 09:55:00.000392 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 09:55:00.000597 systemd[1]: Stopped sysroot-boot.service. Feb 9 09:55:00.015682 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 09:55:00.015848 systemd[1]: Closed systemd-networkd.socket. Feb 9 09:55:00.032529 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 09:55:00.032650 systemd[1]: Stopped ignition-disks.service. Feb 9 09:55:00.047657 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 09:55:00.047766 systemd[1]: Stopped ignition-kargs.service. Feb 9 09:55:00.063704 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 09:55:00.063845 systemd[1]: Stopped ignition-setup.service. Feb 9 09:55:00.079610 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 09:55:00.079737 systemd[1]: Stopped initrd-setup-root.service. Feb 9 09:55:00.096455 systemd[1]: Stopping network-cleanup.service... Feb 9 09:55:00.109469 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 09:55:00.109616 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 09:55:00.124668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:55:00.124811 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:55:00.145160 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 09:55:00.145346 systemd[1]: Stopped systemd-modules-load.service. Feb 9 09:55:00.164049 systemd[1]: Stopping systemd-udevd.service... Feb 9 09:55:00.182325 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 09:55:00.184157 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 9 09:55:00.184497 systemd[1]: Stopped systemd-udevd.service. Feb 9 09:55:00.194897 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 09:55:00.195119 systemd[1]: Finished initrd-cleanup.service. Feb 9 09:55:00.212526 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 09:55:00.212630 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 09:55:00.223589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 09:55:00.223686 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 09:55:00.240367 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 09:55:00.240404 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 09:55:00.264528 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 09:55:00.264573 systemd[1]: Stopped dracut-cmdline.service. Feb 9 09:55:00.279430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 09:55:00.279488 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 09:55:00.295726 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 09:55:00.309483 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 09:55:00.309510 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 09:55:00.309742 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 09:55:00.309782 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 09:55:00.471613 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 09:55:00.471839 systemd[1]: Stopped network-cleanup.service. Feb 9 09:55:00.485846 systemd[1]: Reached target initrd-switch-root.target. Feb 9 09:55:00.502212 systemd[1]: Starting initrd-switch-root.service... Feb 9 09:55:00.523501 systemd[1]: Switching root. Feb 9 09:55:00.570245 systemd-journald[268]: Journal stopped Feb 9 09:55:04.683682 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 09:55:04.683696 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 09:55:04.683705 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 09:55:04.683710 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 09:55:04.683715 kernel: SELinux: policy capability open_perms=1 Feb 9 09:55:04.683720 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 09:55:04.683726 kernel: SELinux: policy capability always_check_network=0 Feb 9 09:55:04.683731 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 09:55:04.683736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 09:55:04.683742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 09:55:04.683748 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 09:55:04.683753 systemd[1]: Successfully loaded SELinux policy in 328.387ms. Feb 9 09:55:04.683760 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.470ms. Feb 9 09:55:04.683767 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 09:55:04.683774 systemd[1]: Detected architecture x86-64. Feb 9 09:55:04.683780 systemd[1]: Detected first boot. Feb 9 09:55:04.683786 systemd[1]: Hostname set to . Feb 9 09:55:04.683792 systemd[1]: Initializing machine ID from random generator. 
Feb 9 09:55:04.683797 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 09:55:04.683803 systemd[1]: Populated /etc with preset unit settings. Feb 9 09:55:04.683809 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:04.683816 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:04.683823 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:04.683829 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 09:55:04.683834 systemd[1]: Stopped iscsiuio.service. Feb 9 09:55:04.683840 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 09:55:04.683846 systemd[1]: Stopped iscsid.service. Feb 9 09:55:04.683853 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 09:55:04.683859 systemd[1]: Stopped initrd-switch-root.service. Feb 9 09:55:04.683865 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 09:55:04.683871 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 09:55:04.683877 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 09:55:04.683883 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 09:55:04.683889 systemd[1]: Created slice system-getty.slice. Feb 9 09:55:04.683894 systemd[1]: Created slice system-modprobe.slice. Feb 9 09:55:04.683900 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 09:55:04.683907 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 09:55:04.683913 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 09:55:04.683919 systemd[1]: Created slice user.slice. Feb 9 09:55:04.683925 systemd[1]: Started systemd-ask-password-console.path. Feb 9 09:55:04.683932 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 09:55:04.683938 systemd[1]: Set up automount boot.automount. Feb 9 09:55:04.683944 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 09:55:04.683950 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 09:55:04.683958 systemd[1]: Stopped target initrd-fs.target. Feb 9 09:55:04.683964 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 09:55:04.683970 systemd[1]: Reached target integritysetup.target. Feb 9 09:55:04.683976 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 09:55:04.683982 systemd[1]: Reached target remote-fs.target. Feb 9 09:55:04.683988 systemd[1]: Reached target slices.target. Feb 9 09:55:04.683994 systemd[1]: Reached target swap.target. Feb 9 09:55:04.684000 systemd[1]: Reached target torcx.target. Feb 9 09:55:04.684007 systemd[1]: Reached target veritysetup.target. Feb 9 09:55:04.684014 systemd[1]: Listening on systemd-coredump.socket. Feb 9 09:55:04.684020 systemd[1]: Listening on systemd-initctl.socket. Feb 9 09:55:04.684026 systemd[1]: Listening on systemd-networkd.socket. Feb 9 09:55:04.684032 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 09:55:04.684039 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 09:55:04.684046 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 09:55:04.684052 systemd[1]: Mounting dev-hugepages.mount... 
Feb 9 09:55:04.684058 systemd[1]: Mounting dev-mqueue.mount... Feb 9 09:55:04.684064 systemd[1]: Mounting media.mount... Feb 9 09:55:04.684070 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:55:04.684077 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 09:55:04.684083 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 09:55:04.684089 systemd[1]: Mounting tmp.mount... Feb 9 09:55:04.684096 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 09:55:04.684102 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 09:55:04.684109 systemd[1]: Starting kmod-static-nodes.service... Feb 9 09:55:04.684115 systemd[1]: Starting modprobe@configfs.service... Feb 9 09:55:04.684121 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 09:55:04.684127 systemd[1]: Starting modprobe@drm.service... Feb 9 09:55:04.684133 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 09:55:04.684140 systemd[1]: Starting modprobe@fuse.service... Feb 9 09:55:04.684146 kernel: fuse: init (API version 7.34) Feb 9 09:55:04.684152 systemd[1]: Starting modprobe@loop.service... Feb 9 09:55:04.684158 kernel: loop: module loaded Feb 9 09:55:04.684165 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 09:55:04.684171 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 09:55:04.684177 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 09:55:04.684183 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 09:55:04.684189 kernel: kauditd_printk_skb: 51 callbacks suppressed Feb 9 09:55:04.684195 kernel: audit: type=1131 audit(1707472504.326:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.684202 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 09:55:04.684209 kernel: audit: type=1131 audit(1707472504.413:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.684215 systemd[1]: Stopped systemd-journald.service. Feb 9 09:55:04.684221 kernel: audit: type=1130 audit(1707472504.477:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.684227 kernel: audit: type=1131 audit(1707472504.477:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.684232 kernel: audit: type=1334 audit(1707472504.561:106): prog-id=15 op=LOAD Feb 9 09:55:04.684238 kernel: audit: type=1334 audit(1707472504.579:107): prog-id=16 op=LOAD Feb 9 09:55:04.684244 kernel: audit: type=1334 audit(1707472504.598:108): prog-id=17 op=LOAD Feb 9 09:55:04.684250 systemd[1]: Starting systemd-journald.service... Feb 9 09:55:04.684256 kernel: audit: type=1334 audit(1707472504.598:109): prog-id=13 op=UNLOAD Feb 9 09:55:04.684264 kernel: audit: type=1334 audit(1707472504.598:110): prog-id=14 op=UNLOAD Feb 9 09:55:04.684270 systemd[1]: Starting systemd-modules-load.service... 
Feb 9 09:55:04.684297 kernel: audit: type=1305 audit(1707472504.681:111): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:55:04.684305 systemd-journald[1260]: Journal started Feb 9 09:55:04.684330 systemd-journald[1260]: Runtime Journal (/run/log/journal/27215dd522f74da38f166c56a2eac983) is 8.0M, max 636.8M, 628.8M free. Feb 9 09:55:00.999000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 09:55:01.296000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:55:01.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:55:01.302000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 09:55:01.302000 audit: BPF prog-id=10 op=LOAD Feb 9 09:55:01.302000 audit: BPF prog-id=10 op=UNLOAD Feb 9 09:55:01.302000 audit: BPF prog-id=11 op=LOAD Feb 9 09:55:01.302000 audit: BPF prog-id=11 op=UNLOAD Feb 9 09:55:01.431000 audit[1150]: AVC avc: denied { associate } for pid=1150 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 09:55:01.431000 audit[1150]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1133 pid=1150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:55:01.431000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:55:01.469000 audit[1150]: AVC avc: denied { associate } for pid=1150 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 09:55:01.469000 audit[1150]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b9 a2=1ed a3=0 items=2 ppid=1133 pid=1150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:55:01.469000 audit: CWD cwd="/" Feb 9 09:55:01.469000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:01.469000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:01.469000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 09:55:02.992000 audit: BPF prog-id=12 op=LOAD Feb 9 09:55:02.992000 audit: BPF prog-id=3 op=UNLOAD Feb 9 09:55:02.993000 audit: BPF prog-id=13 op=LOAD Feb 9 09:55:02.993000 audit: BPF prog-id=14 op=LOAD Feb 9 09:55:02.993000 audit: BPF prog-id=4 op=UNLOAD Feb 9 09:55:02.993000 audit: BPF prog-id=5 op=UNLOAD Feb 9 09:55:02.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:03.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:03.067000 audit: BPF prog-id=12 op=UNLOAD Feb 9 09:55:03.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:03.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:03.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:55:04.561000 audit: BPF prog-id=15 op=LOAD Feb 9 09:55:04.579000 audit: BPF prog-id=16 op=LOAD Feb 9 09:55:04.598000 audit: BPF prog-id=17 op=LOAD Feb 9 09:55:04.598000 audit: BPF prog-id=13 op=UNLOAD Feb 9 09:55:04.598000 audit: BPF prog-id=14 op=UNLOAD Feb 9 09:55:04.681000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 09:55:01.426086 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:02.992410 systemd[1]: Queued start job for default target multi-user.target. Feb 9 09:55:01.427019 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:55:02.995153 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 09:55:01.427080 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:55:01.427156 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 09:55:01.427187 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 09:55:01.427284 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 09:55:01.427324 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 09:55:01.427818 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 09:55:01.427939 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 09:55:01.427980 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 09:55:01.429077 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 09:55:01.429189 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 09:55:01.429247 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 09:55:01.429323 
/usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 09:55:01.429374 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 09:55:01.429416 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 09:55:02.641192 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:55:02.641365 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:55:02.641418 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:55:02.641508 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 09:55:02.641536 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 09:55:02.641572 /usr/lib/systemd/system-generators/torcx-generator[1150]: time="2024-02-09T09:55:02Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 09:55:04.681000 audit[1260]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff1218d940 a2=4000 a3=7fff1218d9dc items=0 ppid=1 pid=1260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:55:04.681000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 09:55:04.761465 systemd[1]: Starting systemd-network-generator.service... Feb 9 09:55:04.788309 systemd[1]: Starting systemd-remount-fs.service... Feb 9 09:55:04.814296 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 09:55:04.857917 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 09:55:04.857939 systemd[1]: Stopped verity-setup.service. 
Feb 9 09:55:04.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.903312 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 09:55:04.923462 systemd[1]: Started systemd-journald.service. Feb 9 09:55:04.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.930894 systemd[1]: Mounted dev-hugepages.mount. Feb 9 09:55:04.938604 systemd[1]: Mounted dev-mqueue.mount. Feb 9 09:55:04.945525 systemd[1]: Mounted media.mount. Feb 9 09:55:04.952544 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 09:55:04.961533 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 09:55:04.970504 systemd[1]: Mounted tmp.mount. Feb 9 09:55:04.977595 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 09:55:04.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.986592 systemd[1]: Finished kmod-static-nodes.service. Feb 9 09:55:04.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:04.995629 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 09:55:04.995739 systemd[1]: Finished modprobe@configfs.service. Feb 9 09:55:05.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.004769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 09:55:05.004910 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 09:55:05.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.013857 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 09:55:05.014053 systemd[1]: Finished modprobe@drm.service. Feb 9 09:55:05.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:55:05.023087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 09:55:05.023405 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 09:55:05.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.032073 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 09:55:05.032394 systemd[1]: Finished modprobe@fuse.service. Feb 9 09:55:05.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.041052 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 09:55:05.041430 systemd[1]: Finished modprobe@loop.service. Feb 9 09:55:05.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.050089 systemd[1]: Finished systemd-modules-load.service. Feb 9 09:55:05.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.059021 systemd[1]: Finished systemd-network-generator.service. Feb 9 09:55:05.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.068164 systemd[1]: Finished systemd-remount-fs.service. Feb 9 09:55:05.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.077042 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 09:55:05.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.086600 systemd[1]: Reached target network-pre.target. Feb 9 09:55:05.098027 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 09:55:05.108897 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 09:55:05.115505 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 9 09:55:05.116545 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 09:55:05.123890 systemd[1]: Starting systemd-journal-flush.service... Feb 9 09:55:05.127396 systemd-journald[1260]: Time spent on flushing to /var/log/journal/27215dd522f74da38f166c56a2eac983 is 15.803ms for 1647 entries. Feb 9 09:55:05.127396 systemd-journald[1260]: System Journal (/var/log/journal/27215dd522f74da38f166c56a2eac983) is 8.0M, max 195.6M, 187.6M free. Feb 9 09:55:05.174994 systemd-journald[1260]: Received client request to flush runtime journal. Feb 9 09:55:05.141358 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 09:55:05.141877 systemd[1]: Starting systemd-random-seed.service... Feb 9 09:55:05.155409 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 09:55:05.155923 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:55:05.162977 systemd[1]: Starting systemd-sysusers.service... Feb 9 09:55:05.169877 systemd[1]: Starting systemd-udev-settle.service... Feb 9 09:55:05.177726 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 09:55:05.186433 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 09:55:05.194485 systemd[1]: Finished systemd-journal-flush.service. Feb 9 09:55:05.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.202485 systemd[1]: Finished systemd-random-seed.service. Feb 9 09:55:05.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.210488 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:55:05.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.218460 systemd[1]: Finished systemd-sysusers.service. Feb 9 09:55:05.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.227435 systemd[1]: Reached target first-boot-complete.target. Feb 9 09:55:05.235555 udevadm[1276]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 09:55:05.416628 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 09:55:05.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.425000 audit: BPF prog-id=18 op=LOAD Feb 9 09:55:05.425000 audit: BPF prog-id=19 op=LOAD Feb 9 09:55:05.425000 audit: BPF prog-id=7 op=UNLOAD Feb 9 09:55:05.425000 audit: BPF prog-id=8 op=UNLOAD Feb 9 09:55:05.426624 systemd[1]: Starting systemd-udevd.service... Feb 9 09:55:05.437601 systemd-udevd[1277]: Using default interface naming scheme 'v252'. Feb 9 09:55:05.456752 systemd[1]: Started systemd-udevd.service. 
Feb 9 09:55:05.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:05.466367 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 9 09:55:05.465000 audit: BPF prog-id=20 op=LOAD Feb 9 09:55:05.467506 systemd[1]: Starting systemd-networkd.service... Feb 9 09:55:05.493338 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 9 09:55:05.491000 audit: BPF prog-id=21 op=LOAD Feb 9 09:55:05.492000 audit: BPF prog-id=22 op=LOAD Feb 9 09:55:05.492000 audit: BPF prog-id=23 op=LOAD Feb 9 09:55:05.493972 systemd[1]: Starting systemd-userdbd.service... Feb 9 09:55:05.530025 kernel: ACPI: button: Sleep Button [SLPB] Feb 9 09:55:05.530063 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 09:55:05.530081 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 9 09:55:05.551269 kernel: IPMI message handler: version 39.2 Feb 9 09:55:05.551325 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1347) Feb 9 09:55:05.555295 kernel: ACPI: button: Power Button [PWRF] Feb 9 09:55:05.619013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 09:55:05.628466 systemd[1]: Started systemd-userdbd.service. Feb 9 09:55:05.493000 audit[1285]: AVC avc: denied { confidentiality } for pid=1285 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 09:55:05.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:55:05.493000 audit[1285]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5640808a6f50 a1=4d8bc a2=7f4472176bc5 a3=5 items=42 ppid=1277 pid=1285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:55:05.493000 audit: CWD cwd="/" Feb 9 09:55:05.493000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=1 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=2 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=3 name=(null) inode=14999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=4 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=5 name=(null) inode=15000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=6 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=7 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=8 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=9 name=(null) inode=15002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=10 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=11 name=(null) inode=15003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=12 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=13 name=(null) inode=15004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=14 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=15 name=(null) inode=15005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=16 name=(null) inode=15001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=17 name=(null) inode=15006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=18 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=19 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=20 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=21 name=(null) inode=15008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=22 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=23 name=(null) inode=15009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=24 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=25 name=(null) inode=15010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=26 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=27 name=(null) inode=15011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=28 name=(null) inode=15007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=29 name=(null) inode=15012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=30 name=(null) inode=14998 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=31 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=32 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=33 name=(null) inode=15014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=34 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=35 name=(null) inode=15015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=36 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=37 name=(null) inode=15016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=38 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=39 name=(null) inode=15017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=40 name=(null) inode=15013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PATH item=41 name=(null) inode=15018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 09:55:05.493000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 09:55:05.675272 kernel: ipmi device interface Feb 9 09:55:05.692272 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 9 09:55:05.692451 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 9 09:55:05.692559 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 9 09:55:05.772434 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 9 09:55:05.773266 kernel: ipmi_si: IPMI System Interface driver Feb 9 09:55:05.811158 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 9 09:55:05.811265 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 9 09:55:05.851703 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 9 09:55:05.851727 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 9 09:55:05.870992 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 9 09:55:05.911584 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 9 09:55:05.957043 systemd-networkd[1308]: bond0: netdev ready Feb 9 09:55:05.958701 kernel: ipmi_si 
dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 9 09:55:05.958806 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 9 09:55:05.958821 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 9 09:55:05.958833 kernel: iTCO_vendor_support: vendor-support=0 Feb 9 09:55:05.959372 systemd-networkd[1308]: lo: Link UP Feb 9 09:55:05.959375 systemd-networkd[1308]: lo: Gained carrier Feb 9 09:55:05.959841 systemd-networkd[1308]: Enumeration completed Feb 9 09:55:05.959914 systemd[1]: Started systemd-networkd.service. Feb 9 09:55:05.960126 systemd-networkd[1308]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 9 09:55:05.960909 systemd-networkd[1308]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:99:05.network. Feb 9 09:55:05.999264 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 9 09:55:06.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:06.066268 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Feb 9 09:55:06.066359 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Feb 9 09:55:06.117313 kernel: intel_rapl_common: Found RAPL domain package Feb 9 09:55:06.117343 kernel: intel_rapl_common: Found RAPL domain core Feb 9 09:55:06.117364 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 09:55:06.117457 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 9 09:55:06.128619 kernel: intel_rapl_common: Found RAPL domain uncore Feb 9 09:55:06.173272 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Feb 9 09:55:06.173298 kernel: intel_rapl_common: Found RAPL domain dram Feb 9 09:55:06.173312 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:55:06.200265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 9 09:55:06.256856 systemd-networkd[1308]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:99:04.network. Feb 9 09:55:06.257297 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 9 09:55:06.297324 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 9 09:55:06.645315 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Feb 9 09:55:06.669324 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Feb 9 09:55:06.671308 systemd-networkd[1308]: bond0: Link UP Feb 9 09:55:06.671709 systemd-networkd[1308]: enp2s0f1np1: Link UP Feb 9 09:55:06.671990 systemd-networkd[1308]: enp2s0f1np1: Gained carrier Feb 9 09:55:06.674067 systemd-networkd[1308]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:99:04.network. Feb 9 09:55:06.713463 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:55:06.713522 kernel: bond0: active interface up! Feb 9 09:55:06.735307 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 9 09:55:06.744526 systemd[1]: Finished systemd-udev-settle.service. 
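The networkd enumeration above assembles bond0 from the two mlx5 ports using MAC-matched unit files (05-bond0.network, 10-04:3f:72:d9:99:04.network, 10-04:3f:72:d9:99:05.network), and the kernel's repeated "No 802.3ad response from the link partner" warnings indicate an LACP bond still waiting on the switch side. The unit contents themselves are not in the log; a sketch of what such a layout typically looks like, where the .network file name is taken from the log but the .netdev name and everything inside both files are assumed:

  cat <<'EOF' >/etc/systemd/network/05-bond0.netdev
  [NetDev]
  Name=bond0
  Kind=bond

  [Bond]
  # Mode inferred from the kernel's 802.3ad (LACP) warnings above
  Mode=802.3ad
  EOF

  cat <<'EOF' >/etc/systemd/network/10-04:3f:72:d9:99:04.network
  [Match]
  MACAddress=04:3f:72:d9:99:04

  [Network]
  Bond=bond0
  EOF
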
Feb 9 09:55:06.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:06.753004 systemd[1]: Starting lvm2-activation-early.service... Feb 9 09:55:06.768384 lvm[1381]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:55:06.797684 systemd[1]: Finished lvm2-activation-early.service. Feb 9 09:55:06.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:06.805393 systemd[1]: Reached target cryptsetup.target. Feb 9 09:55:06.813939 systemd[1]: Starting lvm2-activation.service... Feb 9 09:55:06.815997 lvm[1382]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 09:55:06.858265 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.880265 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.884686 systemd[1]: Finished lvm2-activation.service. Feb 9 09:55:06.903277 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:06.920387 systemd[1]: Reached target local-fs-pre.target. Feb 9 09:55:06.926337 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.942342 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 09:55:06.942357 systemd[1]: Reached target local-fs.target. Feb 9 09:55:06.949265 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.965383 systemd[1]: Reached target machines.target. Feb 9 09:55:06.971306 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:06.987935 systemd[1]: Starting ldconfig.service... Feb 9 09:55:06.993266 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.009760 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 09:55:07.009780 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:55:07.010568 systemd[1]: Starting systemd-boot-update.service... Feb 9 09:55:07.016266 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.032947 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 09:55:07.037266 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.037314 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 09:55:07.037431 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:55:07.037490 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. 
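Both lvm2 activation passes above log "WARNING: Failed to connect to lvmetad. Falling back to device scanning." The tools are built with lvmetad support but the daemon is not running, so they scan block devices directly and succeed anyway. If the warning needed silencing, the usual route is to make the config match reality; a sketch, assuming /etc/lvm/lvm.conf is writable on this image:

  # 1 means the tools expect the metadata daemon; the log shows it isn't there
  lvmconfig global/use_lvmetad

  # Tell the tools to always scan devices instead of asking lvmetad
  sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' /etc/lvm/lvm.conf
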
Feb 9 09:55:07.038189 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:55:07.038420 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1384 (bootctl) Feb 9 09:55:07.039016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:55:07.058291 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.079264 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.090249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:55:07.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:07.099279 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.119264 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.138267 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.155539 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:55:07.157263 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.176264 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.195303 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.195594 systemd-networkd[1308]: bond0: Gained carrier Feb 9 09:55:07.195726 systemd-networkd[1308]: enp2s0f0np0: Link UP Feb 9 09:55:07.195859 systemd-networkd[1308]: enp2s0f0np0: Gained carrier Feb 9 09:55:07.227929 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Feb 9 09:55:07.227952 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Feb 9 09:55:07.229490 systemd-networkd[1308]: enp2s0f1np1: Link DOWN Feb 9 09:55:07.229493 systemd-networkd[1308]: enp2s0f1np1: Lost carrier Feb 9 09:55:07.309105 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:55:07.381298 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Feb 9 09:55:07.398323 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Feb 9 09:55:07.399613 systemd-networkd[1308]: enp2s0f1np1: Link UP Feb 9 09:55:07.399762 systemd-networkd[1308]: enp2s0f1np1: Gained carrier Feb 9 09:55:07.451288 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Feb 9 09:55:07.468314 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 9 09:55:07.555233 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:55:07.565391 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:55:07.565725 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:55:07.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:55:07.593941 systemd-fsck[1393]: fsck.fat 4.2 (2021-01-31) Feb 9 09:55:07.593941 systemd-fsck[1393]: /dev/sdb1: 789 files, 115332/258078 clusters Feb 9 09:55:07.594687 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:55:07.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:07.606162 systemd[1]: Mounting boot.mount... Feb 9 09:55:07.617963 systemd[1]: Mounted boot.mount. Feb 9 09:55:07.634962 systemd[1]: Finished systemd-boot-update.service. Feb 9 09:55:07.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:07.665069 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:55:07.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:55:07.674077 systemd[1]: Starting audit-rules.service... Feb 9 09:55:07.680894 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:55:07.689862 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:55:07.692000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:55:07.692000 audit[1413]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe645f91d0 a2=420 a3=0 items=0 ppid=1396 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:55:07.692000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:55:07.693955 augenrules[1413]: No rules Feb 9 09:55:07.699239 systemd[1]: Starting systemd-resolved.service... Feb 9 09:55:07.707205 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:55:07.714771 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:55:07.715305 systemd-networkd[1308]: bond0: Gained IPv6LL Feb 9 09:55:07.721546 systemd[1]: Finished audit-rules.service. Feb 9 09:55:07.728412 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:55:07.736399 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:55:07.747496 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:55:07.748021 systemd[1]: Finished systemd-update-utmp.service. Feb 9 09:55:07.771065 ldconfig[1383]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:55:07.773573 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:55:07.776083 systemd-resolved[1418]: Positive Trust Anchors: Feb 9 09:55:07.776089 systemd-resolved[1418]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:55:07.776109 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:55:07.779919 systemd-resolved[1418]: Using system hostname 'ci-3510.3.2-a-98b619e81b'. Feb 9 09:55:07.781429 systemd[1]: Started systemd-resolved.service. Feb 9 09:55:07.789472 systemd[1]: Finished ldconfig.service. Feb 9 09:55:07.796389 systemd[1]: Reached target network.target. Feb 9 09:55:07.804343 systemd[1]: Reached target nss-lookup.target. Feb 9 09:55:07.812338 systemd[1]: Reached target time-set.target. Feb 9 09:55:07.820952 systemd[1]: Starting systemd-update-done.service... Feb 9 09:55:07.827615 systemd[1]: Finished systemd-update-done.service. Feb 9 09:55:07.836407 systemd[1]: Reached target sysinit.target. Feb 9 09:55:07.844393 systemd[1]: Started motdgen.path. Feb 9 09:55:07.851360 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:55:07.861406 systemd[1]: Started logrotate.timer. Feb 9 09:55:07.868382 systemd[1]: Started mdadm.timer. Feb 9 09:55:07.875357 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:55:07.883337 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:55:07.883353 systemd[1]: Reached target paths.target. Feb 9 09:55:07.890337 systemd[1]: Reached target timers.target. Feb 9 09:55:07.897448 systemd[1]: Listening on dbus.socket. Feb 9 09:55:07.904799 systemd[1]: Starting docker.socket... Feb 9 09:55:07.912584 systemd[1]: Listening on sshd.socket. Feb 9 09:55:07.919402 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:55:07.919610 systemd[1]: Listening on docker.socket. Feb 9 09:55:07.926529 systemd[1]: Reached target sockets.target. Feb 9 09:55:07.934367 systemd[1]: Reached target basic.target. Feb 9 09:55:07.941394 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:55:07.941410 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:55:07.941886 systemd[1]: Starting containerd.service... Feb 9 09:55:07.948746 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:55:07.957833 systemd[1]: Starting coreos-metadata.service... Feb 9 09:55:07.964866 systemd[1]: Starting dbus.service... Feb 9 09:55:07.970803 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:55:07.975238 jq[1433]: false Feb 9 09:55:07.977251 coreos-metadata[1426]: Feb 09 09:55:07.977 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:55:07.978915 systemd[1]: Starting extend-filesystems.service... Feb 9 09:55:07.982423 dbus-daemon[1432]: [system] SELinux support is enabled Feb 9 09:55:07.985384 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). 
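The resolved startup above loads the root zone's built-in DNSSEC trust anchor (the KSK-2017 DS record) and the standard negative anchors for private and special-use domains, then derives the hostname 'ci-3510.3.2-a-98b619e81b'. systemd-resolved also reads additional positive anchors from /etc/dnssec-trust-anchors.d/*.positive, in the same zone-file syntax the log prints; a sketch that pins the identical root anchor explicitly, with the DS record copied verbatim from the log:

  mkdir -p /etc/dnssec-trust-anchors.d
  cat <<'EOF' >/etc/dnssec-trust-anchors.d/root.positive
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
  EOF
  systemctl restart systemd-resolved
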
Feb 9 09:55:07.986132 systemd[1]: Starting motdgen.service... Feb 9 09:55:07.986578 coreos-metadata[1429]: Feb 09 09:55:07.986 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 9 09:55:07.987016 extend-filesystems[1435]: Found sda Feb 9 09:55:08.016421 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Feb 9 09:55:07.993923 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb1 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb2 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb3 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found usr Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb4 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb6 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb7 Feb 9 09:55:08.016512 extend-filesystems[1435]: Found sdb9 Feb 9 09:55:08.016512 extend-filesystems[1435]: Checking size of /dev/sdb9 Feb 9 09:55:08.016512 extend-filesystems[1435]: Resized partition /dev/sdb9 Feb 9 09:55:08.161471 extend-filesystems[1450]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:55:08.024065 systemd[1]: Starting prepare-critools.service... Feb 9 09:55:08.037840 systemd[1]: Starting prepare-helm.service... Feb 9 09:55:08.044961 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:55:08.062836 systemd[1]: Starting sshd-keygen.service... Feb 9 09:55:08.084876 systemd[1]: Starting systemd-logind.service... Feb 9 09:55:08.097300 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:55:08.177849 update_engine[1465]: I0209 09:55:08.155400 1465 main.cc:92] Flatcar Update Engine starting Feb 9 09:55:08.177849 update_engine[1465]: I0209 09:55:08.158711 1465 update_check_scheduler.cc:74] Next update check in 4m24s Feb 9 09:55:08.097820 systemd[1]: Starting tcsd.service... Feb 9 09:55:08.178034 jq[1466]: true Feb 9 09:55:08.109510 systemd-logind[1463]: Watching system buttons on /dev/input/event3 (Power Button) Feb 9 09:55:08.109519 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 09:55:08.178356 tar[1468]: ./ Feb 9 09:55:08.178356 tar[1468]: ./macvlan Feb 9 09:55:08.109528 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 9 09:55:08.109605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:55:08.109636 systemd-logind[1463]: New seat seat0. Feb 9 09:55:08.110088 systemd[1]: Starting update-engine.service... Feb 9 09:55:08.123831 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:55:08.138622 systemd[1]: Started dbus.service. Feb 9 09:55:08.154950 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:55:08.155040 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:55:08.155185 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:55:08.155258 systemd[1]: Finished motdgen.service. Feb 9 09:55:08.170303 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:55:08.170381 systemd[1]: Finished ssh-key-proc-cmdline.service. 
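update_engine above comes up and schedules its first poll ("Next update check in 4m24s"); locksmithd, which starts a few seconds later in this log with strategy="reboot", is the piece that coordinates the reboot once an update has been applied. On Flatcar both are steered from /etc/flatcar/update.conf; a sketch with illustrative values, since this host's actual file is not shown:

  cat <<'EOF' >/etc/flatcar/update.conf
  # Release channel for update_engine; the strategy matches what locksmithd logs below
  GROUP=stable
  REBOOT_STRATEGY=reboot
  EOF
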
Feb 9 09:55:08.188157 jq[1474]: true Feb 9 09:55:08.188328 tar[1470]: linux-amd64/helm Feb 9 09:55:08.188759 dbus-daemon[1432]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 09:55:08.190405 tar[1469]: crictl Feb 9 09:55:08.193703 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 9 09:55:08.193833 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 9 09:55:08.195638 systemd[1]: Started systemd-logind.service. Feb 9 09:55:08.195979 tar[1468]: ./static Feb 9 09:55:08.200375 env[1475]: time="2024-02-09T09:55:08.200321460Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:55:08.208092 systemd[1]: Started update-engine.service. Feb 9 09:55:08.208818 env[1475]: time="2024-02-09T09:55:08.208801545Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:55:08.209990 env[1475]: time="2024-02-09T09:55:08.209977747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.210699 env[1475]: time="2024-02-09T09:55:08.210683389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:55:08.210733 env[1475]: time="2024-02-09T09:55:08.210699144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.212673 env[1475]: time="2024-02-09T09:55:08.212661392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:55:08.212703 env[1475]: time="2024-02-09T09:55:08.212672623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.212703 env[1475]: time="2024-02-09T09:55:08.212681053Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:55:08.212703 env[1475]: time="2024-02-09T09:55:08.212686730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.212753 env[1475]: time="2024-02-09T09:55:08.212733306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.214845 env[1475]: time="2024-02-09T09:55:08.214833764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:55:08.214919 env[1475]: time="2024-02-09T09:55:08.214908707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:55:08.214941 env[1475]: time="2024-02-09T09:55:08.214919648Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 09:55:08.214961 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:55:08.215054 env[1475]: time="2024-02-09T09:55:08.214947770Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:55:08.215054 env[1475]: time="2024-02-09T09:55:08.214957597Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:55:08.217532 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 09:55:08.220728 tar[1468]: ./vlan Feb 9 09:55:08.221246 env[1475]: time="2024-02-09T09:55:08.221235042Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:55:08.221277 env[1475]: time="2024-02-09T09:55:08.221250113Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 09:55:08.221277 env[1475]: time="2024-02-09T09:55:08.221258167Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:55:08.221315 env[1475]: time="2024-02-09T09:55:08.221284773Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221315 env[1475]: time="2024-02-09T09:55:08.221293960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221534 env[1475]: time="2024-02-09T09:55:08.221489358Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221579 env[1475]: time="2024-02-09T09:55:08.221547602Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221610 env[1475]: time="2024-02-09T09:55:08.221581695Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221610 env[1475]: time="2024-02-09T09:55:08.221600800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221651 env[1475]: time="2024-02-09T09:55:08.221641127Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221671 env[1475]: time="2024-02-09T09:55:08.221655384Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.221689 env[1475]: time="2024-02-09T09:55:08.221666286Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:55:08.221810 env[1475]: time="2024-02-09T09:55:08.221796198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 09:55:08.221876 env[1475]: time="2024-02-09T09:55:08.221865344Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:55:08.222094 env[1475]: time="2024-02-09T09:55:08.222081363Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:55:08.222116 env[1475]: time="2024-02-09T09:55:08.222106344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222132 env[1475]: time="2024-02-09T09:55:08.222122124Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 9 09:55:08.222169 env[1475]: time="2024-02-09T09:55:08.222161659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222188 env[1475]: time="2024-02-09T09:55:08.222174576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222204 env[1475]: time="2024-02-09T09:55:08.222186515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222204 env[1475]: time="2024-02-09T09:55:08.222197030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222240 env[1475]: time="2024-02-09T09:55:08.222208139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222240 env[1475]: time="2024-02-09T09:55:08.222219006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222240 env[1475]: time="2024-02-09T09:55:08.222229680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222291 env[1475]: time="2024-02-09T09:55:08.222240574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222291 env[1475]: time="2024-02-09T09:55:08.222253232Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:55:08.222355 env[1475]: time="2024-02-09T09:55:08.222346403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222374 env[1475]: time="2024-02-09T09:55:08.222360216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222393 env[1475]: time="2024-02-09T09:55:08.222371672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:55:08.222393 env[1475]: time="2024-02-09T09:55:08.222382632Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:55:08.222429 env[1475]: time="2024-02-09T09:55:08.222394750Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:55:08.222429 env[1475]: time="2024-02-09T09:55:08.222405798Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:55:08.222429 env[1475]: time="2024-02-09T09:55:08.222422838Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:55:08.222474 env[1475]: time="2024-02-09T09:55:08.222448003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:55:08.222652 env[1475]: time="2024-02-09T09:55:08.222610731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.222665761Z" level=info msg="Connect containerd service" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.222691660Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223066537Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223160545Z" level=info msg="Start subscribing containerd event" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223210469Z" level=info msg="Start recovering state" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223218012Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223245694Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223251455Z" level=info msg="Start event monitor" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223275542Z" level=info msg="Start snapshots syncer" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223283020Z" level=info msg="containerd successfully booted in 0.024286s" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223286811Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:55:08.224222 env[1475]: time="2024-02-09T09:55:08.223301823Z" level=info msg="Start streaming server" Feb 9 09:55:08.227394 systemd[1]: Started containerd.service. Feb 9 09:55:08.235987 systemd[1]: Started locksmithd.service. Feb 9 09:55:08.242407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:55:08.242473 tar[1468]: ./portmap Feb 9 09:55:08.242524 systemd[1]: Reached target system-config.target. Feb 9 09:55:08.250372 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:55:08.250491 systemd[1]: Reached target user-config.target. Feb 9 09:55:08.263164 tar[1468]: ./host-local Feb 9 09:55:08.280511 tar[1468]: ./vrf Feb 9 09:55:08.299472 tar[1468]: ./bridge Feb 9 09:55:08.299714 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:55:08.322587 tar[1468]: ./tuning Feb 9 09:55:08.340825 tar[1468]: ./firewall Feb 9 09:55:08.364790 tar[1468]: ./host-device Feb 9 09:55:08.385621 tar[1468]: ./sbr Feb 9 09:55:08.404799 tar[1468]: ./loopback Feb 9 09:55:08.422958 tar[1468]: ./dhcp Feb 9 09:55:08.442646 tar[1470]: linux-amd64/LICENSE Feb 9 09:55:08.442706 tar[1470]: linux-amd64/README.md Feb 9 09:55:08.444993 systemd[1]: Finished prepare-helm.service. Feb 9 09:55:08.453515 systemd[1]: Finished prepare-critools.service. Feb 9 09:55:08.475703 tar[1468]: ./ptp Feb 9 09:55:08.498370 tar[1468]: ./ipvlan Feb 9 09:55:08.520250 tar[1468]: ./bandwidth Feb 9 09:55:08.539265 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Feb 9 09:55:08.566586 extend-filesystems[1450]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Feb 9 09:55:08.566586 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 9 09:55:08.566586 extend-filesystems[1450]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Feb 9 09:55:08.592374 extend-filesystems[1435]: Resized filesystem in /dev/sdb9 Feb 9 09:55:08.567135 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:55:08.567221 systemd[1]: Finished extend-filesystems.service. Feb 9 09:55:08.582938 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:55:08.907093 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:55:08.918940 systemd[1]: Finished sshd-keygen.service. Feb 9 09:55:08.926088 systemd[1]: Starting issuegen.service... Feb 9 09:55:08.932556 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:55:08.932625 systemd[1]: Finished issuegen.service. Feb 9 09:55:08.940049 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:55:08.948549 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:55:08.956963 systemd[1]: Started getty@tty1.service. Feb 9 09:55:08.963911 systemd[1]: Started serial-getty@ttyS1.service. 
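The extend-filesystems sequence above is the usual first-boot root grow: the service found /dev/sdb9, resized the partition, and resize2fs 1.46.5 then grew the mounted ext4 filesystem online from 553472 to 116605649 4k blocks (which is why old_desc_blocks = 1 becomes new_desc_blocks = 56). Done by hand, the equivalent is roughly the following, assuming growpart from cloud-utils is available (Flatcar's service uses its own tooling):

  # Grow partition 9 of /dev/sdb to fill the disk, then grow ext4 in place;
  # ext4 supports online grow, so / stays mounted throughout
  growpart /dev/sdb 9
  resize2fs /dev/sdb9
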
Feb 9 09:55:08.972458 systemd[1]: Reached target getty.target. Feb 9 09:55:09.799494 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 9 09:55:13.924623 coreos-metadata[1429]: Feb 09 09:55:13.924 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:55:13.925444 coreos-metadata[1426]: Feb 09 09:55:13.924 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 9 09:55:14.000583 login[1537]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 09:55:14.000999 login[1536]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:55:14.007727 systemd[1]: Created slice user-500.slice. Feb 9 09:55:14.008230 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:55:14.009163 systemd-logind[1463]: New session 1 of user core. Feb 9 09:55:14.013508 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:55:14.014111 systemd[1]: Starting user@500.service... Feb 9 09:55:14.016332 (systemd)[1541]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:14.082693 systemd[1541]: Queued start job for default target default.target. Feb 9 09:55:14.082915 systemd[1541]: Reached target paths.target. Feb 9 09:55:14.082926 systemd[1541]: Reached target sockets.target. Feb 9 09:55:14.082934 systemd[1541]: Reached target timers.target. Feb 9 09:55:14.082940 systemd[1541]: Reached target basic.target. Feb 9 09:55:14.082958 systemd[1541]: Reached target default.target. Feb 9 09:55:14.082972 systemd[1541]: Startup finished in 63ms. Feb 9 09:55:14.083020 systemd[1]: Started user@500.service. Feb 9 09:55:14.083581 systemd[1]: Started session-1.scope. Feb 9 09:55:14.481499 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Feb 9 09:55:14.481654 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Feb 9 09:55:14.925119 coreos-metadata[1429]: Feb 09 09:55:14.924 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:55:14.925985 coreos-metadata[1426]: Feb 09 09:55:14.924 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 9 09:55:14.974773 coreos-metadata[1426]: Feb 09 09:55:14.974 INFO Fetch successful Feb 9 09:55:14.975023 coreos-metadata[1429]: Feb 09 09:55:14.974 INFO Fetch successful Feb 9 09:55:14.986062 systemd-timesyncd[1419]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). Feb 9 09:55:14.986096 systemd-timesyncd[1419]: Initial clock synchronization to Fri 2024-02-09 09:55:15.173721 UTC. Feb 9 09:55:15.000535 systemd[1]: Finished coreos-metadata.service. Feb 9 09:55:15.001237 systemd[1]: Started packet-phone-home.service. Feb 9 09:55:15.001608 unknown[1426]: wrote ssh authorized keys file for user: core Feb 9 09:55:15.001872 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 09:55:15.004337 systemd-logind[1463]: New session 2 of user core. Feb 9 09:55:15.004913 systemd[1]: Started session-2.scope. 
Feb 9 09:55:15.007566 curl[1556]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 9 09:55:15.007566 curl[1556]: Dload Upload Total Spent Left Speed Feb 9 09:55:15.012026 update-ssh-keys[1557]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:55:15.012242 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 09:55:15.012415 systemd[1]: Reached target multi-user.target. Feb 9 09:55:15.013135 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:55:15.017006 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:55:15.017083 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:55:15.017243 systemd[1]: Startup finished in 2.007s (kernel) + 19.814s (initrd) + 14.371s (userspace) = 36.193s. Feb 9 09:55:15.204311 curl[1556]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 9 09:55:15.206846 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 9 09:55:15.419921 systemd[1]: Created slice system-sshd.slice. Feb 9 09:55:15.420588 systemd[1]: Started sshd@0-86.109.11.101:22-147.75.109.163:45322.service. Feb 9 09:55:15.459041 sshd[1567]: Accepted publickey for core from 147.75.109.163 port 45322 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:55:15.460189 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:15.464038 systemd-logind[1463]: New session 3 of user core. Feb 9 09:55:15.464898 systemd[1]: Started session-3.scope. Feb 9 09:55:15.521200 systemd[1]: Started sshd@1-86.109.11.101:22-147.75.109.163:45336.service. Feb 9 09:55:15.547476 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 45336 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:55:15.548150 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:15.550550 systemd-logind[1463]: New session 4 of user core. Feb 9 09:55:15.551029 systemd[1]: Started session-4.scope. Feb 9 09:55:15.601224 sshd[1572]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:15.603107 systemd[1]: sshd@1-86.109.11.101:22-147.75.109.163:45336.service: Deactivated successfully. Feb 9 09:55:15.603532 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:55:15.604011 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Feb 9 09:55:15.604668 systemd[1]: Started sshd@2-86.109.11.101:22-147.75.109.163:45350.service. Feb 9 09:55:15.605185 systemd-logind[1463]: Removed session 4. Feb 9 09:55:15.635919 sshd[1578]: Accepted publickey for core from 147.75.109.163 port 45350 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:55:15.637443 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:15.643356 systemd-logind[1463]: New session 5 of user core. Feb 9 09:55:15.645023 systemd[1]: Started session-5.scope. Feb 9 09:55:15.712308 sshd[1578]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:15.713865 systemd[1]: sshd@2-86.109.11.101:22-147.75.109.163:45350.service: Deactivated successfully. Feb 9 09:55:15.714177 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:55:15.714584 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:55:15.715077 systemd[1]: Started sshd@3-86.109.11.101:22-147.75.109.163:45360.service. Feb 9 09:55:15.715500 systemd-logind[1463]: Removed session 5. 
Feb 9 09:55:15.743385 sshd[1584]: Accepted publickey for core from 147.75.109.163 port 45360 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:55:15.744449 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:15.748008 systemd-logind[1463]: New session 6 of user core. Feb 9 09:55:15.748846 systemd[1]: Started session-6.scope. Feb 9 09:55:15.806864 sshd[1584]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:15.808457 systemd[1]: sshd@3-86.109.11.101:22-147.75.109.163:45360.service: Deactivated successfully. Feb 9 09:55:15.808761 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 09:55:15.809066 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Feb 9 09:55:15.809634 systemd[1]: Started sshd@4-86.109.11.101:22-147.75.109.163:45372.service. Feb 9 09:55:15.810082 systemd-logind[1463]: Removed session 6. Feb 9 09:55:15.836713 sshd[1590]: Accepted publickey for core from 147.75.109.163 port 45372 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 09:55:15.837637 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:55:15.840940 systemd-logind[1463]: New session 7 of user core. Feb 9 09:55:15.841677 systemd[1]: Started session-7.scope. Feb 9 09:55:15.939820 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:55:15.940443 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:55:19.495412 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 09:55:19.499782 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:55:19.499974 systemd[1]: Reached target network-online.target. Feb 9 09:55:19.500668 systemd[1]: Starting docker.service... Feb 9 09:55:19.523046 env[1613]: time="2024-02-09T09:55:19.523017997Z" level=info msg="Starting up" Feb 9 09:55:19.523626 env[1613]: time="2024-02-09T09:55:19.523616899Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:19.523626 env[1613]: time="2024-02-09T09:55:19.523625833Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:19.523679 env[1613]: time="2024-02-09T09:55:19.523637557Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:19.523679 env[1613]: time="2024-02-09T09:55:19.523643455Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:19.524475 env[1613]: time="2024-02-09T09:55:19.524465029Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 09:55:19.524475 env[1613]: time="2024-02-09T09:55:19.524472908Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 09:55:19.524518 env[1613]: time="2024-02-09T09:55:19.524480194Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 09:55:19.524518 env[1613]: time="2024-02-09T09:55:19.524485491Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 09:55:19.539757 env[1613]: time="2024-02-09T09:55:19.539720729Z" level=info msg="Loading containers: start." Feb 9 09:55:19.670369 kernel: Initializing XFRM netlink socket Feb 9 09:55:19.731725 env[1613]: time="2024-02-09T09:55:19.731706157Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 09:55:19.776040 systemd-networkd[1308]: docker0: Link UP Feb 9 09:55:19.780407 env[1613]: time="2024-02-09T09:55:19.780363607Z" level=info msg="Loading containers: done." Feb 9 09:55:19.785929 env[1613]: time="2024-02-09T09:55:19.785884168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 09:55:19.786007 env[1613]: time="2024-02-09T09:55:19.785977945Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 09:55:19.786033 env[1613]: time="2024-02-09T09:55:19.786027281Z" level=info msg="Daemon has completed initialization" Feb 9 09:55:19.826324 systemd[1]: Started docker.service. Feb 9 09:55:19.842380 env[1613]: time="2024-02-09T09:55:19.842216649Z" level=info msg="API listen on /run/docker.sock" Feb 9 09:55:19.887964 systemd[1]: Reloading. Feb 9 09:55:19.950380 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2024-02-09T09:55:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:19.950412 /usr/lib/systemd/system-generators/torcx-generator[1766]: time="2024-02-09T09:55:19Z" level=info msg="torcx already run" Feb 9 09:55:20.046749 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:20.046762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:20.062284 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:20.118831 systemd[1]: Started kubelet.service. Feb 9 09:55:20.146869 kubelet[1823]: E0209 09:55:20.146807 1823 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:55:20.148156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:20.148257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:20.807445 env[1475]: time="2024-02-09T09:55:20.807387455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 09:55:21.533471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3744881576.mount: Deactivated successfully. 
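The docker startup above shows the daemon's containerd client dialing a unix:// gRPC scheme with the pick_first balancer, then announcing "API listen on /run/docker.sock". A minimal sketch of querying that socket from Go, assuming only the standard /version endpoint of the Docker Engine API:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route every request over the unix socket announced in the log.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host ("unix") is a placeholder; DialContext already pinned the socket.
	resp, err := client.Get("http://unix/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON that should include "Version":"20.10.23"
}
```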
Feb 9 09:55:23.098466 env[1475]: time="2024-02-09T09:55:23.098379093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:23.099470 env[1475]: time="2024-02-09T09:55:23.099429634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:23.100517 env[1475]: time="2024-02-09T09:55:23.100480924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:23.101610 env[1475]: time="2024-02-09T09:55:23.101554715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:23.102102 env[1475]: time="2024-02-09T09:55:23.102067572Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 09:55:23.107590 env[1475]: time="2024-02-09T09:55:23.107520173Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 09:55:26.095738 env[1475]: time="2024-02-09T09:55:26.095681443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:26.096304 env[1475]: time="2024-02-09T09:55:26.096285598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:26.097278 env[1475]: time="2024-02-09T09:55:26.097236618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:26.098386 env[1475]: time="2024-02-09T09:55:26.098316296Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:26.098828 env[1475]: time="2024-02-09T09:55:26.098778038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 09:55:26.104521 env[1475]: time="2024-02-09T09:55:26.104483844Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 09:55:27.814427 env[1475]: time="2024-02-09T09:55:27.814352014Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:27.815184 env[1475]: time="2024-02-09T09:55:27.815127634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:27.815966 env[1475]: 
time="2024-02-09T09:55:27.815926891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:27.816847 env[1475]: time="2024-02-09T09:55:27.816806550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:27.817240 env[1475]: time="2024-02-09T09:55:27.817189063Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 09:55:27.824093 env[1475]: time="2024-02-09T09:55:27.824034410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 09:55:28.734986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728619030.mount: Deactivated successfully. Feb 9 09:55:29.299029 env[1475]: time="2024-02-09T09:55:29.298961159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.318458 env[1475]: time="2024-02-09T09:55:29.318343998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.321687 env[1475]: time="2024-02-09T09:55:29.321581598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.324704 env[1475]: time="2024-02-09T09:55:29.324605010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.326262 env[1475]: time="2024-02-09T09:55:29.326137015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 09:55:29.344467 env[1475]: time="2024-02-09T09:55:29.344394337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 09:55:29.875611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004440549.mount: Deactivated successfully. 
Feb 9 09:55:29.877121 env[1475]: time="2024-02-09T09:55:29.877085387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.877854 env[1475]: time="2024-02-09T09:55:29.877815016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.878566 env[1475]: time="2024-02-09T09:55:29.878526620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.879182 env[1475]: time="2024-02-09T09:55:29.879141127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:29.879537 env[1475]: time="2024-02-09T09:55:29.879492998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 09:55:29.885216 env[1475]: time="2024-02-09T09:55:29.885197819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 09:55:30.216231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 09:55:30.216787 systemd[1]: Stopped kubelet.service. Feb 9 09:55:30.220192 systemd[1]: Started kubelet.service. Feb 9 09:55:30.247604 kubelet[1915]: E0209 09:55:30.247545 1915 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 09:55:30.249781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:55:30.249853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 09:55:30.588927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203205741.mount: Deactivated successfully. 
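Both kubelet starts so far (PIDs 1823 and 1915) exit with the same validation error: no container runtime endpoint was given. On a containerd host the endpoint is conventionally unix:///run/containerd/containerd.sock; that path is an assumption here, not something the log states. A reachability probe under that assumption:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Conventional containerd CRI socket; the actual path on this host is
	// an assumption — the log only says the flag was left unset.
	const sock = "/run/containerd/containerd.sock"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("CRI socket unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable; start kubelet with --container-runtime-endpoint=unix://" + sock)
}
```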
Feb 9 09:55:33.477415 env[1475]: time="2024-02-09T09:55:33.477360890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:33.478053 env[1475]: time="2024-02-09T09:55:33.478040939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:33.478896 env[1475]: time="2024-02-09T09:55:33.478881003Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:33.479664 env[1475]: time="2024-02-09T09:55:33.479653185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:33.480112 env[1475]: time="2024-02-09T09:55:33.480098408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 09:55:33.485883 env[1475]: time="2024-02-09T09:55:33.485818564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 09:55:34.119821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284504936.mount: Deactivated successfully. Feb 9 09:55:34.573690 env[1475]: time="2024-02-09T09:55:34.573667087Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:34.574382 env[1475]: time="2024-02-09T09:55:34.574357087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:34.575102 env[1475]: time="2024-02-09T09:55:34.575090706Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:34.575910 env[1475]: time="2024-02-09T09:55:34.575896761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:34.576234 env[1475]: time="2024-02-09T09:55:34.576220887Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 09:55:36.364419 systemd[1]: Stopped kubelet.service. Feb 9 09:55:36.375678 systemd[1]: Reloading. 
Feb 9 09:55:36.416096 /usr/lib/systemd/system-generators/torcx-generator[2070]: time="2024-02-09T09:55:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:36.416112 /usr/lib/systemd/system-generators/torcx-generator[2070]: time="2024-02-09T09:55:36Z" level=info msg="torcx already run" Feb 9 09:55:36.472279 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:36.472290 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:36.486067 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:36.544687 systemd[1]: Started kubelet.service. Feb 9 09:55:36.566132 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:55:36.566132 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:55:36.566132 kubelet[2128]: I0209 09:55:36.566120 2128 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:55:36.566865 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:55:36.566865 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:55:37.007362 kubelet[2128]: I0209 09:55:37.007349 2128 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:55:37.007362 kubelet[2128]: I0209 09:55:37.007361 2128 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:55:37.007513 kubelet[2128]: I0209 09:55:37.007505 2128 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:55:37.008942 kubelet[2128]: I0209 09:55:37.008902 2128 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:55:37.009371 kubelet[2128]: E0209 09:55:37.009318 2128 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://86.109.11.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.028094 kubelet[2128]: I0209 09:55:37.028084 2128 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:55:37.028196 kubelet[2128]: I0209 09:55:37.028189 2128 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:55:37.028247 kubelet[2128]: I0209 09:55:37.028240 2128 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:55:37.028362 kubelet[2128]: I0209 09:55:37.028256 2128 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:55:37.028362 kubelet[2128]: I0209 09:55:37.028269 2128 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:55:37.028423 kubelet[2128]: I0209 09:55:37.028370 2128 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:55:37.029880 kubelet[2128]: I0209 09:55:37.029872 2128 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:55:37.029911 kubelet[2128]: I0209 09:55:37.029882 2128 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:55:37.029911 kubelet[2128]: I0209 09:55:37.029894 2128 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:55:37.029911 kubelet[2128]: I0209 09:55:37.029903 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:55:37.030174 kubelet[2128]: I0209 09:55:37.030167 2128 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:55:37.030200 kubelet[2128]: W0209 09:55:37.030166 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.030226 kubelet[2128]: E0209 09:55:37.030205 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.030248 kubelet[2128]: W0209 09:55:37.030212 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98b619e81b&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.030248 kubelet[2128]: E0209 09:55:37.030237 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98b619e81b&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.030300 kubelet[2128]: W0209 09:55:37.030294 2128 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:55:37.030479 kubelet[2128]: I0209 09:55:37.030473 2128 server.go:1186] "Started kubelet" Feb 9 09:55:37.030554 kubelet[2128]: I0209 09:55:37.030543 2128 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:55:37.030748 kubelet[2128]: E0209 09:55:37.030737 2128 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:55:37.030782 kubelet[2128]: E0209 09:55:37.030704 2128 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-98b619e81b.17b22939d16f891d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-98b619e81b", UID:"ci-3510.3.2-a-98b619e81b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-98b619e81b"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 55, 37, 30461725, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 55, 37, 30461725, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://86.109.11.101:6443/api/v1/namespaces/default/events": dial tcp 86.109.11.101:6443: connect: connection refused'(may retry after sleeping) Feb 9 09:55:37.030782 kubelet[2128]: E0209 09:55:37.030753 2128 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:55:37.031061 kubelet[2128]: I0209 09:55:37.031029 2128 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:55:37.040317 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 09:55:37.040393 kubelet[2128]: I0209 09:55:37.040355 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:55:37.040446 kubelet[2128]: I0209 09:55:37.040397 2128 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:55:37.040476 kubelet[2128]: I0209 09:55:37.040446 2128 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:55:37.040505 kubelet[2128]: E0209 09:55:37.040480 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:37.040631 kubelet[2128]: W0209 09:55:37.040602 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://86.109.11.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.040669 kubelet[2128]: E0209 09:55:37.040640 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://86.109.11.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.040693 kubelet[2128]: E0209 09:55:37.040644 2128 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98b619e81b?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.054318 kubelet[2128]: I0209 09:55:37.054290 2128 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:55:37.054318 kubelet[2128]: I0209 09:55:37.054318 2128 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:55:37.054413 kubelet[2128]: I0209 09:55:37.054329 2128 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:55:37.055190 kubelet[2128]: I0209 09:55:37.055181 2128 policy_none.go:49] "None policy: Start" Feb 9 09:55:37.055424 kubelet[2128]: I0209 09:55:37.055417 2128 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:55:37.055458 kubelet[2128]: I0209 09:55:37.055426 2128 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:55:37.057756 systemd[1]: Created slice kubepods.slice. Feb 9 09:55:37.059689 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:55:37.059865 kubelet[2128]: I0209 09:55:37.059837 2128 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:55:37.071004 kubelet[2128]: I0209 09:55:37.070962 2128 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:55:37.071004 kubelet[2128]: I0209 09:55:37.070973 2128 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:55:37.071004 kubelet[2128]: I0209 09:55:37.070985 2128 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:55:37.071082 kubelet[2128]: E0209 09:55:37.071009 2128 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:55:37.071242 kubelet[2128]: W0209 09:55:37.071221 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://86.109.11.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.071271 kubelet[2128]: E0209 09:55:37.071249 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://86.109.11.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.083434 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:55:37.086275 kubelet[2128]: I0209 09:55:37.086182 2128 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:55:37.086832 kubelet[2128]: I0209 09:55:37.086788 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:55:37.087577 kubelet[2128]: E0209 09:55:37.087530 2128 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:37.145444 kubelet[2128]: I0209 09:55:37.145352 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.146148 kubelet[2128]: E0209 09:55:37.146076 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.171520 kubelet[2128]: I0209 09:55:37.171394 2128 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:37.175343 kubelet[2128]: I0209 09:55:37.175287 2128 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:37.178767 kubelet[2128]: I0209 09:55:37.178726 2128 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:37.179397 kubelet[2128]: I0209 09:55:37.179332 2128 status_manager.go:698] "Failed to get status for pod" podUID=7631c0a3b5cce3027ca0468355aef5af pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" err="Get \"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-98b619e81b\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 09:55:37.183258 kubelet[2128]: I0209 09:55:37.183209 2128 status_manager.go:698] "Failed to get status for pod" podUID=7e5f53d959b32160b286e68ca5b496bd pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" err="Get \"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-98b619e81b\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 09:55:37.186781 kubelet[2128]: I0209 09:55:37.186729 2128 status_manager.go:698] "Failed to get status for pod" podUID=b709a3653644a303478682c4f945157d pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" err="Get 
\"https://86.109.11.101:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-98b619e81b\": dial tcp 86.109.11.101:6443: connect: connection refused" Feb 9 09:55:37.191717 systemd[1]: Created slice kubepods-burstable-pod7631c0a3b5cce3027ca0468355aef5af.slice. Feb 9 09:55:37.222536 systemd[1]: Created slice kubepods-burstable-pod7e5f53d959b32160b286e68ca5b496bd.slice. Feb 9 09:55:37.242292 kubelet[2128]: E0209 09:55:37.242166 2128 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98b619e81b?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.245358 systemd[1]: Created slice kubepods-burstable-podb709a3653644a303478682c4f945157d.slice. Feb 9 09:55:37.342595 kubelet[2128]: I0209 09:55:37.342374 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.342595 kubelet[2128]: I0209 09:55:37.342478 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.342987 kubelet[2128]: I0209 09:55:37.342642 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.342987 kubelet[2128]: I0209 09:55:37.342815 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.342987 kubelet[2128]: I0209 09:55:37.342933 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.343480 kubelet[2128]: I0209 09:55:37.343028 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.343480 kubelet[2128]: I0209 09:55:37.343104 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.343480 kubelet[2128]: I0209 09:55:37.343217 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.343480 kubelet[2128]: I0209 09:55:37.343372 2128 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b709a3653644a303478682c4f945157d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-98b619e81b\" (UID: \"b709a3653644a303478682c4f945157d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.350536 kubelet[2128]: I0209 09:55:37.350496 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.351285 kubelet[2128]: E0209 09:55:37.351187 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.518071 env[1475]: time="2024-02-09T09:55:37.517935419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-98b619e81b,Uid:7631c0a3b5cce3027ca0468355aef5af,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:37.541340 env[1475]: time="2024-02-09T09:55:37.541184638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-98b619e81b,Uid:7e5f53d959b32160b286e68ca5b496bd,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:37.550586 env[1475]: time="2024-02-09T09:55:37.550482786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-98b619e81b,Uid:b709a3653644a303478682c4f945157d,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:37.643143 kubelet[2128]: E0209 09:55:37.642926 2128 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://86.109.11.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-98b619e81b?timeout=10s": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.756890 kubelet[2128]: I0209 09:55:37.756811 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.758699 kubelet[2128]: E0209 09:55:37.758645 2128 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://86.109.11.101:6443/api/v1/nodes\": dial tcp 86.109.11.101:6443: connect: connection refused" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:37.888774 kubelet[2128]: W0209 09:55:37.888599 2128 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98b619e81b&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:37.888774 kubelet[2128]: E0209 09:55:37.888739 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://86.109.11.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-98b619e81b&limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:38.067691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629812209.mount: Deactivated successfully. Feb 9 09:55:38.069149 env[1475]: time="2024-02-09T09:55:38.069127463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.069733 env[1475]: time="2024-02-09T09:55:38.069722469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.070873 env[1475]: time="2024-02-09T09:55:38.070862323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.071264 env[1475]: time="2024-02-09T09:55:38.071227738Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.072438 env[1475]: time="2024-02-09T09:55:38.072397683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.073607 env[1475]: time="2024-02-09T09:55:38.073565506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.075142 env[1475]: time="2024-02-09T09:55:38.075100862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.076743 env[1475]: time="2024-02-09T09:55:38.076704958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.077140 env[1475]: time="2024-02-09T09:55:38.077102522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.077542 env[1475]: time="2024-02-09T09:55:38.077507992Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.077909 env[1475]: time="2024-02-09T09:55:38.077855997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.078444 env[1475]: time="2024-02-09T09:55:38.078431793Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:55:38.078564 kubelet[2128]: W0209 09:55:38.078539 2128 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:38.078594 kubelet[2128]: E0209 09:55:38.078571 2128 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://86.109.11.101:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 86.109.11.101:6443: connect: connection refused Feb 9 09:55:38.083485 env[1475]: time="2024-02-09T09:55:38.083452530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:38.083485 env[1475]: time="2024-02-09T09:55:38.083474018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:38.083485 env[1475]: time="2024-02-09T09:55:38.083483452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:38.083594 env[1475]: time="2024-02-09T09:55:38.083550853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/020a6d6dccc7aaf7f46b5353d148fac260f026f4e8b1dce5337dd688da81fb21 pid=2215 runtime=io.containerd.runc.v2 Feb 9 09:55:38.085597 env[1475]: time="2024-02-09T09:55:38.085557050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:38.085597 env[1475]: time="2024-02-09T09:55:38.085580597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:38.085597 env[1475]: time="2024-02-09T09:55:38.085590823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:38.085719 env[1475]: time="2024-02-09T09:55:38.085633028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:38.085719 env[1475]: time="2024-02-09T09:55:38.085649834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:38.085719 env[1475]: time="2024-02-09T09:55:38.085655849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cca371d11cb89d9fb2a983e7adf495fe1ee5233bcb107ae021d3f2ec19c1e358 pid=2238 runtime=io.containerd.runc.v2 Feb 9 09:55:38.085719 env[1475]: time="2024-02-09T09:55:38.085656968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:38.085788 env[1475]: time="2024-02-09T09:55:38.085719959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cd1b476de8e3e7ea0a0d93c40891a93c91b20890922f02e24ff42bb8149db51 pid=2240 runtime=io.containerd.runc.v2 Feb 9 09:55:38.102828 systemd[1]: Started cri-containerd-020a6d6dccc7aaf7f46b5353d148fac260f026f4e8b1dce5337dd688da81fb21.scope. Feb 9 09:55:38.105353 systemd[1]: Started cri-containerd-4cd1b476de8e3e7ea0a0d93c40891a93c91b20890922f02e24ff42bb8149db51.scope. 
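Until the API server answers on 86.109.11.101:6443, the lease controller entries above back off geometrically: retry in 200ms, then 400ms, then 800ms. A sketch of that doubling pattern, with tryEnsureLease as a hypothetical stand-in for the real kubelet call:

```go
package main

import (
	"fmt"
	"time"
)

// tryEnsureLease is a hypothetical stand-in for the kubelet's lease-creation
// call, which is failing above with "connection refused".
func tryEnsureLease() bool { return false }

func main() {
	delay := 200 * time.Millisecond // first interval reported in the log
	for attempt := 1; attempt <= 3; attempt++ {
		if tryEnsureLease() {
			return
		}
		fmt.Printf("failed to ensure lease exists, will retry in %v\n", delay)
		time.Sleep(delay)
		delay *= 2 // 200ms -> 400ms -> 800ms, matching the entries above
	}
}
```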
Feb 9 09:55:38.106174 systemd[1]: Started cri-containerd-cca371d11cb89d9fb2a983e7adf495fe1ee5233bcb107ae021d3f2ec19c1e358.scope. Feb 9 09:55:38.127416 env[1475]: time="2024-02-09T09:55:38.127384247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-98b619e81b,Uid:b709a3653644a303478682c4f945157d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cca371d11cb89d9fb2a983e7adf495fe1ee5233bcb107ae021d3f2ec19c1e358\"" Feb 9 09:55:38.128895 env[1475]: time="2024-02-09T09:55:38.128882133Z" level=info msg="CreateContainer within sandbox \"cca371d11cb89d9fb2a983e7adf495fe1ee5233bcb107ae021d3f2ec19c1e358\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 09:55:38.133482 env[1475]: time="2024-02-09T09:55:38.133433810Z" level=info msg="CreateContainer within sandbox \"cca371d11cb89d9fb2a983e7adf495fe1ee5233bcb107ae021d3f2ec19c1e358\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1cc193a81481c44f203c46767fd7d7cf7c9e692dcc8be568a9c3c263f811053b\"" Feb 9 09:55:38.133660 env[1475]: time="2024-02-09T09:55:38.133619674Z" level=info msg="StartContainer for \"1cc193a81481c44f203c46767fd7d7cf7c9e692dcc8be568a9c3c263f811053b\"" Feb 9 09:55:38.139785 env[1475]: time="2024-02-09T09:55:38.139730642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-98b619e81b,Uid:7e5f53d959b32160b286e68ca5b496bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"020a6d6dccc7aaf7f46b5353d148fac260f026f4e8b1dce5337dd688da81fb21\"" Feb 9 09:55:38.140280 env[1475]: time="2024-02-09T09:55:38.140266693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-98b619e81b,Uid:7631c0a3b5cce3027ca0468355aef5af,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd1b476de8e3e7ea0a0d93c40891a93c91b20890922f02e24ff42bb8149db51\"" Feb 9 09:55:38.140950 env[1475]: time="2024-02-09T09:55:38.140938141Z" level=info msg="CreateContainer within sandbox \"020a6d6dccc7aaf7f46b5353d148fac260f026f4e8b1dce5337dd688da81fb21\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 09:55:38.141161 env[1475]: time="2024-02-09T09:55:38.141150202Z" level=info msg="CreateContainer within sandbox \"4cd1b476de8e3e7ea0a0d93c40891a93c91b20890922f02e24ff42bb8149db51\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 09:55:38.146168 env[1475]: time="2024-02-09T09:55:38.146125568Z" level=info msg="CreateContainer within sandbox \"020a6d6dccc7aaf7f46b5353d148fac260f026f4e8b1dce5337dd688da81fb21\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23b28fbf83b2b78f290e9dabab8e0e23ede2261c3382b71afc43be4c0d575f82\"" Feb 9 09:55:38.146428 env[1475]: time="2024-02-09T09:55:38.146386466Z" level=info msg="StartContainer for \"23b28fbf83b2b78f290e9dabab8e0e23ede2261c3382b71afc43be4c0d575f82\"" Feb 9 09:55:38.147075 env[1475]: time="2024-02-09T09:55:38.147032690Z" level=info msg="CreateContainer within sandbox \"4cd1b476de8e3e7ea0a0d93c40891a93c91b20890922f02e24ff42bb8149db51\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"75cc8fab9c859bda796ee492d67f32a0d7e40678228cd1ff2b199ba8e443ca56\"" Feb 9 09:55:38.147213 env[1475]: time="2024-02-09T09:55:38.147201448Z" level=info msg="StartContainer for \"75cc8fab9c859bda796ee492d67f32a0d7e40678228cd1ff2b199ba8e443ca56\"" Feb 9 09:55:38.153127 systemd[1]: Started 
cri-containerd-1cc193a81481c44f203c46767fd7d7cf7c9e692dcc8be568a9c3c263f811053b.scope. Feb 9 09:55:38.155048 systemd[1]: Started cri-containerd-23b28fbf83b2b78f290e9dabab8e0e23ede2261c3382b71afc43be4c0d575f82.scope. Feb 9 09:55:38.155595 systemd[1]: Started cri-containerd-75cc8fab9c859bda796ee492d67f32a0d7e40678228cd1ff2b199ba8e443ca56.scope. Feb 9 09:55:38.184709 env[1475]: time="2024-02-09T09:55:38.184680864Z" level=info msg="StartContainer for \"1cc193a81481c44f203c46767fd7d7cf7c9e692dcc8be568a9c3c263f811053b\" returns successfully" Feb 9 09:55:38.184866 env[1475]: time="2024-02-09T09:55:38.184850911Z" level=info msg="StartContainer for \"23b28fbf83b2b78f290e9dabab8e0e23ede2261c3382b71afc43be4c0d575f82\" returns successfully" Feb 9 09:55:38.192011 env[1475]: time="2024-02-09T09:55:38.191987334Z" level=info msg="StartContainer for \"75cc8fab9c859bda796ee492d67f32a0d7e40678228cd1ff2b199ba8e443ca56\" returns successfully" Feb 9 09:55:38.560352 kubelet[2128]: I0209 09:55:38.560282 2128 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:38.960009 kubelet[2128]: E0209 09:55:38.959945 2128 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-98b619e81b\" not found" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:39.060544 kubelet[2128]: I0209 09:55:39.060467 2128 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:39.084486 kubelet[2128]: E0209 09:55:39.084450 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.185214 kubelet[2128]: E0209 09:55:39.185121 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.285701 kubelet[2128]: E0209 09:55:39.285598 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.386362 kubelet[2128]: E0209 09:55:39.386240 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.487352 kubelet[2128]: E0209 09:55:39.487238 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.588013 kubelet[2128]: E0209 09:55:39.587803 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.688913 kubelet[2128]: E0209 09:55:39.688809 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.789101 kubelet[2128]: E0209 09:55:39.788964 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.889913 kubelet[2128]: E0209 09:55:39.889770 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:39.990901 kubelet[2128]: E0209 09:55:39.990841 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.091049 kubelet[2128]: E0209 09:55:40.090981 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.192247 
kubelet[2128]: E0209 09:55:40.192058 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.293106 kubelet[2128]: E0209 09:55:40.293027 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.393534 kubelet[2128]: E0209 09:55:40.393423 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.494291 kubelet[2128]: E0209 09:55:40.494203 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.595270 kubelet[2128]: E0209 09:55:40.595221 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:40.696095 kubelet[2128]: E0209 09:55:40.696009 2128 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:41.032415 kubelet[2128]: I0209 09:55:41.032273 2128 apiserver.go:52] "Watching apiserver" Feb 9 09:55:41.040919 kubelet[2128]: I0209 09:55:41.040875 2128 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:55:41.068440 kubelet[2128]: I0209 09:55:41.068393 2128 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:55:42.219355 systemd[1]: Reloading. Feb 9 09:55:42.284438 /usr/lib/systemd/system-generators/torcx-generator[2499]: time="2024-02-09T09:55:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:55:42.284464 /usr/lib/systemd/system-generators/torcx-generator[2499]: time="2024-02-09T09:55:42Z" level=info msg="torcx already run" Feb 9 09:55:42.354308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:55:42.354319 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:55:42.369093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:55:42.436371 systemd[1]: Stopping kubelet.service... Feb 9 09:55:42.451729 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 09:55:42.451831 systemd[1]: Stopped kubelet.service. Feb 9 09:55:42.452770 systemd[1]: Started kubelet.service. Feb 9 09:55:42.475258 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:55:42.475258 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 09:55:42.475451 kubelet[2556]: I0209 09:55:42.475248 2556 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:55:42.476111 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 09:55:42.476111 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:55:42.477815 kubelet[2556]: I0209 09:55:42.477806 2556 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 09:55:42.477815 kubelet[2556]: I0209 09:55:42.477816 2556 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:55:42.477927 kubelet[2556]: I0209 09:55:42.477922 2556 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 09:55:42.479216 kubelet[2556]: I0209 09:55:42.479207 2556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 09:55:42.479541 kubelet[2556]: I0209 09:55:42.479533 2556 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:55:42.496865 kubelet[2556]: I0209 09:55:42.496813 2556 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 09:55:42.496922 kubelet[2556]: I0209 09:55:42.496904 2556 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:55:42.496950 kubelet[2556]: I0209 09:55:42.496943 2556 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 09:55:42.497009 kubelet[2556]: I0209 09:55:42.496955 2556 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 09:55:42.497009 kubelet[2556]: I0209 09:55:42.496962 2556 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 09:55:42.497009 kubelet[2556]: I0209 09:55:42.496981 2556 state_mem.go:36] "Initialized new in-memory state 
store" Feb 9 09:55:42.498218 kubelet[2556]: I0209 09:55:42.498211 2556 kubelet.go:398] "Attempting to sync node with API server" Feb 9 09:55:42.498218 kubelet[2556]: I0209 09:55:42.498221 2556 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:55:42.498273 kubelet[2556]: I0209 09:55:42.498235 2556 kubelet.go:297] "Adding apiserver pod source" Feb 9 09:55:42.498273 kubelet[2556]: I0209 09:55:42.498243 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:55:42.498529 kubelet[2556]: I0209 09:55:42.498514 2556 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:55:42.498827 kubelet[2556]: I0209 09:55:42.498819 2556 server.go:1186] "Started kubelet" Feb 9 09:55:42.498864 kubelet[2556]: I0209 09:55:42.498856 2556 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:55:42.499047 kubelet[2556]: E0209 09:55:42.499037 2556 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:55:42.499093 kubelet[2556]: E0209 09:55:42.499055 2556 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:55:42.499485 kubelet[2556]: I0209 09:55:42.499478 2556 server.go:451] "Adding debug handlers to kubelet server" Feb 9 09:55:42.499527 kubelet[2556]: I0209 09:55:42.499520 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:55:42.499591 kubelet[2556]: I0209 09:55:42.499574 2556 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 09:55:42.499591 kubelet[2556]: E0209 09:55:42.499588 2556 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-98b619e81b\" not found" Feb 9 09:55:42.499656 kubelet[2556]: I0209 09:55:42.499601 2556 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:55:42.511216 kubelet[2556]: I0209 09:55:42.511203 2556 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 09:55:42.517535 kubelet[2556]: I0209 09:55:42.517488 2556 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 09:55:42.517535 kubelet[2556]: I0209 09:55:42.517502 2556 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 09:55:42.517535 kubelet[2556]: I0209 09:55:42.517513 2556 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 09:55:42.517639 kubelet[2556]: E0209 09:55:42.517542 2556 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 09:55:42.518997 kubelet[2556]: I0209 09:55:42.518987 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:55:42.518997 kubelet[2556]: I0209 09:55:42.518994 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:55:42.519058 kubelet[2556]: I0209 09:55:42.519003 2556 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:55:42.519088 kubelet[2556]: I0209 09:55:42.519084 2556 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 09:55:42.519119 kubelet[2556]: I0209 09:55:42.519092 2556 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 09:55:42.519119 kubelet[2556]: I0209 09:55:42.519095 2556 policy_none.go:49] "None policy: Start" Feb 9 09:55:42.519342 kubelet[2556]: I0209 09:55:42.519337 2556 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:55:42.519364 kubelet[2556]: I0209 09:55:42.519346 2556 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:55:42.519406 kubelet[2556]: I0209 09:55:42.519402 2556 state_mem.go:75] "Updated machine memory state" Feb 9 09:55:42.521099 kubelet[2556]: I0209 09:55:42.521091 2556 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:55:42.521204 kubelet[2556]: I0209 09:55:42.521198 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:55:42.566464 sudo[2620]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 09:55:42.567334 sudo[2620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 09:55:42.606293 kubelet[2556]: I0209 09:55:42.606206 2556 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.615618 kubelet[2556]: I0209 09:55:42.615575 2556 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.615754 kubelet[2556]: I0209 09:55:42.615646 2556 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.617854 kubelet[2556]: I0209 09:55:42.617809 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:42.617946 kubelet[2556]: I0209 09:55:42.617896 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:42.617946 kubelet[2556]: I0209 09:55:42.617938 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:42.622809 kubelet[2556]: E0209 09:55:42.622760 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.701754 kubelet[2556]: E0209 09:55:42.701736 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801539 kubelet[2556]: I0209 09:55:42.801467 2556 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801539 kubelet[2556]: I0209 09:55:42.801516 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801539 kubelet[2556]: I0209 09:55:42.801533 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801715 kubelet[2556]: I0209 09:55:42.801555 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801715 kubelet[2556]: I0209 09:55:42.801571 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801715 kubelet[2556]: I0209 09:55:42.801601 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801715 kubelet[2556]: I0209 09:55:42.801625 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e5f53d959b32160b286e68ca5b496bd-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" (UID: \"7e5f53d959b32160b286e68ca5b496bd\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801715 kubelet[2556]: I0209 09:55:42.801660 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b709a3653644a303478682c4f945157d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-98b619e81b\" (UID: \"b709a3653644a303478682c4f945157d\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.801822 kubelet[2556]: I0209 09:55:42.801691 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7631c0a3b5cce3027ca0468355aef5af-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" (UID: \"7631c0a3b5cce3027ca0468355aef5af\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.903209 kubelet[2556]: E0209 09:55:42.903098 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:42.942999 sudo[2620]: pam_unix(sudo:session): session closed for user root Feb 9 09:55:43.499065 kubelet[2556]: I0209 09:55:43.498952 2556 apiserver.go:52] "Watching apiserver" Feb 9 09:55:43.600935 kubelet[2556]: I0209 09:55:43.600822 2556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:55:43.606242 kubelet[2556]: I0209 09:55:43.606145 2556 reconciler.go:41] "Reconciler: start to sync state" Feb 9 09:55:43.777690 sudo[1593]: pam_unix(sudo:session): session closed for user root Feb 9 09:55:43.778580 sshd[1590]: pam_unix(sshd:session): session closed for user core Feb 9 09:55:43.780187 systemd[1]: sshd@4-86.109.11.101:22-147.75.109.163:45372.service: Deactivated successfully. Feb 9 09:55:43.780671 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 09:55:43.780773 systemd[1]: session-7.scope: Consumed 3.125s CPU time. Feb 9 09:55:43.781118 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Feb 9 09:55:43.781829 systemd-logind[1463]: Removed session 7. Feb 9 09:55:43.907306 kubelet[2556]: E0209 09:55:43.907190 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:44.107668 kubelet[2556]: E0209 09:55:44.107473 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:44.307891 kubelet[2556]: E0209 09:55:44.307794 2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-98b619e81b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" Feb 9 09:55:44.508514 kubelet[2556]: I0209 09:55:44.508497 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-98b619e81b" podStartSLOduration=3.508471194 pod.CreationTimestamp="2024-02-09 09:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:55:44.508460167 +0000 UTC m=+2.053975272" watchObservedRunningTime="2024-02-09 09:55:44.508471194 +0000 UTC m=+2.053986300" Feb 9 09:55:44.907437 kubelet[2556]: I0209 09:55:44.907322 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-98b619e81b" podStartSLOduration=3.907273648 pod.CreationTimestamp="2024-02-09 09:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:55:44.907244333 +0000 UTC m=+2.452759438" watchObservedRunningTime="2024-02-09 09:55:44.907273648 +0000 UTC m=+2.452788751" Feb 9 09:55:45.307845 kubelet[2556]: I0209 09:55:45.307786 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-98b619e81b" podStartSLOduration=4.307767204 pod.CreationTimestamp="2024-02-09 09:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:55:45.307755557 +0000 UTC m=+2.853270663" watchObservedRunningTime="2024-02-09 09:55:45.307767204 +0000 UTC m=+2.853282306" Feb 9 09:55:53.833632 update_engine[1465]: I0209 09:55:53.833555 1465 update_attempter.cc:509] Updating boot flags... Feb 9 09:55:54.926819 kubelet[2556]: I0209 09:55:54.926720 2556 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 09:55:54.927698 env[1475]: time="2024-02-09T09:55:54.927461969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:55:54.928393 kubelet[2556]: I0209 09:55:54.927949 2556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 09:55:55.616657 kubelet[2556]: I0209 09:55:55.616543 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:55.623907 kubelet[2556]: I0209 09:55:55.623839 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:55.633854 systemd[1]: Created slice kubepods-besteffort-podfbc2fe45_c7b8_494e_93a8_b3764b695420.slice. Feb 9 09:55:55.648778 systemd[1]: Created slice kubepods-burstable-pod1d5b0bd3_f29a_44fd_a05b_dfd8c6871991.slice. Feb 9 09:55:55.692004 kubelet[2556]: I0209 09:55:55.691983 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbc2fe45-c7b8-494e-93a8-b3764b695420-xtables-lock\") pod \"kube-proxy-qb6d9\" (UID: \"fbc2fe45-c7b8-494e-93a8-b3764b695420\") " pod="kube-system/kube-proxy-qb6d9" Feb 9 09:55:55.692004 kubelet[2556]: I0209 09:55:55.692008 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-bpf-maps\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692180 kubelet[2556]: I0209 09:55:55.692021 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cni-path\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692180 kubelet[2556]: I0209 09:55:55.692042 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-clustermesh-secrets\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692180 kubelet[2556]: I0209 09:55:55.692056 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-config-path\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692180 kubelet[2556]: I0209 09:55:55.692074 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/fbc2fe45-c7b8-494e-93a8-b3764b695420-kube-proxy\") pod \"kube-proxy-qb6d9\" (UID: \"fbc2fe45-c7b8-494e-93a8-b3764b695420\") " pod="kube-system/kube-proxy-qb6d9" Feb 9 09:55:55.692180 kubelet[2556]: I0209 09:55:55.692107 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-run\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692147 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpbz\" (UniqueName: \"kubernetes.io/projected/fbc2fe45-c7b8-494e-93a8-b3764b695420-kube-api-access-7xpbz\") pod \"kube-proxy-qb6d9\" (UID: \"fbc2fe45-c7b8-494e-93a8-b3764b695420\") " pod="kube-system/kube-proxy-qb6d9" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692170 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-kernel\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692191 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-etc-cni-netd\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692215 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-cgroup\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692238 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hubble-tls\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692372 kubelet[2556]: I0209 09:55:55.692253 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692259 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hostproc\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692281 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-lib-modules\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692314 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-xtables-lock\") 
pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692336 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c7r\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-kube-api-access-l9c7r\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692364 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbc2fe45-c7b8-494e-93a8-b3764b695420-lib-modules\") pod \"kube-proxy-qb6d9\" (UID: \"fbc2fe45-c7b8-494e-93a8-b3764b695420\") " pod="kube-system/kube-proxy-qb6d9" Feb 9 09:55:55.692524 kubelet[2556]: I0209 09:55:55.692395 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-net\") pod \"cilium-mjgbv\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " pod="kube-system/cilium-mjgbv" Feb 9 09:55:55.695170 systemd[1]: Created slice kubepods-besteffort-poda007eab5_e549_415b_b496_abdcf31db7d3.slice. Feb 9 09:55:55.793482 kubelet[2556]: I0209 09:55:55.793355 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc4r7\" (UniqueName: \"kubernetes.io/projected/a007eab5-e549-415b-b496-abdcf31db7d3-kube-api-access-rc4r7\") pod \"cilium-operator-f59cbd8c6-px6t5\" (UID: \"a007eab5-e549-415b-b496-abdcf31db7d3\") " pod="kube-system/cilium-operator-f59cbd8c6-px6t5" Feb 9 09:55:55.794394 kubelet[2556]: I0209 09:55:55.794333 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a007eab5-e549-415b-b496-abdcf31db7d3-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-px6t5\" (UID: \"a007eab5-e549-415b-b496-abdcf31db7d3\") " pod="kube-system/cilium-operator-f59cbd8c6-px6t5" Feb 9 09:55:56.248761 env[1475]: time="2024-02-09T09:55:56.248636007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qb6d9,Uid:fbc2fe45-c7b8-494e-93a8-b3764b695420,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:56.251978 env[1475]: time="2024-02-09T09:55:56.251870568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjgbv,Uid:1d5b0bd3-f29a-44fd-a05b-dfd8c6871991,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:56.266378 env[1475]: time="2024-02-09T09:55:56.266277943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.266378 env[1475]: time="2024-02-09T09:55:56.266337815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.266378 env[1475]: time="2024-02-09T09:55:56.266369295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.266575 env[1475]: time="2024-02-09T09:55:56.266486909Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a pid=2760 runtime=io.containerd.runc.v2 Feb 9 09:55:56.280272 env[1475]: time="2024-02-09T09:55:56.280229493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.280272 env[1475]: time="2024-02-09T09:55:56.280251724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.280272 env[1475]: time="2024-02-09T09:55:56.280264621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.280431 env[1475]: time="2024-02-09T09:55:56.280361761Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36 pid=2782 runtime=io.containerd.runc.v2 Feb 9 09:55:56.284488 systemd[1]: Started cri-containerd-6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a.scope. Feb 9 09:55:56.300021 systemd[1]: Started cri-containerd-62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36.scope. Feb 9 09:55:56.309948 env[1475]: time="2024-02-09T09:55:56.309908718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qb6d9,Uid:fbc2fe45-c7b8-494e-93a8-b3764b695420,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a\"" Feb 9 09:55:56.311919 env[1475]: time="2024-02-09T09:55:56.311890589Z" level=info msg="CreateContainer within sandbox \"6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:55:56.319461 env[1475]: time="2024-02-09T09:55:56.319396605Z" level=info msg="CreateContainer within sandbox \"6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c95ca7937796f85c24e1ae30e17f89eed44916f305666c7c899db1e736332b7\"" Feb 9 09:55:56.319835 env[1475]: time="2024-02-09T09:55:56.319800567Z" level=info msg="StartContainer for \"1c95ca7937796f85c24e1ae30e17f89eed44916f305666c7c899db1e736332b7\"" Feb 9 09:55:56.330170 env[1475]: time="2024-02-09T09:55:56.330134485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjgbv,Uid:1d5b0bd3-f29a-44fd-a05b-dfd8c6871991,Namespace:kube-system,Attempt:0,} returns sandbox id \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\"" Feb 9 09:55:56.331443 env[1475]: time="2024-02-09T09:55:56.331414465Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:55:56.333023 systemd[1]: Started cri-containerd-1c95ca7937796f85c24e1ae30e17f89eed44916f305666c7c899db1e736332b7.scope. 
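Each sandbox id returned by RunPodSandbox above doubles as the suffix of a cri-containerd-<id>.scope unit, so the log can be cross-checked from the node. A sketch using crictl, with ids copied from the entries above:

    crictl pods --name kube-proxy-qb6d9          # should list sandbox 6ffa560c3727d1...
    crictl ps -a --pod 6ffa560c3727d12a4693de40f0a33a44a7c9507dea6ea01beed52ee82463708a
    crictl inspect 1c95ca7937796f85c24e1ae30e17f89eed44916f305666c7c899db1e736332b7
    systemctl status 'cri-containerd-6ffa560c*'  # the runc v2 shim scope started above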
Feb 9 09:55:56.368836 env[1475]: time="2024-02-09T09:55:56.368756845Z" level=info msg="StartContainer for \"1c95ca7937796f85c24e1ae30e17f89eed44916f305666c7c899db1e736332b7\" returns successfully" Feb 9 09:55:56.598586 env[1475]: time="2024-02-09T09:55:56.597487155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-px6t5,Uid:a007eab5-e549-415b-b496-abdcf31db7d3,Namespace:kube-system,Attempt:0,}" Feb 9 09:55:56.622875 env[1475]: time="2024-02-09T09:55:56.622678804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:55:56.622875 env[1475]: time="2024-02-09T09:55:56.622829814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:55:56.623430 env[1475]: time="2024-02-09T09:55:56.622899014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:55:56.623430 env[1475]: time="2024-02-09T09:55:56.623309003Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7 pid=2942 runtime=io.containerd.runc.v2 Feb 9 09:55:56.658241 systemd[1]: Started cri-containerd-7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7.scope. Feb 9 09:55:56.711478 env[1475]: time="2024-02-09T09:55:56.711447668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-px6t5,Uid:a007eab5-e549-415b-b496-abdcf31db7d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\"" Feb 9 09:55:56.831002 kubelet[2556]: I0209 09:55:56.830955 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qb6d9" podStartSLOduration=1.83089396 pod.CreationTimestamp="2024-02-09 09:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:55:56.830402691 +0000 UTC m=+14.375917842" watchObservedRunningTime="2024-02-09 09:55:56.83089396 +0000 UTC m=+14.376409094" Feb 9 09:56:00.394741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370439329.mount: Deactivated successfully. 
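The cilium image reference being pulled above carries both a tag and a digest; when both are present the runtime resolves by the digest, so the sha256 pin is what actually gets verified and the tag is informational. A sketch for reproducing and checking the pull (flags may vary by crictl version):

    crictl pull quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
    crictl images --digests | grep cilium        # stored digest should match the pin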
Feb 9 09:56:02.145007 env[1475]: time="2024-02-09T09:56:02.144980092Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:02.145563 env[1475]: time="2024-02-09T09:56:02.145550481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:02.146925 env[1475]: time="2024-02-09T09:56:02.146878049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:02.147222 env[1475]: time="2024-02-09T09:56:02.147165800Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 09:56:02.147778 env[1475]: time="2024-02-09T09:56:02.147700011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 09:56:02.148416 env[1475]: time="2024-02-09T09:56:02.148403445Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:56:02.152923 env[1475]: time="2024-02-09T09:56:02.152878663Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\"" Feb 9 09:56:02.153179 env[1475]: time="2024-02-09T09:56:02.153168580Z" level=info msg="StartContainer for \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\"" Feb 9 09:56:02.173848 systemd[1]: Started cri-containerd-8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4.scope. Feb 9 09:56:02.205187 env[1475]: time="2024-02-09T09:56:02.205112293Z" level=info msg="StartContainer for \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\" returns successfully" Feb 9 09:56:02.210638 systemd[1]: cri-containerd-8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4.scope: Deactivated successfully. Feb 9 09:56:03.156537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4-rootfs.mount: Deactivated successfully. 
Feb 9 09:56:03.341099 env[1475]: time="2024-02-09T09:56:03.340968297Z" level=info msg="shim disconnected" id=8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4 Feb 9 09:56:03.341099 env[1475]: time="2024-02-09T09:56:03.341063906Z" level=warning msg="cleaning up after shim disconnected" id=8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4 namespace=k8s.io Feb 9 09:56:03.341099 env[1475]: time="2024-02-09T09:56:03.341090533Z" level=info msg="cleaning up dead shim" Feb 9 09:56:03.369829 env[1475]: time="2024-02-09T09:56:03.369743052Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3072 runtime=io.containerd.runc.v2\n" Feb 9 09:56:03.588086 env[1475]: time="2024-02-09T09:56:03.588036305Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:56:03.592922 env[1475]: time="2024-02-09T09:56:03.592873496Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\"" Feb 9 09:56:03.593145 env[1475]: time="2024-02-09T09:56:03.593126104Z" level=info msg="StartContainer for \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\"" Feb 9 09:56:03.593737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2892678247.mount: Deactivated successfully. Feb 9 09:56:03.614641 systemd[1]: Started cri-containerd-1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee.scope. Feb 9 09:56:03.648064 env[1475]: time="2024-02-09T09:56:03.647944977Z" level=info msg="StartContainer for \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\" returns successfully" Feb 9 09:56:03.677471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:56:03.678250 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:56:03.678748 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:56:03.682478 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:56:03.683640 systemd[1]: cri-containerd-1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee.scope: Deactivated successfully. Feb 9 09:56:03.699752 systemd[1]: Finished systemd-sysctl.service. Feb 9 09:56:03.745394 env[1475]: time="2024-02-09T09:56:03.745292080Z" level=info msg="shim disconnected" id=1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee Feb 9 09:56:03.745394 env[1475]: time="2024-02-09T09:56:03.745386752Z" level=warning msg="cleaning up after shim disconnected" id=1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee namespace=k8s.io Feb 9 09:56:03.745849 env[1475]: time="2024-02-09T09:56:03.745416894Z" level=info msg="cleaning up dead shim" Feb 9 09:56:03.773307 env[1475]: time="2024-02-09T09:56:03.773180203Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3134 runtime=io.containerd.runc.v2\n" Feb 9 09:56:04.152390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee-rootfs.mount: Deactivated successfully. 
Feb 9 09:56:04.389829 env[1475]: time="2024-02-09T09:56:04.389777507Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.390436 env[1475]: time="2024-02-09T09:56:04.390397630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.391185 env[1475]: time="2024-02-09T09:56:04.391146338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:56:04.391572 env[1475]: time="2024-02-09T09:56:04.391515985Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 09:56:04.392688 env[1475]: time="2024-02-09T09:56:04.392673369Z" level=info msg="CreateContainer within sandbox \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 09:56:04.398266 env[1475]: time="2024-02-09T09:56:04.398246079Z" level=info msg="CreateContainer within sandbox \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\"" Feb 9 09:56:04.398775 env[1475]: time="2024-02-09T09:56:04.398759933Z" level=info msg="StartContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\"" Feb 9 09:56:04.419755 systemd[1]: Started cri-containerd-a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e.scope. 
Feb 9 09:56:04.431985 env[1475]: time="2024-02-09T09:56:04.431961621Z" level=info msg="StartContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" returns successfully" Feb 9 09:56:04.587865 env[1475]: time="2024-02-09T09:56:04.587735059Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:56:04.603104 kubelet[2556]: I0209 09:56:04.602986 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-px6t5" podStartSLOduration=-9.223372027251892e+09 pod.CreationTimestamp="2024-02-09 09:55:55 +0000 UTC" firstStartedPulling="2024-02-09 09:55:56.712089045 +0000 UTC m=+14.257604157" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:04.601987167 +0000 UTC m=+22.147502348" watchObservedRunningTime="2024-02-09 09:56:04.60288365 +0000 UTC m=+22.148398807" Feb 9 09:56:04.612564 env[1475]: time="2024-02-09T09:56:04.612436762Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\"" Feb 9 09:56:04.613407 env[1475]: time="2024-02-09T09:56:04.613338344Z" level=info msg="StartContainer for \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\"" Feb 9 09:56:04.649337 systemd[1]: Started cri-containerd-db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db.scope. Feb 9 09:56:04.690669 env[1475]: time="2024-02-09T09:56:04.690554245Z" level=info msg="StartContainer for \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\" returns successfully" Feb 9 09:56:04.692444 systemd[1]: cri-containerd-db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db.scope: Deactivated successfully. 
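The impossible-looking podStartSLOduration above is almost certainly arithmetic saturation rather than a five-hundred-year startup: lastFinishedPulling is still the Go zero time, so the duration computed from it bottoms out at math.MinInt64 nanoseconds. A quick check of the magnitude:

    -9223372036854775808 ns = -9.223372036854776e+09 s

which, to within the few seconds of real elapsed time folded in, matches the -9.223372027251892e+09 printed above.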
Feb 9 09:56:04.870041 env[1475]: time="2024-02-09T09:56:04.869979253Z" level=info msg="shim disconnected" id=db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db Feb 9 09:56:04.870041 env[1475]: time="2024-02-09T09:56:04.870013829Z" level=warning msg="cleaning up after shim disconnected" id=db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db namespace=k8s.io Feb 9 09:56:04.870041 env[1475]: time="2024-02-09T09:56:04.870022455Z" level=info msg="cleaning up dead shim" Feb 9 09:56:04.886313 env[1475]: time="2024-02-09T09:56:04.886283814Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3239 runtime=io.containerd.runc.v2\n" Feb 9 09:56:05.596192 env[1475]: time="2024-02-09T09:56:05.596099089Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:56:05.607675 env[1475]: time="2024-02-09T09:56:05.607653678Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\"" Feb 9 09:56:05.607971 env[1475]: time="2024-02-09T09:56:05.607922969Z" level=info msg="StartContainer for \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\"" Feb 9 09:56:05.616340 systemd[1]: Started cri-containerd-f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c.scope. Feb 9 09:56:05.627495 systemd[1]: cri-containerd-f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c.scope: Deactivated successfully. Feb 9 09:56:05.627890 env[1475]: time="2024-02-09T09:56:05.627871048Z" level=info msg="StartContainer for \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\" returns successfully" Feb 9 09:56:05.637737 env[1475]: time="2024-02-09T09:56:05.637708268Z" level=info msg="shim disconnected" id=f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c Feb 9 09:56:05.637850 env[1475]: time="2024-02-09T09:56:05.637737253Z" level=warning msg="cleaning up after shim disconnected" id=f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c namespace=k8s.io Feb 9 09:56:05.637850 env[1475]: time="2024-02-09T09:56:05.637746013Z" level=info msg="cleaning up dead shim" Feb 9 09:56:05.641499 env[1475]: time="2024-02-09T09:56:05.641482232Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:56:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3293 runtime=io.containerd.runc.v2\n" Feb 9 09:56:06.156774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c-rootfs.mount: Deactivated successfully. 
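Taken together, these entries show Cilium's usual init chain running to completion inside the cilium-mjgbv pod: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, then clean-cilium-state, each scope deactivating as its container exits, before the cilium-agent container proper starts. The "shim disconnected" / "cleaning up dead shim" warnings are the normal teardown of run-to-completion containers, not failures. A sketch for confirming the same from the API side, assuming kubectl access:

    kubectl -n kube-system get pod cilium-mjgbv -o \
      jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state.terminated.reason}{"\n"}{end}'
    # each init container should report Completed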
Feb 9 09:56:06.594691 env[1475]: time="2024-02-09T09:56:06.594669434Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:56:06.600914 env[1475]: time="2024-02-09T09:56:06.600891010Z" level=info msg="CreateContainer within sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\"" Feb 9 09:56:06.601240 env[1475]: time="2024-02-09T09:56:06.601224128Z" level=info msg="StartContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\"" Feb 9 09:56:06.628836 systemd[1]: Started cri-containerd-057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75.scope. Feb 9 09:56:06.655311 env[1475]: time="2024-02-09T09:56:06.655285675Z" level=info msg="StartContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" returns successfully" Feb 9 09:56:06.708350 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:56:06.746719 kubelet[2556]: I0209 09:56:06.746699 2556 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:56:06.757105 kubelet[2556]: I0209 09:56:06.757085 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:06.757739 kubelet[2556]: I0209 09:56:06.757725 2556 topology_manager.go:210] "Topology Admit Handler" Feb 9 09:56:06.760319 systemd[1]: Created slice kubepods-burstable-pode87d14d0_d2c2_4c1a_a26c_76cab561e426.slice. Feb 9 09:56:06.762216 systemd[1]: Created slice kubepods-burstable-pod89e2d4d8_7e11_452d_b893_5279ceca19c5.slice. Feb 9 09:56:06.838273 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
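The Spectre V2 warning fires because unprivileged eBPF is still enabled on this kernel; Cilium loads its BPF programs with privileges, so unprivileged BPF can usually be switched off without affecting it. A sketch (verify against your own workloads first):

    sysctl kernel.unprivileged_bpf_disabled       # 0 here, which is what triggers the warning
    sysctl -w kernel.unprivileged_bpf_disabled=1  # 1 = off and locked until reboot; 2 = off but reversible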
Feb 9 09:56:06.871553 kubelet[2556]: I0209 09:56:06.871501 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v572h\" (UniqueName: \"kubernetes.io/projected/e87d14d0-d2c2-4c1a-a26c-76cab561e426-kube-api-access-v572h\") pod \"coredns-787d4945fb-c8zxm\" (UID: \"e87d14d0-d2c2-4c1a-a26c-76cab561e426\") " pod="kube-system/coredns-787d4945fb-c8zxm" Feb 9 09:56:06.871553 kubelet[2556]: I0209 09:56:06.871532 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7nq9\" (UniqueName: \"kubernetes.io/projected/89e2d4d8-7e11-452d-b893-5279ceca19c5-kube-api-access-h7nq9\") pod \"coredns-787d4945fb-4r69r\" (UID: \"89e2d4d8-7e11-452d-b893-5279ceca19c5\") " pod="kube-system/coredns-787d4945fb-4r69r" Feb 9 09:56:06.871553 kubelet[2556]: I0209 09:56:06.871546 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e87d14d0-d2c2-4c1a-a26c-76cab561e426-config-volume\") pod \"coredns-787d4945fb-c8zxm\" (UID: \"e87d14d0-d2c2-4c1a-a26c-76cab561e426\") " pod="kube-system/coredns-787d4945fb-c8zxm" Feb 9 09:56:06.871697 kubelet[2556]: I0209 09:56:06.871627 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e2d4d8-7e11-452d-b893-5279ceca19c5-config-volume\") pod \"coredns-787d4945fb-4r69r\" (UID: \"89e2d4d8-7e11-452d-b893-5279ceca19c5\") " pod="kube-system/coredns-787d4945fb-4r69r" Feb 9 09:56:07.063177 env[1475]: time="2024-02-09T09:56:07.063042936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-c8zxm,Uid:e87d14d0-d2c2-4c1a-a26c-76cab561e426,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:07.065125 env[1475]: time="2024-02-09T09:56:07.065003754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4r69r,Uid:89e2d4d8-7e11-452d-b893-5279ceca19c5,Namespace:kube-system,Attempt:0,}" Feb 9 09:56:07.622901 kubelet[2556]: I0209 09:56:07.622884 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mjgbv" podStartSLOduration=-9.223372024231945e+09 pod.CreationTimestamp="2024-02-09 09:55:55 +0000 UTC" firstStartedPulling="2024-02-09 09:55:56.331024391 +0000 UTC m=+13.876539516" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:07.622546162 +0000 UTC m=+25.168061267" watchObservedRunningTime="2024-02-09 09:56:07.622830259 +0000 UTC m=+25.168345361" Feb 9 09:56:08.442922 systemd-networkd[1308]: cilium_host: Link UP Feb 9 09:56:08.443091 systemd-networkd[1308]: cilium_net: Link UP Feb 9 09:56:08.443095 systemd-networkd[1308]: cilium_net: Gained carrier Feb 9 09:56:08.443294 systemd-networkd[1308]: cilium_host: Gained carrier Feb 9 09:56:08.451290 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:56:08.451460 systemd-networkd[1308]: cilium_host: Gained IPv6LL Feb 9 09:56:08.514695 systemd-networkd[1308]: cilium_vxlan: Link UP Feb 9 09:56:08.514699 systemd-networkd[1308]: cilium_vxlan: Gained carrier Feb 9 09:56:08.648348 kernel: NET: Registered PF_ALG protocol family Feb 9 09:56:08.900363 systemd-networkd[1308]: cilium_net: Gained IPv6LL Feb 9 09:56:09.191150 systemd-networkd[1308]: lxc_health: Link UP Feb 9 09:56:09.211195 systemd-networkd[1308]: lxc_health: Gained carrier Feb 9 09:56:09.211320 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:56:09.614146 systemd-networkd[1308]: lxcdde1f96eeffd: Link UP Feb 9 09:56:09.641270 kernel: eth0: renamed from tmp7ee9b Feb 9 09:56:09.664304 kernel: eth0: renamed from tmp0435a Feb 9 09:56:09.675719 systemd-networkd[1308]: lxc2d37b61b8965: Link UP Feb 9 09:56:09.689782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:56:09.689837 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdde1f96eeffd: link becomes ready Feb 9 09:56:09.690269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 09:56:09.703948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d37b61b8965: link becomes ready Feb 9 09:56:09.704207 systemd-networkd[1308]: lxcdde1f96eeffd: Gained carrier Feb 9 09:56:09.704347 systemd-networkd[1308]: lxc2d37b61b8965: Gained carrier Feb 9 09:56:09.923398 systemd-networkd[1308]: cilium_vxlan: Gained IPv6LL Feb 9 09:56:10.435410 systemd-networkd[1308]: lxc_health: Gained IPv6LL Feb 9 09:56:11.139405 systemd-networkd[1308]: lxcdde1f96eeffd: Gained IPv6LL Feb 9 09:56:11.267393 systemd-networkd[1308]: lxc2d37b61b8965: Gained IPv6LL Feb 9 09:56:12.014071 env[1475]: time="2024-02-09T09:56:12.014040535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:12.014071 env[1475]: time="2024-02-09T09:56:12.014060410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:12.014071 env[1475]: time="2024-02-09T09:56:12.014067582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:12.014071 env[1475]: time="2024-02-09T09:56:12.014061041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:56:12.014350 env[1475]: time="2024-02-09T09:56:12.014077334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:56:12.014350 env[1475]: time="2024-02-09T09:56:12.014084428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:56:12.014350 env[1475]: time="2024-02-09T09:56:12.014128856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ee9b0b66aeb27b4a9d43d60d4fa79b36212c368c27c2206626ee0bc2f678e9f pid=3979 runtime=io.containerd.runc.v2 Feb 9 09:56:12.014350 env[1475]: time="2024-02-09T09:56:12.014137473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0435a4b83572b21b6547b1b4daa14630745a17618319612f87eb2f2ad668eb7e pid=3981 runtime=io.containerd.runc.v2 Feb 9 09:56:12.022861 systemd[1]: Started cri-containerd-0435a4b83572b21b6547b1b4daa14630745a17618319612f87eb2f2ad668eb7e.scope. Feb 9 09:56:12.031841 systemd[1]: Started cri-containerd-7ee9b0b66aeb27b4a9d43d60d4fa79b36212c368c27c2206626ee0bc2f678e9f.scope. 
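The interface names above are Cilium's standard datapath devices: cilium_host/cilium_net are a veth pair used for host-to-pod routing, cilium_vxlan is the overlay device, and each lxc* link is the host end of a pod's veth, with its peer renamed from tmp* to eth0 as it moves into the pod's network namespace (lxc_health backs the agent's health endpoint). A sketch for listing them from the node:

    ip -br link show type veth    # cilium_host/cilium_net pair plus the per-pod lxc* links
    ip -d link show cilium_vxlan  # overlay device details (VNI, port)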
Feb 9 09:56:12.055042 env[1475]: time="2024-02-09T09:56:12.055015617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-c8zxm,Uid:e87d14d0-d2c2-4c1a-a26c-76cab561e426,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ee9b0b66aeb27b4a9d43d60d4fa79b36212c368c27c2206626ee0bc2f678e9f\"" Feb 9 09:56:12.056179 env[1475]: time="2024-02-09T09:56:12.056164828Z" level=info msg="CreateContainer within sandbox \"7ee9b0b66aeb27b4a9d43d60d4fa79b36212c368c27c2206626ee0bc2f678e9f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:12.057701 env[1475]: time="2024-02-09T09:56:12.057685696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-4r69r,Uid:89e2d4d8-7e11-452d-b893-5279ceca19c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0435a4b83572b21b6547b1b4daa14630745a17618319612f87eb2f2ad668eb7e\"" Feb 9 09:56:12.059502 env[1475]: time="2024-02-09T09:56:12.059487472Z" level=info msg="CreateContainer within sandbox \"0435a4b83572b21b6547b1b4daa14630745a17618319612f87eb2f2ad668eb7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 09:56:12.061068 env[1475]: time="2024-02-09T09:56:12.061051943Z" level=info msg="CreateContainer within sandbox \"7ee9b0b66aeb27b4a9d43d60d4fa79b36212c368c27c2206626ee0bc2f678e9f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c651bf1a04425e45442a5905fe7b34ac11debf8667452205939dada3db69e76e\"" Feb 9 09:56:12.061294 env[1475]: time="2024-02-09T09:56:12.061278508Z" level=info msg="StartContainer for \"c651bf1a04425e45442a5905fe7b34ac11debf8667452205939dada3db69e76e\"" Feb 9 09:56:12.064080 env[1475]: time="2024-02-09T09:56:12.064032966Z" level=info msg="CreateContainer within sandbox \"0435a4b83572b21b6547b1b4daa14630745a17618319612f87eb2f2ad668eb7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e13b1ea44a9e6caec1b6cfe06109bb2ca8068bbd581b2055d8605c30ad7371d0\"" Feb 9 09:56:12.064328 env[1475]: time="2024-02-09T09:56:12.064278382Z" level=info msg="StartContainer for \"e13b1ea44a9e6caec1b6cfe06109bb2ca8068bbd581b2055d8605c30ad7371d0\"" Feb 9 09:56:12.081161 systemd[1]: Started cri-containerd-c651bf1a04425e45442a5905fe7b34ac11debf8667452205939dada3db69e76e.scope. Feb 9 09:56:12.099171 systemd[1]: Started cri-containerd-e13b1ea44a9e6caec1b6cfe06109bb2ca8068bbd581b2055d8605c30ad7371d0.scope. 
Feb 9 09:56:12.130606 env[1475]: time="2024-02-09T09:56:12.130576555Z" level=info msg="StartContainer for \"c651bf1a04425e45442a5905fe7b34ac11debf8667452205939dada3db69e76e\" returns successfully" Feb 9 09:56:12.130799 env[1475]: time="2024-02-09T09:56:12.130775303Z" level=info msg="StartContainer for \"e13b1ea44a9e6caec1b6cfe06109bb2ca8068bbd581b2055d8605c30ad7371d0\" returns successfully" Feb 9 09:56:12.633388 kubelet[2556]: I0209 09:56:12.633289 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-4r69r" podStartSLOduration=17.633187687 pod.CreationTimestamp="2024-02-09 09:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:12.632406496 +0000 UTC m=+30.177921675" watchObservedRunningTime="2024-02-09 09:56:12.633187687 +0000 UTC m=+30.178702835" Feb 9 09:56:12.676734 kubelet[2556]: I0209 09:56:12.676670 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-c8zxm" podStartSLOduration=17.676581561 pod.CreationTimestamp="2024-02-09 09:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:56:12.675820065 +0000 UTC m=+30.221335250" watchObservedRunningTime="2024-02-09 09:56:12.676581561 +0000 UTC m=+30.222096709" Feb 9 09:56:14.246375 kubelet[2556]: I0209 09:56:14.246302 2556 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 09:58:44.762108 systemd[1]: Started sshd@5-86.109.11.101:22-85.209.11.226:50081.service. Feb 9 09:58:45.746397 sshd[4231]: Invalid user monitor from 85.209.11.226 port 50081 Feb 9 09:58:45.939623 sshd[4231]: pam_faillock(sshd:auth): User unknown Feb 9 09:58:45.940807 sshd[4231]: pam_unix(sshd:auth): check pass; user unknown Feb 9 09:58:45.940957 sshd[4231]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=85.209.11.226 Feb 9 09:58:45.941989 sshd[4231]: pam_faillock(sshd:auth): User unknown Feb 9 09:58:47.593946 sshd[4231]: Failed password for invalid user monitor from 85.209.11.226 port 50081 ssh2 Feb 9 09:58:48.911388 sshd[4231]: Received disconnect from 85.209.11.226 port 50081:11: Client disconnecting normally [preauth] Feb 9 09:58:48.911388 sshd[4231]: Disconnected from invalid user monitor 85.209.11.226 port 50081 [preauth] Feb 9 09:58:48.913902 systemd[1]: sshd@5-86.109.11.101:22-85.209.11.226:50081.service: Deactivated successfully. Feb 9 09:59:32.913975 update_engine[1465]: I0209 09:59:32.913861 1465 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 09:59:32.913975 update_engine[1465]: I0209 09:59:32.913941 1465 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 09:59:32.915310 update_engine[1465]: I0209 09:59:32.914790 1465 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 09:59:32.915770 update_engine[1465]: I0209 09:59:32.915688 1465 omaha_request_params.cc:62] Current group set to lts Feb 9 09:59:32.916059 update_engine[1465]: I0209 09:59:32.915973 1465 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 09:59:32.916059 update_engine[1465]: I0209 09:59:32.915992 1465 update_attempter.cc:643] Scheduling an action processor start. 
Feb 9 09:59:32.916059 update_engine[1465]: I0209 09:59:32.916025 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 09:59:32.916508 update_engine[1465]: I0209 09:59:32.916088 1465 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 09:59:32.916508 update_engine[1465]: I0209 09:59:32.916225 1465 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 09:59:32.916508 update_engine[1465]: I0209 09:59:32.916242 1465 omaha_request_action.cc:271] Request: Feb 9 09:59:32.916508 update_engine[1465]: [Omaha request XML omitted] Feb 9 09:59:32.916508 update_engine[1465]: I0209 09:59:32.916253 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:32.917594 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 09:59:32.919378 update_engine[1465]: I0209 09:59:32.919296 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:32.919560 update_engine[1465]: E0209 09:59:32.919521 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:32.919695 update_engine[1465]: I0209 09:59:32.919677 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 09:59:42.833945 update_engine[1465]: I0209 09:59:42.833736 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:42.834867 update_engine[1465]: I0209 09:59:42.834208 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:42.834867 update_engine[1465]: E0209 09:59:42.834445 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:42.834867 update_engine[1465]: I0209 09:59:42.834618 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 09:59:52.833901 update_engine[1465]: I0209 09:59:52.833779 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 09:59:52.834857 update_engine[1465]: I0209 09:59:52.834246 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 09:59:52.834857 update_engine[1465]: E0209 09:59:52.834481 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 09:59:52.834857 update_engine[1465]: I0209 09:59:52.834656 1465 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 9 10:00:02.833904 update_engine[1465]: I0209 10:00:02.833781 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834251 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 10:00:02.834861 update_engine[1465]: E0209 10:00:02.834501 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834652 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834668 1465 omaha_request_action.cc:621] Omaha request response: Feb 9 10:00:02.834861 update_engine[1465]: E0209 10:00:02.834811 1465 omaha_request_action.cc:640] Omaha request network transfer failed.
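"Could not resolve host: disabled" is literal: the Omaha endpoint has been set to the string "disabled", so every update check fails DNS resolution by design. On Flatcar this typically comes from update.conf; a sketch of the relevant setting and of silencing the recurring errors entirely:

    # /etc/flatcar/update.conf
    #   SERVER=disabled
    #   GROUP=lts           # matches "Current group set to lts" above
    systemctl mask update-engine.service locksmithd.service  # optional: stop the retry loop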
Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834841 1465 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834849 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 10:00:02.834861 update_engine[1465]: I0209 10:00:02.834858 1465 update_attempter.cc:306] Processing Done. Feb 9 10:00:02.835670 update_engine[1465]: E0209 10:00:02.834887 1465 update_attempter.cc:619] Update failed. Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.834897 1465 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.834905 1465 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.834914 1465 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.835065 1465 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.835116 1465 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.835126 1465 omaha_request_action.cc:271] Request: Feb 9 10:00:02.835670 update_engine[1465]: [Omaha request XML body not captured] Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.835136 1465 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 10:00:02.835670 update_engine[1465]: I0209 10:00:02.835469 1465 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 10:00:02.835670 update_engine[1465]: E0209 10:00:02.835631 1465 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835763 1465 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835778 1465 omaha_request_action.cc:621] Omaha request response: Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835788 1465 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835796 1465 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835803 1465 update_attempter.cc:306] Processing Done. Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835811 1465 update_attempter.cc:310] Error event sent. Feb 9 10:00:02.837075 update_engine[1465]: I0209 10:00:02.835832 1465 update_check_scheduler.cc:74] Next update check in 46m16s Feb 9 10:00:02.837729 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 9 10:00:02.837729 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 9 10:01:52.840862 systemd[1]: Started sshd@6-86.109.11.101:22-147.75.109.163:46318.service.
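The "Could not resolve host: disabled" failures above are by design, not a network fault: the update server on this host is the literal string "disabled", so every Omaha check fails name resolution, update_engine retries three times at 10 s intervals, reports the error event, and reschedules ("Next update check in 46m16s"). A minimal sketch of the configuration that produces this, assuming the stock Flatcar update.conf mechanism (the GROUP value matches the "Current group set to lts" line above):

    # /etc/flatcar/update.conf (sketch)
    GROUP=lts
    SERVER=disabled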
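The password probe from 85.209.11.226 at 09:58 (invalid user "monitor") never authenticated, while every session below logs in with the same RSA public key for user "core". A common hardening step consistent with that setup, sketched against stock OpenSSH options and not taken from this host's actual config, is to refuse password logins entirely:

    # /etc/ssh/sshd_config drop-in (sketch)
    PasswordAuthentication no
    KbdInteractiveAuthentication no
    PermitRootLogin no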
Feb 9 10:01:52.907160 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 46318 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:01:52.910576 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:52.921960 systemd-logind[1463]: New session 8 of user core. Feb 9 10:01:52.924493 systemd[1]: Started session-8.scope. Feb 9 10:01:53.090917 sshd[4259]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:53.093218 systemd[1]: sshd@6-86.109.11.101:22-147.75.109.163:46318.service: Deactivated successfully. Feb 9 10:01:53.093904 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 10:01:53.094572 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Feb 9 10:01:53.095466 systemd-logind[1463]: Removed session 8. Feb 9 10:01:58.100429 systemd[1]: Started sshd@7-86.109.11.101:22-147.75.109.163:57206.service. Feb 9 10:01:58.127717 sshd[4288]: Accepted publickey for core from 147.75.109.163 port 57206 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:01:58.128620 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:01:58.131895 systemd-logind[1463]: New session 9 of user core. Feb 9 10:01:58.132553 systemd[1]: Started session-9.scope. Feb 9 10:01:58.218242 sshd[4288]: pam_unix(sshd:session): session closed for user core Feb 9 10:01:58.219728 systemd[1]: sshd@7-86.109.11.101:22-147.75.109.163:57206.service: Deactivated successfully. Feb 9 10:01:58.220144 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 10:01:58.220552 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Feb 9 10:01:58.221045 systemd-logind[1463]: Removed session 9. Feb 9 10:02:03.223362 systemd[1]: Started sshd@8-86.109.11.101:22-147.75.109.163:57214.service. Feb 9 10:02:03.252800 sshd[4316]: Accepted publickey for core from 147.75.109.163 port 57214 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:03.253658 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:03.256598 systemd-logind[1463]: New session 10 of user core. Feb 9 10:02:03.257228 systemd[1]: Started session-10.scope. Feb 9 10:02:03.342772 sshd[4316]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:03.344271 systemd[1]: sshd@8-86.109.11.101:22-147.75.109.163:57214.service: Deactivated successfully. Feb 9 10:02:03.344720 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 10:02:03.345122 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Feb 9 10:02:03.345759 systemd-logind[1463]: Removed session 10. Feb 9 10:02:08.353470 systemd[1]: Started sshd@9-86.109.11.101:22-147.75.109.163:58220.service. Feb 9 10:02:08.383826 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 58220 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:08.384791 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:08.387748 systemd-logind[1463]: New session 11 of user core. Feb 9 10:02:08.388354 systemd[1]: Started session-11.scope. Feb 9 10:02:08.473166 sshd[4343]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:08.474718 systemd[1]: sshd@9-86.109.11.101:22-147.75.109.163:58220.service: Deactivated successfully. Feb 9 10:02:08.475153 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 10:02:08.475635 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. 
Feb 9 10:02:08.476164 systemd-logind[1463]: Removed session 11. Feb 9 10:02:13.476429 systemd[1]: Started sshd@10-86.109.11.101:22-147.75.109.163:58230.service. Feb 9 10:02:13.503378 sshd[4369]: Accepted publickey for core from 147.75.109.163 port 58230 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:13.504165 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:13.507044 systemd-logind[1463]: New session 12 of user core. Feb 9 10:02:13.507640 systemd[1]: Started session-12.scope. Feb 9 10:02:13.595887 sshd[4369]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:13.597652 systemd[1]: sshd@10-86.109.11.101:22-147.75.109.163:58230.service: Deactivated successfully. Feb 9 10:02:13.597970 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 10:02:13.598256 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Feb 9 10:02:13.598844 systemd[1]: Started sshd@11-86.109.11.101:22-147.75.109.163:58240.service. Feb 9 10:02:13.599172 systemd-logind[1463]: Removed session 12. Feb 9 10:02:13.625976 sshd[4395]: Accepted publickey for core from 147.75.109.163 port 58240 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:13.626786 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:13.629812 systemd-logind[1463]: New session 13 of user core. Feb 9 10:02:13.630405 systemd[1]: Started session-13.scope. Feb 9 10:02:14.149625 sshd[4395]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:14.151427 systemd[1]: sshd@11-86.109.11.101:22-147.75.109.163:58240.service: Deactivated successfully. Feb 9 10:02:14.151994 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 10:02:14.152312 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Feb 9 10:02:14.153054 systemd[1]: Started sshd@12-86.109.11.101:22-147.75.109.163:58242.service. Feb 9 10:02:14.153527 systemd-logind[1463]: Removed session 13. Feb 9 10:02:14.181336 sshd[4419]: Accepted publickey for core from 147.75.109.163 port 58242 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:14.182180 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:14.184546 systemd-logind[1463]: New session 14 of user core. Feb 9 10:02:14.185141 systemd[1]: Started session-14.scope. Feb 9 10:02:14.311273 sshd[4419]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:14.312803 systemd[1]: sshd@12-86.109.11.101:22-147.75.109.163:58242.service: Deactivated successfully. Feb 9 10:02:14.313234 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 10:02:14.313677 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Feb 9 10:02:14.314170 systemd-logind[1463]: Removed session 14. Feb 9 10:02:19.320799 systemd[1]: Started sshd@13-86.109.11.101:22-147.75.109.163:52142.service. Feb 9 10:02:19.347826 sshd[4446]: Accepted publickey for core from 147.75.109.163 port 52142 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:19.348759 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:19.352005 systemd-logind[1463]: New session 15 of user core. Feb 9 10:02:19.352708 systemd[1]: Started session-15.scope. 
Feb 9 10:02:19.442807 sshd[4446]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:19.444257 systemd[1]: sshd@13-86.109.11.101:22-147.75.109.163:52142.service: Deactivated successfully. Feb 9 10:02:19.444691 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 10:02:19.445089 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Feb 9 10:02:19.445725 systemd-logind[1463]: Removed session 15. Feb 9 10:02:24.452062 systemd[1]: Started sshd@14-86.109.11.101:22-147.75.109.163:51302.service. Feb 9 10:02:24.479953 sshd[4472]: Accepted publickey for core from 147.75.109.163 port 51302 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:24.483182 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:24.493542 systemd-logind[1463]: New session 16 of user core. Feb 9 10:02:24.496814 systemd[1]: Started session-16.scope. Feb 9 10:02:24.598758 sshd[4472]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:24.600315 systemd[1]: sshd@14-86.109.11.101:22-147.75.109.163:51302.service: Deactivated successfully. Feb 9 10:02:24.600774 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 10:02:24.601099 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Feb 9 10:02:24.601642 systemd-logind[1463]: Removed session 16. Feb 9 10:02:29.610146 systemd[1]: Started sshd@15-86.109.11.101:22-147.75.109.163:51308.service. Feb 9 10:02:29.641181 sshd[4498]: Accepted publickey for core from 147.75.109.163 port 51308 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:29.642042 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:29.644923 systemd-logind[1463]: New session 17 of user core. Feb 9 10:02:29.645504 systemd[1]: Started session-17.scope. Feb 9 10:02:29.734206 sshd[4498]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:29.736062 systemd[1]: sshd@15-86.109.11.101:22-147.75.109.163:51308.service: Deactivated successfully. Feb 9 10:02:29.736418 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 10:02:29.736822 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Feb 9 10:02:29.737405 systemd[1]: Started sshd@16-86.109.11.101:22-147.75.109.163:51322.service. Feb 9 10:02:29.737879 systemd-logind[1463]: Removed session 17. Feb 9 10:02:29.765615 sshd[4522]: Accepted publickey for core from 147.75.109.163 port 51322 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:29.768834 sshd[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:29.779410 systemd-logind[1463]: New session 18 of user core. Feb 9 10:02:29.781987 systemd[1]: Started session-18.scope. Feb 9 10:02:30.910570 sshd[4522]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:30.918077 systemd[1]: sshd@16-86.109.11.101:22-147.75.109.163:51322.service: Deactivated successfully. Feb 9 10:02:30.918478 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 10:02:30.918966 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Feb 9 10:02:30.919555 systemd[1]: Started sshd@17-86.109.11.101:22-147.75.109.163:51326.service. Feb 9 10:02:30.920030 systemd-logind[1463]: Removed session 18. 
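Each accepted connection above is wrapped in its own systemd unit: pam_systemd registers a logind session and runs it in a session-N.scope, which is why every logout is followed by a matching "session-N.scope: Deactivated successfully" line. While a session is open it can be inspected with standard tooling; a sketch, with the unit name taken from the log above:

    # list logind sessions and show one session scope's state
    loginctl list-sessions
    systemctl status session-18.scope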
Feb 9 10:02:30.946412 sshd[4545]: Accepted publickey for core from 147.75.109.163 port 51326 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:30.947285 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:30.950153 systemd-logind[1463]: New session 19 of user core. Feb 9 10:02:30.950881 systemd[1]: Started session-19.scope. Feb 9 10:02:31.860512 sshd[4545]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:31.868506 systemd[1]: sshd@17-86.109.11.101:22-147.75.109.163:51326.service: Deactivated successfully. Feb 9 10:02:31.870590 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 10:02:31.872693 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Feb 9 10:02:31.875632 systemd[1]: Started sshd@18-86.109.11.101:22-147.75.109.163:51334.service. Feb 9 10:02:31.877358 systemd-logind[1463]: Removed session 19. Feb 9 10:02:31.912874 sshd[4590]: Accepted publickey for core from 147.75.109.163 port 51334 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:31.913946 sshd[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:31.917312 systemd-logind[1463]: New session 20 of user core. Feb 9 10:02:31.918291 systemd[1]: Started session-20.scope. Feb 9 10:02:32.090932 sshd[4590]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:32.092656 systemd[1]: sshd@18-86.109.11.101:22-147.75.109.163:51334.service: Deactivated successfully. Feb 9 10:02:32.093004 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 10:02:32.093320 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Feb 9 10:02:32.093966 systemd[1]: Started sshd@19-86.109.11.101:22-147.75.109.163:51350.service. Feb 9 10:02:32.094314 systemd-logind[1463]: Removed session 20. Feb 9 10:02:32.121197 sshd[4649]: Accepted publickey for core from 147.75.109.163 port 51350 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:32.121833 sshd[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:32.124264 systemd-logind[1463]: New session 21 of user core. Feb 9 10:02:32.124844 systemd[1]: Started session-21.scope. Feb 9 10:02:32.269206 sshd[4649]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:32.270950 systemd[1]: sshd@19-86.109.11.101:22-147.75.109.163:51350.service: Deactivated successfully. Feb 9 10:02:32.271458 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 10:02:32.271954 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Feb 9 10:02:32.272704 systemd-logind[1463]: Removed session 21. Feb 9 10:02:37.271723 systemd[1]: Started sshd@20-86.109.11.101:22-147.75.109.163:59086.service. Feb 9 10:02:37.299488 sshd[4676]: Accepted publickey for core from 147.75.109.163 port 59086 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:37.300264 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:37.302976 systemd-logind[1463]: New session 22 of user core. Feb 9 10:02:37.303648 systemd[1]: Started session-22.scope. Feb 9 10:02:37.396631 sshd[4676]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:37.402522 systemd[1]: sshd@20-86.109.11.101:22-147.75.109.163:59086.service: Deactivated successfully. Feb 9 10:02:37.404580 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 10:02:37.406287 systemd-logind[1463]: Session 22 logged out. 
Waiting for processes to exit. Feb 9 10:02:37.408528 systemd-logind[1463]: Removed session 22. Feb 9 10:02:42.400009 systemd[1]: Started sshd@21-86.109.11.101:22-147.75.109.163:59092.service. Feb 9 10:02:42.429774 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 59092 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:42.433008 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:42.444013 systemd-logind[1463]: New session 23 of user core. Feb 9 10:02:42.446879 systemd[1]: Started session-23.scope. Feb 9 10:02:42.583991 sshd[4728]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:42.585742 systemd[1]: sshd@21-86.109.11.101:22-147.75.109.163:59092.service: Deactivated successfully. Feb 9 10:02:42.586361 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 10:02:42.586881 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Feb 9 10:02:42.587435 systemd-logind[1463]: Removed session 23. Feb 9 10:02:47.596171 systemd[1]: Started sshd@22-86.109.11.101:22-147.75.109.163:45344.service. Feb 9 10:02:47.626517 sshd[4755]: Accepted publickey for core from 147.75.109.163 port 45344 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:47.627388 sshd[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:47.630340 systemd-logind[1463]: New session 24 of user core. Feb 9 10:02:47.631270 systemd[1]: Started session-24.scope. Feb 9 10:02:47.718451 sshd[4755]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:47.719895 systemd[1]: sshd@22-86.109.11.101:22-147.75.109.163:45344.service: Deactivated successfully. Feb 9 10:02:47.720353 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 10:02:47.720800 systemd-logind[1463]: Session 24 logged out. Waiting for processes to exit. Feb 9 10:02:47.721427 systemd-logind[1463]: Removed session 24. Feb 9 10:02:52.727927 systemd[1]: Started sshd@23-86.109.11.101:22-147.75.109.163:45348.service. Feb 9 10:02:52.755213 sshd[4780]: Accepted publickey for core from 147.75.109.163 port 45348 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:52.756130 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:52.759201 systemd-logind[1463]: New session 25 of user core. Feb 9 10:02:52.759975 systemd[1]: Started session-25.scope. Feb 9 10:02:52.849076 sshd[4780]: pam_unix(sshd:session): session closed for user core Feb 9 10:02:52.851325 systemd[1]: sshd@23-86.109.11.101:22-147.75.109.163:45348.service: Deactivated successfully. Feb 9 10:02:52.851784 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 10:02:52.852211 systemd-logind[1463]: Session 25 logged out. Waiting for processes to exit. Feb 9 10:02:52.852943 systemd[1]: Started sshd@24-86.109.11.101:22-147.75.109.163:45350.service. Feb 9 10:02:52.853505 systemd-logind[1463]: Removed session 25. Feb 9 10:02:52.881957 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 45350 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI Feb 9 10:02:52.883444 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 10:02:52.888382 systemd-logind[1463]: New session 26 of user core. Feb 9 10:02:52.890416 systemd[1]: Started session-26.scope. 
Feb 9 10:02:54.195950 env[1475]: time="2024-02-09T10:02:54.195922835Z" level=info msg="StopContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" with timeout 30 (s)" Feb 9 10:02:54.196244 env[1475]: time="2024-02-09T10:02:54.196078753Z" level=info msg="Stop container \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" with signal terminated" Feb 9 10:02:54.213022 systemd[1]: cri-containerd-a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e.scope: Deactivated successfully. Feb 9 10:02:54.220569 env[1475]: time="2024-02-09T10:02:54.220505254Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 10:02:54.223708 env[1475]: time="2024-02-09T10:02:54.223691400Z" level=info msg="StopContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" with timeout 1 (s)" Feb 9 10:02:54.223805 env[1475]: time="2024-02-09T10:02:54.223793679Z" level=info msg="Stop container \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" with signal terminated" Feb 9 10:02:54.227202 systemd-networkd[1308]: lxc_health: Link DOWN Feb 9 10:02:54.227205 systemd-networkd[1308]: lxc_health: Lost carrier Feb 9 10:02:54.233139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e-rootfs.mount: Deactivated successfully. Feb 9 10:02:54.234454 env[1475]: time="2024-02-09T10:02:54.234429274Z" level=info msg="shim disconnected" id=a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e Feb 9 10:02:54.234514 env[1475]: time="2024-02-09T10:02:54.234459045Z" level=warning msg="cleaning up after shim disconnected" id=a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e namespace=k8s.io Feb 9 10:02:54.234514 env[1475]: time="2024-02-09T10:02:54.234465994Z" level=info msg="cleaning up dead shim" Feb 9 10:02:54.238539 env[1475]: time="2024-02-09T10:02:54.238523196Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4868 runtime=io.containerd.runc.v2\n" Feb 9 10:02:54.239199 env[1475]: time="2024-02-09T10:02:54.239185929Z" level=info msg="StopContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" returns successfully" Feb 9 10:02:54.239622 env[1475]: time="2024-02-09T10:02:54.239607782Z" level=info msg="StopPodSandbox for \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\"" Feb 9 10:02:54.239665 env[1475]: time="2024-02-09T10:02:54.239644148Z" level=info msg="Container to stop \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.240759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7-shm.mount: Deactivated successfully. Feb 9 10:02:54.243208 systemd[1]: cri-containerd-7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7.scope: Deactivated successfully. Feb 9 10:02:54.253211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7-rootfs.mount: Deactivated successfully. 
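The stop sequence above follows the usual CRI semantics: "StopContainer ... with timeout 30 (s)" sends the container's stop signal (SIGTERM, per "with signal terminated") and escalates to SIGKILL only if the task outlives the timeout, after which the .scope unit and rootfs mount are cleaned up. The "failed to reload cni configuration" error is a side effect of the teardown itself: removing /etc/cni/net.d/05-cilium.conf leaves containerd with no CNI config until a replacement agent writes one back. The same stop can be issued by hand; a sketch using crictl, with the container ID shortened:

    # SIGTERM now, SIGKILL after 30 s if still running (sketch)
    crictl stop --timeout 30 a7e9da802451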
Feb 9 10:02:54.254070 env[1475]: time="2024-02-09T10:02:54.254031096Z" level=info msg="shim disconnected" id=7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7 Feb 9 10:02:54.254203 env[1475]: time="2024-02-09T10:02:54.254074983Z" level=warning msg="cleaning up after shim disconnected" id=7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7 namespace=k8s.io Feb 9 10:02:54.254203 env[1475]: time="2024-02-09T10:02:54.254084447Z" level=info msg="cleaning up dead shim" Feb 9 10:02:54.270212 env[1475]: time="2024-02-09T10:02:54.270163164Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4899 runtime=io.containerd.runc.v2\n" Feb 9 10:02:54.270399 env[1475]: time="2024-02-09T10:02:54.270354393Z" level=info msg="TearDown network for sandbox \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\" successfully" Feb 9 10:02:54.270399 env[1475]: time="2024-02-09T10:02:54.270370402Z" level=info msg="StopPodSandbox for \"7a46b2c1e5324ef1e076643efdbb8d3abeeffa2e2cb1d7e37f5a9908715f41b7\" returns successfully" Feb 9 10:02:54.301657 systemd[1]: cri-containerd-057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75.scope: Deactivated successfully. Feb 9 10:02:54.301946 systemd[1]: cri-containerd-057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75.scope: Consumed 6.682s CPU time. Feb 9 10:02:54.350700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75-rootfs.mount: Deactivated successfully. Feb 9 10:02:54.351376 env[1475]: time="2024-02-09T10:02:54.350789347Z" level=info msg="shim disconnected" id=057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75 Feb 9 10:02:54.351376 env[1475]: time="2024-02-09T10:02:54.350897839Z" level=warning msg="cleaning up after shim disconnected" id=057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75 namespace=k8s.io Feb 9 10:02:54.351376 env[1475]: time="2024-02-09T10:02:54.350931981Z" level=info msg="cleaning up dead shim" Feb 9 10:02:54.380566 env[1475]: time="2024-02-09T10:02:54.380459916Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4927 runtime=io.containerd.runc.v2\n" Feb 9 10:02:54.382553 env[1475]: time="2024-02-09T10:02:54.382474976Z" level=info msg="StopContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" returns successfully" Feb 9 10:02:54.383563 env[1475]: time="2024-02-09T10:02:54.383441927Z" level=info msg="StopPodSandbox for \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\"" Feb 9 10:02:54.383799 env[1475]: time="2024-02-09T10:02:54.383590492Z" level=info msg="Container to stop \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.383799 env[1475]: time="2024-02-09T10:02:54.383637244Z" level=info msg="Container to stop \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.383799 env[1475]: time="2024-02-09T10:02:54.383673231Z" level=info msg="Container to stop \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.383799 env[1475]: time="2024-02-09T10:02:54.383705458Z" level=info msg="Container 
to stop \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.383799 env[1475]: time="2024-02-09T10:02:54.383736073Z" level=info msg="Container to stop \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 10:02:54.398297 systemd[1]: cri-containerd-62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36.scope: Deactivated successfully. Feb 9 10:02:54.433850 kubelet[2556]: I0209 10:02:54.433790 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc4r7\" (UniqueName: \"kubernetes.io/projected/a007eab5-e549-415b-b496-abdcf31db7d3-kube-api-access-rc4r7\") pod \"a007eab5-e549-415b-b496-abdcf31db7d3\" (UID: \"a007eab5-e549-415b-b496-abdcf31db7d3\") " Feb 9 10:02:54.434868 kubelet[2556]: I0209 10:02:54.433911 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a007eab5-e549-415b-b496-abdcf31db7d3-cilium-config-path\") pod \"a007eab5-e549-415b-b496-abdcf31db7d3\" (UID: \"a007eab5-e549-415b-b496-abdcf31db7d3\") " Feb 9 10:02:54.434868 kubelet[2556]: W0209 10:02:54.434458 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a007eab5-e549-415b-b496-abdcf31db7d3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:02:54.440567 kubelet[2556]: I0209 10:02:54.440465 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a007eab5-e549-415b-b496-abdcf31db7d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a007eab5-e549-415b-b496-abdcf31db7d3" (UID: "a007eab5-e549-415b-b496-abdcf31db7d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:02:54.459651 env[1475]: time="2024-02-09T10:02:54.459357059Z" level=info msg="shim disconnected" id=62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36 Feb 9 10:02:54.459651 env[1475]: time="2024-02-09T10:02:54.459500497Z" level=warning msg="cleaning up after shim disconnected" id=62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36 namespace=k8s.io Feb 9 10:02:54.459651 env[1475]: time="2024-02-09T10:02:54.459543205Z" level=info msg="cleaning up dead shim" Feb 9 10:02:54.468631 kubelet[2556]: I0209 10:02:54.468513 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a007eab5-e549-415b-b496-abdcf31db7d3-kube-api-access-rc4r7" (OuterVolumeSpecName: "kube-api-access-rc4r7") pod "a007eab5-e549-415b-b496-abdcf31db7d3" (UID: "a007eab5-e549-415b-b496-abdcf31db7d3"). InnerVolumeSpecName "kube-api-access-rc4r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:02:54.490585 env[1475]: time="2024-02-09T10:02:54.490466856Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4958 runtime=io.containerd.runc.v2\n" Feb 9 10:02:54.491211 env[1475]: time="2024-02-09T10:02:54.491102814Z" level=info msg="TearDown network for sandbox \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" successfully" Feb 9 10:02:54.491211 env[1475]: time="2024-02-09T10:02:54.491162692Z" level=info msg="StopPodSandbox for \"62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36\" returns successfully" Feb 9 10:02:54.534492 kubelet[2556]: I0209 10:02:54.534431 2556 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rc4r7\" (UniqueName: \"kubernetes.io/projected/a007eab5-e549-415b-b496-abdcf31db7d3-kube-api-access-rc4r7\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.534492 kubelet[2556]: I0209 10:02:54.534502 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a007eab5-e549-415b-b496-abdcf31db7d3-cilium-config-path\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.534648 systemd[1]: Removed slice kubepods-besteffort-poda007eab5_e549_415b_b496_abdcf31db7d3.slice. Feb 9 10:02:54.635818 kubelet[2556]: I0209 10:02:54.635714 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-etc-cni-netd\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.635818 kubelet[2556]: I0209 10:02:54.635809 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-cgroup\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636244 kubelet[2556]: I0209 10:02:54.635880 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hubble-tls\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636244 kubelet[2556]: I0209 10:02:54.635861 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.636244 kubelet[2556]: I0209 10:02:54.635939 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cni-path\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636244 kubelet[2556]: I0209 10:02:54.635925 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.636244 kubelet[2556]: I0209 10:02:54.635993 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-xtables-lock\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636841 kubelet[2556]: I0209 10:02:54.636022 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cni-path" (OuterVolumeSpecName: "cni-path") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.636841 kubelet[2556]: I0209 10:02:54.636054 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9c7r\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-kube-api-access-l9c7r\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636841 kubelet[2556]: I0209 10:02:54.636100 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.636841 kubelet[2556]: I0209 10:02:54.636119 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-config-path\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.636841 kubelet[2556]: I0209 10:02:54.636286 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-kernel\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.637397 kubelet[2556]: I0209 10:02:54.636411 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-run\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.637397 kubelet[2556]: I0209 10:02:54.636383 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.637397 kubelet[2556]: I0209 10:02:54.636513 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hostproc\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.637397 kubelet[2556]: W0209 10:02:54.636521 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 10:02:54.637397 kubelet[2556]: I0209 10:02:54.636626 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-bpf-maps\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.637397 kubelet[2556]: I0209 10:02:54.636569 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.637991 kubelet[2556]: I0209 10:02:54.636637 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hostproc" (OuterVolumeSpecName: "hostproc") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.637991 kubelet[2556]: I0209 10:02:54.636723 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-lib-modules\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.637991 kubelet[2556]: I0209 10:02:54.636704 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.637991 kubelet[2556]: I0209 10:02:54.636801 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.637991 kubelet[2556]: I0209 10:02:54.636826 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-net\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.636899 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.636938 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-clustermesh-secrets\") pod \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\" (UID: \"1d5b0bd3-f29a-44fd-a05b-dfd8c6871991\") " Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.637031 2556 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-bpf-maps\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.637076 2556 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-lib-modules\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.637142 2556 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-net\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.637186 2556 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-etc-cni-netd\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.638527 kubelet[2556]: I0209 10:02:54.637220 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-cgroup\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.639209 kubelet[2556]: I0209 10:02:54.637249 2556 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cni-path\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.639209 kubelet[2556]: I0209 10:02:54.637311 2556 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-xtables-lock\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.639209 kubelet[2556]: I0209 10:02:54.637346 2556 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.639209 kubelet[2556]: I0209 10:02:54.637379 2556 
reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-run\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.639209 kubelet[2556]: I0209 10:02:54.637409 2556 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hostproc\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.641361 kubelet[2556]: I0209 10:02:54.641298 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 10:02:54.642614 kubelet[2556]: I0209 10:02:54.642508 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:02:54.642849 kubelet[2556]: I0209 10:02:54.642665 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-kube-api-access-l9c7r" (OuterVolumeSpecName: "kube-api-access-l9c7r") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "kube-api-access-l9c7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 10:02:54.643136 kubelet[2556]: I0209 10:02:54.643034 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" (UID: "1d5b0bd3-f29a-44fd-a05b-dfd8c6871991"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 10:02:54.738050 kubelet[2556]: I0209 10:02:54.737934 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-cilium-config-path\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.738050 kubelet[2556]: I0209 10:02:54.738012 2556 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-clustermesh-secrets\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.738050 kubelet[2556]: I0209 10:02:54.738050 2556 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-hubble-tls\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.738627 kubelet[2556]: I0209 10:02:54.738084 2556 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-l9c7r\" (UniqueName: \"kubernetes.io/projected/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991-kube-api-access-l9c7r\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\"" Feb 9 10:02:54.757972 kubelet[2556]: I0209 10:02:54.757871 2556 scope.go:115] "RemoveContainer" containerID="a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e" Feb 9 10:02:54.760723 env[1475]: time="2024-02-09T10:02:54.760631366Z" level=info msg="RemoveContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\"" Feb 9 10:02:54.766759 env[1475]: time="2024-02-09T10:02:54.766678709Z" level=info msg="RemoveContainer for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" returns successfully" Feb 9 10:02:54.767241 kubelet[2556]: I0209 10:02:54.767196 2556 scope.go:115] "RemoveContainer" containerID="a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e" Feb 9 10:02:54.767847 env[1475]: time="2024-02-09T10:02:54.767676915Z" level=error msg="ContainerStatus for \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\": not found" Feb 9 10:02:54.768178 kubelet[2556]: E0209 10:02:54.768139 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\": not found" containerID="a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e" Feb 9 10:02:54.768366 kubelet[2556]: I0209 10:02:54.768236 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e} err="failed to get container status \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7e9da8024511edf8dc1e32744488d50ff09639c601c9e0f55a4ef6d342c948e\": not found" Feb 9 10:02:54.768366 kubelet[2556]: I0209 10:02:54.768300 2556 scope.go:115] "RemoveContainer" containerID="057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75" Feb 9 10:02:54.771056 env[1475]: time="2024-02-09T10:02:54.770967587Z" level=info msg="RemoveContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\"" Feb 9 10:02:54.775404 env[1475]: 
time="2024-02-09T10:02:54.775301033Z" level=info msg="RemoveContainer for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" returns successfully" Feb 9 10:02:54.775830 kubelet[2556]: I0209 10:02:54.775745 2556 scope.go:115] "RemoveContainer" containerID="f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c" Feb 9 10:02:54.775836 systemd[1]: Removed slice kubepods-burstable-pod1d5b0bd3_f29a_44fd_a05b_dfd8c6871991.slice. Feb 9 10:02:54.776107 systemd[1]: kubepods-burstable-pod1d5b0bd3_f29a_44fd_a05b_dfd8c6871991.slice: Consumed 6.776s CPU time. Feb 9 10:02:54.778566 env[1475]: time="2024-02-09T10:02:54.778462546Z" level=info msg="RemoveContainer for \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\"" Feb 9 10:02:54.782717 env[1475]: time="2024-02-09T10:02:54.782540483Z" level=info msg="RemoveContainer for \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\" returns successfully" Feb 9 10:02:54.782994 kubelet[2556]: I0209 10:02:54.782911 2556 scope.go:115] "RemoveContainer" containerID="db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db" Feb 9 10:02:54.785489 env[1475]: time="2024-02-09T10:02:54.785402039Z" level=info msg="RemoveContainer for \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\"" Feb 9 10:02:54.789458 env[1475]: time="2024-02-09T10:02:54.789329752Z" level=info msg="RemoveContainer for \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\" returns successfully" Feb 9 10:02:54.789799 kubelet[2556]: I0209 10:02:54.789744 2556 scope.go:115] "RemoveContainer" containerID="1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee" Feb 9 10:02:54.792237 env[1475]: time="2024-02-09T10:02:54.792172134Z" level=info msg="RemoveContainer for \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\"" Feb 9 10:02:54.795972 env[1475]: time="2024-02-09T10:02:54.795898557Z" level=info msg="RemoveContainer for \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\" returns successfully" Feb 9 10:02:54.796346 kubelet[2556]: I0209 10:02:54.796297 2556 scope.go:115] "RemoveContainer" containerID="8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4" Feb 9 10:02:54.799066 env[1475]: time="2024-02-09T10:02:54.798934158Z" level=info msg="RemoveContainer for \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\"" Feb 9 10:02:54.803371 env[1475]: time="2024-02-09T10:02:54.803247936Z" level=info msg="RemoveContainer for \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\" returns successfully" Feb 9 10:02:54.803731 kubelet[2556]: I0209 10:02:54.803646 2556 scope.go:115] "RemoveContainer" containerID="057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75" Feb 9 10:02:54.804341 env[1475]: time="2024-02-09T10:02:54.804135007Z" level=error msg="ContainerStatus for \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\": not found" Feb 9 10:02:54.804648 kubelet[2556]: E0209 10:02:54.804587 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\": not found" containerID="057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75" Feb 9 10:02:54.804810 kubelet[2556]: I0209 
10:02:54.804665 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75} err="failed to get container status \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\": rpc error: code = NotFound desc = an error occurred when try to find container \"057b2b7e36818fc7256e7fc1e590e0139fe3dcd1bf6085930c163a012623ce75\": not found" Feb 9 10:02:54.804810 kubelet[2556]: I0209 10:02:54.804697 2556 scope.go:115] "RemoveContainer" containerID="f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c" Feb 9 10:02:54.805345 env[1475]: time="2024-02-09T10:02:54.805157186Z" level=error msg="ContainerStatus for \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\": not found" Feb 9 10:02:54.805622 kubelet[2556]: E0209 10:02:54.805568 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\": not found" containerID="f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c" Feb 9 10:02:54.805828 kubelet[2556]: I0209 10:02:54.805652 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c} err="failed to get container status \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f50b8f7b7fc9defaefccf096ba47e22c1888c82eed670418a870e9667d46914c\": not found" Feb 9 10:02:54.805828 kubelet[2556]: I0209 10:02:54.805697 2556 scope.go:115] "RemoveContainer" containerID="db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db" Feb 9 10:02:54.806332 env[1475]: time="2024-02-09T10:02:54.806159704Z" level=error msg="ContainerStatus for \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\": not found" Feb 9 10:02:54.806607 kubelet[2556]: E0209 10:02:54.806537 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\": not found" containerID="db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db" Feb 9 10:02:54.806607 kubelet[2556]: I0209 10:02:54.806613 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db} err="failed to get container status \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\": rpc error: code = NotFound desc = an error occurred when try to find container \"db76ad0debe57124d998ec22925e1ca6f387df7e884b7ed24b637544d5cd94db\": not found" Feb 9 10:02:54.806937 kubelet[2556]: I0209 10:02:54.806645 2556 scope.go:115] "RemoveContainer" containerID="1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee" Feb 9 10:02:54.807129 env[1475]: time="2024-02-09T10:02:54.806996411Z" level=error msg="ContainerStatus for 
\"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\": not found" Feb 9 10:02:54.807404 kubelet[2556]: E0209 10:02:54.807375 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\": not found" containerID="1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee" Feb 9 10:02:54.807552 kubelet[2556]: I0209 10:02:54.807451 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee} err="failed to get container status \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\": rpc error: code = NotFound desc = an error occurred when try to find container \"1accff500c7d97bd73609da1d88bb7db13f473a3f2d8702d88f450838b7e0cee\": not found" Feb 9 10:02:54.807552 kubelet[2556]: I0209 10:02:54.807481 2556 scope.go:115] "RemoveContainer" containerID="8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4" Feb 9 10:02:54.808053 env[1475]: time="2024-02-09T10:02:54.807924675Z" level=error msg="ContainerStatus for \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\": not found" Feb 9 10:02:54.808371 kubelet[2556]: E0209 10:02:54.808259 2556 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\": not found" containerID="8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4" Feb 9 10:02:54.808371 kubelet[2556]: I0209 10:02:54.808377 2556 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4} err="failed to get container status \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ebfd39b6d8e824e76bfe8da267c12780f7ee8b8c8db1c1298cc175a42ba11a4\": not found" Feb 9 10:02:55.216576 systemd[1]: var-lib-kubelet-pods-a007eab5\x2de549\x2d415b\x2db496\x2dabdcf31db7d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drc4r7.mount: Deactivated successfully. Feb 9 10:02:55.216744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36-rootfs.mount: Deactivated successfully. Feb 9 10:02:55.216777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62761412b05506fc0c47c1ea0d06e7cb018bd4d3f99b094372dd6b55ef18eb36-shm.mount: Deactivated successfully. Feb 9 10:02:55.216812 systemd[1]: var-lib-kubelet-pods-1d5b0bd3\x2df29a\x2d44fd\x2da05b\x2ddfd8c6871991-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9c7r.mount: Deactivated successfully. Feb 9 10:02:55.216843 systemd[1]: var-lib-kubelet-pods-1d5b0bd3\x2df29a\x2d44fd\x2da05b\x2ddfd8c6871991-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 10:02:55.216872 systemd[1]: var-lib-kubelet-pods-1d5b0bd3\x2df29a\x2d44fd\x2da05b\x2ddfd8c6871991-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:02:56.177841 sshd[4803]: pam_unix(sshd:session): session closed for user core
Feb 9 10:02:56.180874 systemd[1]: sshd@24-86.109.11.101:22-147.75.109.163:45350.service: Deactivated successfully.
Feb 9 10:02:56.181372 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 10:02:56.181851 systemd-logind[1463]: Session 26 logged out. Waiting for processes to exit.
Feb 9 10:02:56.182542 systemd[1]: Started sshd@25-86.109.11.101:22-147.75.109.163:34874.service.
Feb 9 10:02:56.183011 systemd-logind[1463]: Removed session 26.
Feb 9 10:02:56.210414 sshd[4975]: Accepted publickey for core from 147.75.109.163 port 34874 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI
Feb 9 10:02:56.211358 sshd[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:02:56.214398 systemd-logind[1463]: New session 27 of user core.
Feb 9 10:02:56.215427 systemd[1]: Started session-27.scope.
Feb 9 10:02:56.519923 kubelet[2556]: I0209 10:02:56.519904 2556 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1d5b0bd3-f29a-44fd-a05b-dfd8c6871991 path="/var/lib/kubelet/pods/1d5b0bd3-f29a-44fd-a05b-dfd8c6871991/volumes"
Feb 9 10:02:56.520364 kubelet[2556]: I0209 10:02:56.520356 2556 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a007eab5-e549-415b-b496-abdcf31db7d3 path="/var/lib/kubelet/pods/a007eab5-e549-415b-b496-abdcf31db7d3/volumes"
Feb 9 10:02:56.771761 sshd[4975]: pam_unix(sshd:session): session closed for user core
Feb 9 10:02:56.773883 systemd[1]: sshd@25-86.109.11.101:22-147.75.109.163:34874.service: Deactivated successfully.
Feb 9 10:02:56.774305 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 10:02:56.774749 systemd-logind[1463]: Session 27 logged out. Waiting for processes to exit.
Feb 9 10:02:56.775534 systemd[1]: Started sshd@26-86.109.11.101:22-147.75.109.163:34878.service.
Feb 9 10:02:56.775987 systemd-logind[1463]: Removed session 27.
Feb 9 10:02:56.776725 kubelet[2556]: I0209 10:02:56.776709 2556 topology_manager.go:210] "Topology Admit Handler"
Feb 9 10:02:56.776777 kubelet[2556]: E0209 10:02:56.776750 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="mount-cgroup"
Feb 9 10:02:56.776777 kubelet[2556]: E0209 10:02:56.776759 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a007eab5-e549-415b-b496-abdcf31db7d3" containerName="cilium-operator"
Feb 9 10:02:56.776777 kubelet[2556]: E0209 10:02:56.776765 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="clean-cilium-state"
Feb 9 10:02:56.776777 kubelet[2556]: E0209 10:02:56.776771 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="cilium-agent"
Feb 9 10:02:56.776777 kubelet[2556]: E0209 10:02:56.776777 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="apply-sysctl-overwrites"
Feb 9 10:02:56.776868 kubelet[2556]: E0209 10:02:56.776782 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="mount-bpf-fs"
Feb 9 10:02:56.776868 kubelet[2556]: I0209 10:02:56.776803 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="a007eab5-e549-415b-b496-abdcf31db7d3" containerName="cilium-operator"
Feb 9 10:02:56.776868 kubelet[2556]: I0209 10:02:56.776809 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="1d5b0bd3-f29a-44fd-a05b-dfd8c6871991" containerName="cilium-agent"
Feb 9 10:02:56.780632 systemd[1]: Created slice kubepods-burstable-podb82f5ebd_f395_4bf3_a29d_a22ac1cf26c6.slice.
Feb 9 10:02:56.805788 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 34878 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI
Feb 9 10:02:56.809165 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:02:56.819464 systemd-logind[1463]: New session 28 of user core.
Feb 9 10:02:56.822531 systemd[1]: Started session-28.scope.
Feb 9 10:02:56.853683 kubelet[2556]: I0209 10:02:56.853578 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-net\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.853683 kubelet[2556]: I0209 10:02:56.853683 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-config-path\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854029 kubelet[2556]: I0209 10:02:56.853852 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-ipsec-secrets\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854155 kubelet[2556]: I0209 10:02:56.854020 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-bpf-maps\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854155 kubelet[2556]: I0209 10:02:56.854104 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hostproc\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854431 kubelet[2556]: I0209 10:02:56.854165 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cni-path\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854431 kubelet[2556]: I0209 10:02:56.854224 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-lib-modules\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854431 kubelet[2556]: I0209 10:02:56.854310 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hubble-tls\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854745 kubelet[2556]: I0209 10:02:56.854465 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-cgroup\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854745 kubelet[2556]: I0209 10:02:56.854618 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-clustermesh-secrets\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.854745 kubelet[2556]: I0209 10:02:56.854710 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-run\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.855036 kubelet[2556]: I0209 10:02:56.854836 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6t89\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-kube-api-access-t6t89\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.855036 kubelet[2556]: I0209 10:02:56.854932 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-etc-cni-netd\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.855380 kubelet[2556]: I0209 10:02:56.855123 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-xtables-lock\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.855380 kubelet[2556]: I0209 10:02:56.855249 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-kernel\") pod \"cilium-j62fs\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") " pod="kube-system/cilium-j62fs"
Feb 9 10:02:56.977994 sshd[5001]: pam_unix(sshd:session): session closed for user core
Feb 9 10:02:56.980139 systemd[1]: sshd@26-86.109.11.101:22-147.75.109.163:34878.service: Deactivated successfully.
Feb 9 10:02:56.980519 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 10:02:56.981001 systemd-logind[1463]: Session 28 logged out. Waiting for processes to exit.
Feb 9 10:02:56.981682 systemd[1]: Started sshd@27-86.109.11.101:22-147.75.109.163:34884.service.
Feb 9 10:02:56.982227 systemd-logind[1463]: Removed session 28.
Feb 9 10:02:57.009149 sshd[5030]: Accepted publickey for core from 147.75.109.163 port 34884 ssh2: RSA SHA256:ZOC355QqrH2+lGBbdK08UfA2mwkOMdsag732KUNE1EI
Feb 9 10:02:57.012557 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 10:02:57.024067 systemd-logind[1463]: New session 29 of user core.
Feb 9 10:02:57.027535 systemd[1]: Started session-29.scope.
Feb 9 10:02:57.083649 env[1475]: time="2024-02-09T10:02:57.083526579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j62fs,Uid:b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6,Namespace:kube-system,Attempt:0,}"
Feb 9 10:02:57.106163 env[1475]: time="2024-02-09T10:02:57.105944308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:02:57.106163 env[1475]: time="2024-02-09T10:02:57.106037100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:02:57.106163 env[1475]: time="2024-02-09T10:02:57.106073982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:02:57.106687 env[1475]: time="2024-02-09T10:02:57.106443343Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd pid=5044 runtime=io.containerd.runc.v2
Feb 9 10:02:57.136047 systemd[1]: Started cri-containerd-f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd.scope.
Feb 9 10:02:57.165550 env[1475]: time="2024-02-09T10:02:57.165516875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j62fs,Uid:b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\""
Feb 9 10:02:57.166877 env[1475]: time="2024-02-09T10:02:57.166856201Z" level=info msg="CreateContainer within sandbox \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:02:57.171892 env[1475]: time="2024-02-09T10:02:57.171841893Z" level=info msg="CreateContainer within sandbox \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\""
Feb 9 10:02:57.172086 env[1475]: time="2024-02-09T10:02:57.172073244Z" level=info msg="StartContainer for \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\""
Feb 9 10:02:57.180816 systemd[1]: Started cri-containerd-35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827.scope.
Feb 9 10:02:57.186395 systemd[1]: cri-containerd-35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827.scope: Deactivated successfully.
Feb 9 10:02:57.186554 systemd[1]: Stopped cri-containerd-35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827.scope.
Feb 9 10:02:57.220474 env[1475]: time="2024-02-09T10:02:57.220325253Z" level=info msg="shim disconnected" id=35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827
Feb 9 10:02:57.220474 env[1475]: time="2024-02-09T10:02:57.220449303Z" level=warning msg="cleaning up after shim disconnected" id=35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827 namespace=k8s.io
Feb 9 10:02:57.220474 env[1475]: time="2024-02-09T10:02:57.220481321Z" level=info msg="cleaning up dead shim"
Feb 9 10:02:57.249907 env[1475]: time="2024-02-09T10:02:57.249798197Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5117 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T10:02:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 9 10:02:57.250653 env[1475]: time="2024-02-09T10:02:57.250417809Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed"
Feb 9 10:02:57.251015 env[1475]: time="2024-02-09T10:02:57.250905136Z" level=error msg="Failed to pipe stdout of container \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\"" error="reading from a closed fifo"
Feb 9 10:02:57.251191 env[1475]: time="2024-02-09T10:02:57.251001693Z" level=error msg="Failed to pipe stderr of container \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\"" error="reading from a closed fifo"
Feb 9 10:02:57.252375 env[1475]: time="2024-02-09T10:02:57.252221997Z" level=error msg="StartContainer for \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 9 10:02:57.252849 kubelet[2556]: E0209 10:02:57.252792 2556 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827"
Feb 9 10:02:57.253206 kubelet[2556]: E0209 10:02:57.253167 2556 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 9 10:02:57.253206 kubelet[2556]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 9 10:02:57.253206 kubelet[2556]: rm /hostbin/cilium-mount
Feb 9 10:02:57.253206 kubelet[2556]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-t6t89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-j62fs_kube-system(b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 9 10:02:57.254012 kubelet[2556]: E0209 10:02:57.253336 2556 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-j62fs" podUID=b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6
Feb 9 10:02:57.653171 kubelet[2556]: E0209 10:02:57.653100 2556 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:02:57.780388 env[1475]: time="2024-02-09T10:02:57.780247960Z" level=info msg="StopPodSandbox for \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\""
Feb 9 10:02:57.780720 env[1475]: time="2024-02-09T10:02:57.780431317Z" level=info msg="Container to stop \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 10:02:57.802496 systemd[1]: cri-containerd-f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd.scope: Deactivated successfully.
Feb 9 10:02:57.840836 env[1475]: time="2024-02-09T10:02:57.840780874Z" level=info msg="shim disconnected" id=f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd
Feb 9 10:02:57.841046 env[1475]: time="2024-02-09T10:02:57.840838688Z" level=warning msg="cleaning up after shim disconnected" id=f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd namespace=k8s.io
Feb 9 10:02:57.841046 env[1475]: time="2024-02-09T10:02:57.840851381Z" level=info msg="cleaning up dead shim"
Feb 9 10:02:57.848757 env[1475]: time="2024-02-09T10:02:57.848683343Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5148 runtime=io.containerd.runc.v2\n"
Feb 9 10:02:57.849110 env[1475]: time="2024-02-09T10:02:57.849044178Z" level=info msg="TearDown network for sandbox \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\" successfully"
Feb 9 10:02:57.849110 env[1475]: time="2024-02-09T10:02:57.849076025Z" level=info msg="StopPodSandbox for \"f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd\" returns successfully"
Feb 9 10:02:57.864107 kubelet[2556]: I0209 10:02:57.864046 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6t89\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-kube-api-access-t6t89\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864141 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cni-path\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864198 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-etc-cni-netd\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864253 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hostproc\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864323 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-cgroup\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864307 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cni-path" (OuterVolumeSpecName: "cni-path") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.864438 kubelet[2556]: I0209 10:02:57.864380 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-net\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.865232 kubelet[2556]: I0209 10:02:57.864358 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.865232 kubelet[2556]: I0209 10:02:57.864416 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.865232 kubelet[2556]: I0209 10:02:57.864411 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hostproc" (OuterVolumeSpecName: "hostproc") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.865232 kubelet[2556]: I0209 10:02:57.864444 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-config-path\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.865232 kubelet[2556]: I0209 10:02:57.864491 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.865913 kubelet[2556]: I0209 10:02:57.864569 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-lib-modules\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.865913 kubelet[2556]: I0209 10:02:57.864607 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.865913 kubelet[2556]: I0209 10:02:57.864714 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-clustermesh-secrets\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.865913 kubelet[2556]: I0209 10:02:57.864783 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-kernel\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.865913 kubelet[2556]: W0209 10:02:57.864760 2556 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 10:02:57.865913 kubelet[2556]: I0209 10:02:57.864850 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-ipsec-secrets\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.864907 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-bpf-maps\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.864945 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.864969 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hubble-tls\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.865047 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.865122 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-run\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.866687 kubelet[2556]: I0209 10:02:57.865219 2556 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-xtables-lock\") pod \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\" (UID: \"b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6\") "
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865310 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865219 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865366 2556 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-etc-cni-netd\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865423 2556 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cni-path\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865459 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-cgroup\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865489 2556 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hostproc\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.867462 kubelet[2556]: I0209 10:02:57.865518 2556 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-lib-modules\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.868420 kubelet[2556]: I0209 10:02:57.865551 2556 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.868420 kubelet[2556]: I0209 10:02:57.865583 2556 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-host-proc-sys-net\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.868420 kubelet[2556]: I0209 10:02:57.865613 2556 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-bpf-maps\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.869974 kubelet[2556]: I0209 10:02:57.869888 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 10:02:57.870766 kubelet[2556]: I0209 10:02:57.870713 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-kube-api-access-t6t89" (OuterVolumeSpecName: "kube-api-access-t6t89") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "kube-api-access-t6t89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:02:57.870766 kubelet[2556]: I0209 10:02:57.870726 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:02:57.870828 kubelet[2556]: I0209 10:02:57.870796 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 10:02:57.870908 kubelet[2556]: I0209 10:02:57.870861 2556 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" (UID: "b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 10:02:57.962652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd-rootfs.mount: Deactivated successfully.
Feb 9 10:02:57.962918 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f13dde79b4bd315bf6144e3c2d7eaf15702b6c3fb1160a99fe36ba2c4a074ebd-shm.mount: Deactivated successfully.
Feb 9 10:02:57.963104 systemd[1]: var-lib-kubelet-pods-b82f5ebd\x2df395\x2d4bf3\x2da29d\x2da22ac1cf26c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6t89.mount: Deactivated successfully.
Feb 9 10:02:57.963301 systemd[1]: var-lib-kubelet-pods-b82f5ebd\x2df395\x2d4bf3\x2da29d\x2da22ac1cf26c6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 10:02:57.963483 systemd[1]: var-lib-kubelet-pods-b82f5ebd\x2df395\x2d4bf3\x2da29d\x2da22ac1cf26c6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:02:57.963639 systemd[1]: var-lib-kubelet-pods-b82f5ebd\x2df395\x2d4bf3\x2da29d\x2da22ac1cf26c6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966519 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-config-path\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966536 2556 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-clustermesh-secrets\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966544 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966550 2556 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-hubble-tls\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966557 2556 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-cilium-run\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966560 kubelet[2556]: I0209 10:02:57.966563 2556 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-xtables-lock\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:57.966706 kubelet[2556]: I0209 10:02:57.966569 2556 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-t6t89\" (UniqueName: \"kubernetes.io/projected/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6-kube-api-access-t6t89\") on node \"ci-3510.3.2-a-98b619e81b\" DevicePath \"\""
Feb 9 10:02:58.518558 kubelet[2556]: E0209 10:02:58.518480 2556 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-c8zxm" podUID=e87d14d0-d2c2-4c1a-a26c-76cab561e426
Feb 9 10:02:58.525970 systemd[1]: Removed slice kubepods-burstable-podb82f5ebd_f395_4bf3_a29d_a22ac1cf26c6.slice.
Feb 9 10:02:58.786182 kubelet[2556]: I0209 10:02:58.785980 2556 scope.go:115] "RemoveContainer" containerID="35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827"
Feb 9 10:02:58.788818 env[1475]: time="2024-02-09T10:02:58.788704827Z" level=info msg="RemoveContainer for \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\""
Feb 9 10:02:58.793241 env[1475]: time="2024-02-09T10:02:58.793154620Z" level=info msg="RemoveContainer for \"35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827\" returns successfully"
Feb 9 10:02:58.829868 kubelet[2556]: I0209 10:02:58.829800 2556 topology_manager.go:210] "Topology Admit Handler"
Feb 9 10:02:58.830229 kubelet[2556]: E0209 10:02:58.829919 2556 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" containerName="mount-cgroup"
Feb 9 10:02:58.830229 kubelet[2556]: I0209 10:02:58.830001 2556 memory_manager.go:346] "RemoveStaleState removing state" podUID="b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6" containerName="mount-cgroup"
Feb 9 10:02:58.844579 systemd[1]: Created slice kubepods-burstable-pod9102be1f_358b_45e8_bfa5_237edd6ebd19.slice.
Feb 9 10:02:58.874480 kubelet[2556]: I0209 10:02:58.874438 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9102be1f-358b-45e8-bfa5-237edd6ebd19-cilium-config-path\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.874701 kubelet[2556]: I0209 10:02:58.874559 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-cilium-cgroup\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.874701 kubelet[2556]: I0209 10:02:58.874626 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-lib-modules\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.874870 kubelet[2556]: I0209 10:02:58.874707 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9102be1f-358b-45e8-bfa5-237edd6ebd19-hubble-tls\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.874870 kubelet[2556]: I0209 10:02:58.874773 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-xtables-lock\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.874870 kubelet[2556]: I0209 10:02:58.874812 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-hostproc\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875065 kubelet[2556]: I0209 10:02:58.874952 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-bpf-maps\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875065 kubelet[2556]: I0209 10:02:58.875006 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-cilium-run\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875065 kubelet[2556]: I0209 10:02:58.875058 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-host-proc-sys-kernel\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875317 kubelet[2556]: I0209 10:02:58.875117 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwv25\" (UniqueName: \"kubernetes.io/projected/9102be1f-358b-45e8-bfa5-237edd6ebd19-kube-api-access-fwv25\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875317 kubelet[2556]: I0209 10:02:58.875171 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9102be1f-358b-45e8-bfa5-237edd6ebd19-clustermesh-secrets\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875317 kubelet[2556]: I0209 10:02:58.875253 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9102be1f-358b-45e8-bfa5-237edd6ebd19-cilium-ipsec-secrets\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875526 kubelet[2556]: I0209 10:02:58.875329 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-host-proc-sys-net\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875526 kubelet[2556]: I0209 10:02:58.875417 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-cni-path\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:58.875526 kubelet[2556]: I0209 10:02:58.875502 2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9102be1f-358b-45e8-bfa5-237edd6ebd19-etc-cni-netd\") pod \"cilium-jrrft\" (UID: \"9102be1f-358b-45e8-bfa5-237edd6ebd19\") " pod="kube-system/cilium-jrrft"
Feb 9 10:02:59.149374 env[1475]: time="2024-02-09T10:02:59.149097320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrrft,Uid:9102be1f-358b-45e8-bfa5-237edd6ebd19,Namespace:kube-system,Attempt:0,}"
Feb 9 10:02:59.169855 env[1475]: time="2024-02-09T10:02:59.169632263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 10:02:59.169855 env[1475]: time="2024-02-09T10:02:59.169728809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 10:02:59.169855 env[1475]: time="2024-02-09T10:02:59.169767271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 10:02:59.170369 env[1475]: time="2024-02-09T10:02:59.170184523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b pid=5175 runtime=io.containerd.runc.v2
Feb 9 10:02:59.213657 systemd[1]: Started cri-containerd-f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b.scope.
Feb 9 10:02:59.250153 env[1475]: time="2024-02-09T10:02:59.250095226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jrrft,Uid:9102be1f-358b-45e8-bfa5-237edd6ebd19,Namespace:kube-system,Attempt:0,} returns sandbox id \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\""
Feb 9 10:02:59.251883 env[1475]: time="2024-02-09T10:02:59.251838596Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 10:02:59.257722 env[1475]: time="2024-02-09T10:02:59.257670326Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84\""
Feb 9 10:02:59.258009 env[1475]: time="2024-02-09T10:02:59.257984554Z" level=info msg="StartContainer for \"939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84\""
Feb 9 10:02:59.282364 systemd[1]: Started cri-containerd-939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84.scope.
Feb 9 10:02:59.316968 env[1475]: time="2024-02-09T10:02:59.316865445Z" level=info msg="StartContainer for \"939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84\" returns successfully"
Feb 9 10:02:59.334428 systemd[1]: cri-containerd-939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84.scope: Deactivated successfully.
Feb 9 10:02:59.394198 env[1475]: time="2024-02-09T10:02:59.394075036Z" level=info msg="shim disconnected" id=939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84
Feb 9 10:02:59.394198 env[1475]: time="2024-02-09T10:02:59.394180923Z" level=warning msg="cleaning up after shim disconnected" id=939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84 namespace=k8s.io
Feb 9 10:02:59.394740 env[1475]: time="2024-02-09T10:02:59.394211750Z" level=info msg="cleaning up dead shim"
Feb 9 10:02:59.423845 env[1475]: time="2024-02-09T10:02:59.423625304Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5257 runtime=io.containerd.runc.v2\n"
Feb 9 10:02:59.797106 env[1475]: time="2024-02-09T10:02:59.796967348Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 10:02:59.808784 env[1475]: time="2024-02-09T10:02:59.808732382Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226\""
Feb 9 10:02:59.809172 env[1475]: time="2024-02-09T10:02:59.809158607Z" level=info msg="StartContainer for \"8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226\""
Feb 9 10:02:59.817014 systemd[1]: Started cri-containerd-8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226.scope.
Feb 9 10:02:59.829752 env[1475]: time="2024-02-09T10:02:59.829704689Z" level=info msg="StartContainer for \"8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226\" returns successfully"
Feb 9 10:02:59.832894 systemd[1]: cri-containerd-8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226.scope: Deactivated successfully.
Feb 9 10:02:59.864792 env[1475]: time="2024-02-09T10:02:59.864680922Z" level=info msg="shim disconnected" id=8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226
Feb 9 10:02:59.865179 env[1475]: time="2024-02-09T10:02:59.864795098Z" level=warning msg="cleaning up after shim disconnected" id=8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226 namespace=k8s.io
Feb 9 10:02:59.865179 env[1475]: time="2024-02-09T10:02:59.864826955Z" level=info msg="cleaning up dead shim"
Feb 9 10:02:59.895712 env[1475]: time="2024-02-09T10:02:59.895624211Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:02:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5319 runtime=io.containerd.runc.v2\n"
Feb 9 10:03:00.327532 kubelet[2556]: W0209 10:03:00.327458 2556 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb82f5ebd_f395_4bf3_a29d_a22ac1cf26c6.slice/cri-containerd-35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827.scope WatchSource:0}: container "35e5324bd2424e1fd0f1be49be144c5325dcb0c0d3cc526377f762c15ff5c827" in namespace "k8s.io": not found
Feb 9 10:03:00.518839 kubelet[2556]: E0209 10:03:00.518780 2556 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-c8zxm" podUID=e87d14d0-d2c2-4c1a-a26c-76cab561e426
Feb 9 10:03:00.522154 kubelet[2556]: I0209 10:03:00.522112 2556 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6 path="/var/lib/kubelet/pods/b82f5ebd-f395-4bf3-a29d-a22ac1cf26c6/volumes"
Feb 9 10:03:00.797504 env[1475]: time="2024-02-09T10:03:00.797446133Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 10:03:00.804566 env[1475]: time="2024-02-09T10:03:00.804523242Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b\""
Feb 9 10:03:00.804954 env[1475]: time="2024-02-09T10:03:00.804939619Z" level=info msg="StartContainer for \"7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b\""
Feb 9 10:03:00.805273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702589601.mount: Deactivated successfully.
Feb 9 10:03:00.814959 systemd[1]: Started cri-containerd-7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b.scope.
Feb 9 10:03:00.842334 env[1475]: time="2024-02-09T10:03:00.842301993Z" level=info msg="StartContainer for \"7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b\" returns successfully"
Feb 9 10:03:00.843677 systemd[1]: cri-containerd-7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b.scope: Deactivated successfully.
Feb 9 10:03:00.898315 env[1475]: time="2024-02-09T10:03:00.898140500Z" level=info msg="shim disconnected" id=7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b
Feb 9 10:03:00.898315 env[1475]: time="2024-02-09T10:03:00.898252789Z" level=warning msg="cleaning up after shim disconnected" id=7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b namespace=k8s.io
Feb 9 10:03:00.898819 env[1475]: time="2024-02-09T10:03:00.898326079Z" level=info msg="cleaning up dead shim"
Feb 9 10:03:00.916632 env[1475]: time="2024-02-09T10:03:00.916518788Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5375 runtime=io.containerd.runc.v2\n"
Feb 9 10:03:00.987270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b-rootfs.mount: Deactivated successfully.
Feb 9 10:03:01.400620 kubelet[2556]: I0209 10:03:01.400562 2556 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-98b619e81b" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 10:03:01.400407074 +0000 UTC m=+438.945922236 LastTransitionTime:2024-02-09 10:03:01.400407074 +0000 UTC m=+438.945922236 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 10:03:01.809158 env[1475]: time="2024-02-09T10:03:01.809062804Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 10:03:01.823420 env[1475]: time="2024-02-09T10:03:01.823279050Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c\""
Feb 9 10:03:01.824327 env[1475]: time="2024-02-09T10:03:01.824230429Z" level=info msg="StartContainer for \"adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c\""
Feb 9 10:03:01.880519 systemd[1]: Started cri-containerd-adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c.scope.
Feb 9 10:03:01.946685 env[1475]: time="2024-02-09T10:03:01.946557358Z" level=info msg="StartContainer for \"adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c\" returns successfully"
Feb 9 10:03:01.947351 systemd[1]: cri-containerd-adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c.scope: Deactivated successfully.
Feb 9 10:03:02.010301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c-rootfs.mount: Deactivated successfully.
Feb 9 10:03:02.011244 env[1475]: time="2024-02-09T10:03:02.011113132Z" level=info msg="shim disconnected" id=adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c
Feb 9 10:03:02.011244 env[1475]: time="2024-02-09T10:03:02.011215969Z" level=warning msg="cleaning up after shim disconnected" id=adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c namespace=k8s.io
Feb 9 10:03:02.011244 env[1475]: time="2024-02-09T10:03:02.011246337Z" level=info msg="cleaning up dead shim"
Feb 9 10:03:02.040695 env[1475]: time="2024-02-09T10:03:02.040577533Z" level=warning msg="cleanup warnings time=\"2024-02-09T10:03:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5428 runtime=io.containerd.runc.v2\n"
Feb 9 10:03:02.518115 kubelet[2556]: E0209 10:03:02.518065 2556 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-c8zxm" podUID=e87d14d0-d2c2-4c1a-a26c-76cab561e426
Feb 9 10:03:02.654961 kubelet[2556]: E0209 10:03:02.654857 2556 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 10:03:02.818173 env[1475]: time="2024-02-09T10:03:02.817940455Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 10:03:02.837313 env[1475]: time="2024-02-09T10:03:02.837177595Z" level=info msg="CreateContainer within sandbox \"f397fcfec252a0a2760068c7f0bed4d45a772099904d558126c1931f6e9a792b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"caf134e5527bdf65e43de232427d1a9a4596b88e0bb4a0a230f0855f61487d2e\""
Feb 9 10:03:02.837887 env[1475]: time="2024-02-09T10:03:02.837871018Z" level=info msg="StartContainer for \"caf134e5527bdf65e43de232427d1a9a4596b88e0bb4a0a230f0855f61487d2e\""
Feb 9 10:03:02.839545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264621067.mount: Deactivated successfully.
Feb 9 10:03:02.846783 systemd[1]: Started cri-containerd-caf134e5527bdf65e43de232427d1a9a4596b88e0bb4a0a230f0855f61487d2e.scope.
Feb 9 10:03:02.872565 env[1475]: time="2024-02-09T10:03:02.872508212Z" level=info msg="StartContainer for \"caf134e5527bdf65e43de232427d1a9a4596b88e0bb4a0a230f0855f61487d2e\" returns successfully"
Feb 9 10:03:03.027271 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 10:03:03.442922 kubelet[2556]: W0209 10:03:03.442848 2556 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9102be1f_358b_45e8_bfa5_237edd6ebd19.slice/cri-containerd-939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84.scope WatchSource:0}: task 939d63ec65ef52d6341a0af0aba877ef99677e294d21ddb6fd710db6def1fb84 not found: not found
Feb 9 10:03:03.856761 kubelet[2556]: I0209 10:03:03.856703 2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jrrft" podStartSLOduration=5.856616771 pod.CreationTimestamp="2024-02-09 10:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 10:03:03.856121236 +0000 UTC m=+441.401636415" watchObservedRunningTime="2024-02-09 10:03:03.856616771 +0000 UTC m=+441.402131928"
Feb 9 10:03:04.518647 kubelet[2556]: E0209 10:03:04.518599 2556 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-c8zxm" podUID=e87d14d0-d2c2-4c1a-a26c-76cab561e426
Feb 9 10:03:05.907499 systemd-networkd[1308]: lxc_health: Link UP
Feb 9 10:03:05.927113 systemd-networkd[1308]: lxc_health: Gained carrier
Feb 9 10:03:05.927281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 10:03:06.518747 kubelet[2556]: E0209 10:03:06.518683 2556 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-c8zxm" podUID=e87d14d0-d2c2-4c1a-a26c-76cab561e426
Feb 9 10:03:06.554181 kubelet[2556]: W0209 10:03:06.554110 2556 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9102be1f_358b_45e8_bfa5_237edd6ebd19.slice/cri-containerd-8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226.scope WatchSource:0}: task 8cd4ab0d2193086e45c368033692214c44e52277272f691ca5baba6412d1a226 not found: not found
Feb 9 10:03:07.203348 systemd-networkd[1308]: lxc_health: Gained IPv6LL
Feb 9 10:03:09.658750 kubelet[2556]: W0209 10:03:09.658700 2556 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9102be1f_358b_45e8_bfa5_237edd6ebd19.slice/cri-containerd-7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b.scope WatchSource:0}: task 7e0b7a1e841c5cde55dd57604325f47e1bf159c9096dedccf8bdac7b3db9454b not found: not found
Feb 9 10:03:12.767671 kubelet[2556]: W0209 10:03:12.767547 2556 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9102be1f_358b_45e8_bfa5_237edd6ebd19.slice/cri-containerd-adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c.scope WatchSource:0}: task adc9d86ed084097bbd7e53b0d54b54fb223b497ca7f07f4008bd3ad7f9e9c22c not found: not found
Feb 9 10:03:13.815857 sshd[5030]: pam_unix(sshd:session): session closed for user core
Feb 9 10:03:13.817148 systemd[1]: sshd@27-86.109.11.101:22-147.75.109.163:34884.service: Deactivated successfully.
Feb 9 10:03:13.817589 systemd[1]: session-29.scope: Deactivated successfully.
Feb 9 10:03:13.817996 systemd-logind[1463]: Session 29 logged out. Waiting for processes to exit.
Feb 9 10:03:13.818600 systemd-logind[1463]: Removed session 29.