Feb 14 00:38:27.161551 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Feb 14 00:38:27.161574 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 14 00:38:27.161598 kernel: KASLR enabled Feb 14 00:38:27.161604 kernel: efi: EFI v2.7 by American Megatrends Feb 14 00:38:27.161610 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea465818 RNG=0xebf00018 MEMRESERVE=0xe4642f98 Feb 14 00:38:27.161616 kernel: random: crng init done Feb 14 00:38:27.161623 kernel: esrt: Reserving ESRT space from 0x00000000ea465818 to 0x00000000ea465878. Feb 14 00:38:27.161628 kernel: ACPI: Early table checksum verification disabled Feb 14 00:38:27.161636 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Feb 14 00:38:27.161642 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Feb 14 00:38:27.161648 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Feb 14 00:38:27.161654 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Feb 14 00:38:27.161660 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Feb 14 00:38:27.161667 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Feb 14 00:38:27.161676 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Feb 14 00:38:27.161682 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 00:38:27.161689 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Feb 14 00:38:27.161695 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 00:38:27.161702 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Feb 14 00:38:27.161708 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Feb 14 00:38:27.161714 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Feb 14 00:38:27.161721 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Feb 14 00:38:27.161727 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Feb 14 00:38:27.161735 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Feb 14 00:38:27.161741 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Feb 14 00:38:27.161748 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Feb 14 00:38:27.161754 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Feb 14 00:38:27.161761 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Feb 14 00:38:27.161767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Feb 14 00:38:27.161773 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Feb 14 00:38:27.161780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Feb 14 00:38:27.161786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Feb 14 00:38:27.161793 kernel: NUMA: NODE_DATA [mem 0x83fdffca800-0x83fdffcffff] Feb 14 00:38:27.161799 kernel: Zone ranges: Feb 14 00:38:27.161805 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Feb 14 00:38:27.161813 kernel: DMA32 empty Feb 14 00:38:27.161820 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Feb 14 00:38:27.161826 kernel: Movable zone start for each node Feb 14 00:38:27.161833 kernel: Early memory node ranges Feb 14 00:38:27.161839 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Feb 14 00:38:27.161848 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Feb 14 00:38:27.161855 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Feb 14 00:38:27.161863 kernel: node 0: [mem 0x0000000094000000-0x00000000eba36fff] Feb 14 00:38:27.161870 kernel: node 0: [mem 0x00000000eba37000-0x00000000ebeadfff] Feb 14 00:38:27.161877 kernel: node 0: [mem 0x00000000ebeae000-0x00000000ebeaefff] Feb 14 00:38:27.161884 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] Feb 14 00:38:27.161890 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Feb 14 00:38:27.161897 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Feb 14 00:38:27.161904 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Feb 14 00:38:27.161910 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] Feb 14 00:38:27.161917 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff] Feb 14 00:38:27.161924 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff] Feb 14 00:38:27.161932 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Feb 14 00:38:27.161939 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Feb 14 00:38:27.161945 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Feb 14 00:38:27.161952 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Feb 14 00:38:27.161958 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Feb 14 00:38:27.161965 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Feb 14 00:38:27.161972 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Feb 14 00:38:27.161979 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Feb 14 00:38:27.161985 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Feb 14 00:38:27.161992 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Feb 14 00:38:27.161999 kernel: psci: probing for conduit method from ACPI. Feb 14 00:38:27.162007 kernel: psci: PSCIv1.1 detected in firmware. Feb 14 00:38:27.162013 kernel: psci: Using standard PSCI v0.2 function IDs Feb 14 00:38:27.162020 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Feb 14 00:38:27.162027 kernel: psci: SMC Calling Convention v1.2 Feb 14 00:38:27.162033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Feb 14 00:38:27.162040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Feb 14 00:38:27.162047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Feb 14 00:38:27.162053 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Feb 14 00:38:27.162060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Feb 14 00:38:27.162067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Feb 14 00:38:27.162073 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Feb 14 00:38:27.162080 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Feb 14 00:38:27.162088 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Feb 14 00:38:27.162095 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Feb 14 00:38:27.162102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Feb 14 00:38:27.162108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Feb 14 00:38:27.162115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Feb 14 00:38:27.162122 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Feb 14 00:38:27.162129 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Feb 14 00:38:27.162135 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Feb 14 00:38:27.162142 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Feb 14 00:38:27.162149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Feb 14 00:38:27.162156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Feb 14 00:38:27.162162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Feb 14 00:38:27.162170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Feb 14 00:38:27.162177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Feb 14 00:38:27.162184 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Feb 14 00:38:27.162190 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Feb 14 00:38:27.162197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Feb 14 00:38:27.162204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Feb 14 00:38:27.162210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Feb 14 00:38:27.162217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Feb 14 00:38:27.162224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Feb 14 00:38:27.162230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Feb 14 00:38:27.162237 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Feb 14 00:38:27.162245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Feb 14 00:38:27.162251 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Feb 14 00:38:27.162258 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Feb 14 00:38:27.162265 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Feb 14 00:38:27.162272 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Feb 14 00:38:27.162278 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Feb 14 00:38:27.162285 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Feb 14 00:38:27.162292 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Feb 14 00:38:27.162298 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Feb 14 00:38:27.162305 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Feb 14 00:38:27.162312 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Feb 14 00:38:27.162318 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Feb 14 00:38:27.162326 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Feb 14 00:38:27.162333 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Feb 14 00:38:27.162340 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Feb 14 00:38:27.162346 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Feb 14 00:38:27.162353 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Feb 14 00:38:27.162360 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Feb 14 00:38:27.162366 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Feb 14 00:38:27.162373 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Feb 14 00:38:27.162387 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Feb 14 00:38:27.162394 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Feb 14 00:38:27.162402 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Feb 14 00:38:27.162410 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Feb 14 00:38:27.162417 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Feb 14 00:38:27.162424 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Feb 14 00:38:27.162431 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Feb 14 00:38:27.162438 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Feb 14 00:38:27.162447 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Feb 14 00:38:27.162454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Feb 14 00:38:27.162461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Feb 14 00:38:27.162468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Feb 14 00:38:27.162475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Feb 14 00:38:27.162482 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Feb 14 00:38:27.162489 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Feb 14 00:38:27.162496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Feb 14 00:38:27.162503 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Feb 14 00:38:27.162510 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Feb 14 00:38:27.162518 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Feb 14 00:38:27.162525 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Feb 14 00:38:27.162533 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Feb 14 00:38:27.162540 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Feb 14 00:38:27.162547 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Feb 14 00:38:27.162554 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Feb 14 00:38:27.162561 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Feb 14 00:38:27.162568 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Feb 14 00:38:27.162576 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Feb 14 00:38:27.162615 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Feb 14 00:38:27.162623 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Feb 14 00:38:27.162630 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 14 00:38:27.162637 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 14 00:38:27.162647 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Feb 14 00:38:27.162654 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Feb 14 00:38:27.162661 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Feb 14 00:38:27.162668 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Feb 14 00:38:27.162676 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Feb 14 00:38:27.162683 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Feb 14 00:38:27.162690 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Feb 14 00:38:27.162697 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Feb 14 00:38:27.162704 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Feb 14 00:38:27.162711 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Feb 14 00:38:27.162718 kernel: Detected PIPT I-cache on CPU0 Feb 14 00:38:27.162727 kernel: CPU features: detected: GIC system register CPU interface Feb 14 00:38:27.162734 kernel: CPU features: detected: Virtualization Host Extensions Feb 14 00:38:27.162741 kernel: CPU features: detected: Hardware dirty bit management Feb 14 00:38:27.162748 kernel: CPU features: detected: Spectre-v4 Feb 14 00:38:27.162755 kernel: CPU features: detected: Spectre-BHB Feb 14 00:38:27.162763 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 14 00:38:27.162770 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 14 00:38:27.162777 kernel: CPU features: detected: ARM erratum 1418040 Feb 14 00:38:27.162784 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 14 00:38:27.162791 kernel: alternatives: applying boot alternatives Feb 14 00:38:27.162800 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 14 00:38:27.162809 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 14 00:38:27.162816 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Feb 14 00:38:27.162823 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Feb 14 00:38:27.162830 kernel: printk: log_buf_len min size: 262144 bytes Feb 14 00:38:27.162837 kernel: printk: log_buf_len: 1048576 bytes Feb 14 00:38:27.162844 kernel: printk: early log buf free: 249904(95%) Feb 14 00:38:27.162852 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Feb 14 00:38:27.162859 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Feb 14 00:38:27.162866 kernel: Fallback order for Node 0: 0 Feb 14 00:38:27.162873 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 Feb 14 00:38:27.162880 kernel: Policy zone: Normal Feb 14 00:38:27.162889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 14 00:38:27.162896 kernel: software IO TLB: area num 128. 
Feb 14 00:38:27.162903 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Feb 14 00:38:27.162910 kernel: Memory: 262922516K/268174336K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 5251820K reserved, 0K cma-reserved) Feb 14 00:38:27.162918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Feb 14 00:38:27.162925 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 14 00:38:27.162933 kernel: rcu: RCU event tracing is enabled. Feb 14 00:38:27.162940 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Feb 14 00:38:27.162947 kernel: Trampoline variant of Tasks RCU enabled. Feb 14 00:38:27.162954 kernel: Tracing variant of Tasks RCU enabled. Feb 14 00:38:27.162962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 14 00:38:27.162970 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Feb 14 00:38:27.162978 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 14 00:38:27.162985 kernel: GICv3: GIC: Using split EOI/Deactivate mode Feb 14 00:38:27.162992 kernel: GICv3: 672 SPIs implemented Feb 14 00:38:27.162999 kernel: GICv3: 0 Extended SPIs implemented Feb 14 00:38:27.163006 kernel: Root IRQ handler: gic_handle_irq Feb 14 00:38:27.163013 kernel: GICv3: GICv3 features: 16 PPIs Feb 14 00:38:27.163020 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Feb 14 00:38:27.163027 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Feb 14 00:38:27.163034 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Feb 14 00:38:27.163041 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Feb 14 00:38:27.163048 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Feb 14 00:38:27.163055 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Feb 14 00:38:27.163064 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Feb 14 00:38:27.163071 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Feb 14 00:38:27.163078 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Feb 14 00:38:27.163085 kernel: ITS [mem 0x100100040000-0x10010005ffff] Feb 14 00:38:27.163092 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163100 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163107 kernel: ITS [mem 0x100100060000-0x10010007ffff] Feb 14 00:38:27.163114 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163121 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163128 kernel: ITS [mem 0x100100080000-0x10010009ffff] Feb 14 00:38:27.163136 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163144 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163151 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Feb 14 00:38:27.163159 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163166 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163173 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Feb 14 00:38:27.163180 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163187 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163195 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Feb 14 00:38:27.163202 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163209 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163216 kernel: ITS [mem 0x100100100000-0x10010011ffff] Feb 14 00:38:27.163225 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163233 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163240 kernel: ITS [mem 0x100100120000-0x10010013ffff] Feb 14 00:38:27.163247 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) Feb 14 00:38:27.163254 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) Feb 14 00:38:27.163261 kernel: GICv3: using LPI property table @0x00000800003e0000 Feb 14 00:38:27.163269 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 Feb 14 00:38:27.163276 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 14 00:38:27.163283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163290 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Feb 14 00:38:27.163297 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Feb 14 00:38:27.163306 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 14 00:38:27.163314 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 14 00:38:27.163321 kernel: Console: colour dummy device 80x25 Feb 14 00:38:27.163328 kernel: printk: console [tty0] enabled Feb 14 00:38:27.163336 kernel: ACPI: Core revision 20230628 Feb 14 00:38:27.163343 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 14 00:38:27.163350 kernel: pid_max: default: 81920 minimum: 640 Feb 14 00:38:27.163358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 14 00:38:27.163365 kernel: landlock: Up and running. Feb 14 00:38:27.163372 kernel: SELinux: Initializing. Feb 14 00:38:27.163380 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.163388 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.163395 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Feb 14 00:38:27.163403 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Feb 14 00:38:27.163410 kernel: rcu: Hierarchical SRCU implementation. Feb 14 00:38:27.163417 kernel: rcu: Max phase no-delay instances is 400. 
Feb 14 00:38:27.163425 kernel: Platform MSI: ITS@0x100100040000 domain created Feb 14 00:38:27.163432 kernel: Platform MSI: ITS@0x100100060000 domain created Feb 14 00:38:27.163439 kernel: Platform MSI: ITS@0x100100080000 domain created Feb 14 00:38:27.163448 kernel: Platform MSI: ITS@0x1001000a0000 domain created Feb 14 00:38:27.163455 kernel: Platform MSI: ITS@0x1001000c0000 domain created Feb 14 00:38:27.163462 kernel: Platform MSI: ITS@0x1001000e0000 domain created Feb 14 00:38:27.163469 kernel: Platform MSI: ITS@0x100100100000 domain created Feb 14 00:38:27.163477 kernel: Platform MSI: ITS@0x100100120000 domain created Feb 14 00:38:27.163484 kernel: PCI/MSI: ITS@0x100100040000 domain created Feb 14 00:38:27.163491 kernel: PCI/MSI: ITS@0x100100060000 domain created Feb 14 00:38:27.163498 kernel: PCI/MSI: ITS@0x100100080000 domain created Feb 14 00:38:27.163506 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Feb 14 00:38:27.163514 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Feb 14 00:38:27.163521 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Feb 14 00:38:27.163528 kernel: PCI/MSI: ITS@0x100100100000 domain created Feb 14 00:38:27.163536 kernel: PCI/MSI: ITS@0x100100120000 domain created Feb 14 00:38:27.163543 kernel: Remapping and enabling EFI services. Feb 14 00:38:27.163550 kernel: smp: Bringing up secondary CPUs ... Feb 14 00:38:27.163557 kernel: Detected PIPT I-cache on CPU1 Feb 14 00:38:27.163564 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Feb 14 00:38:27.163572 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Feb 14 00:38:27.163584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163591 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Feb 14 00:38:27.163598 kernel: Detected PIPT I-cache on CPU2 Feb 14 00:38:27.163606 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Feb 14 00:38:27.163613 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Feb 14 00:38:27.163620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163627 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Feb 14 00:38:27.163634 kernel: Detected PIPT I-cache on CPU3 Feb 14 00:38:27.163642 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Feb 14 00:38:27.163649 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Feb 14 00:38:27.163658 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163665 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Feb 14 00:38:27.163672 kernel: Detected PIPT I-cache on CPU4 Feb 14 00:38:27.163679 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Feb 14 00:38:27.163686 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Feb 14 00:38:27.163694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163701 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Feb 14 00:38:27.163708 kernel: Detected PIPT I-cache on CPU5 Feb 14 00:38:27.163715 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Feb 14 00:38:27.163724 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 Feb 14 00:38:27.163731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163739 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] Feb 14 00:38:27.163746 kernel: Detected PIPT I-cache on CPU6 Feb 14 00:38:27.163753 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Feb 14 00:38:27.163760 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Feb 14 00:38:27.163768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163775 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Feb 14 00:38:27.163782 kernel: Detected PIPT I-cache on CPU7 Feb 14 00:38:27.163789 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Feb 14 00:38:27.163798 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Feb 14 00:38:27.163805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163812 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Feb 14 00:38:27.163819 kernel: Detected PIPT I-cache on CPU8 Feb 14 00:38:27.163827 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Feb 14 00:38:27.163834 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Feb 14 00:38:27.163841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163848 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Feb 14 00:38:27.163855 kernel: Detected PIPT I-cache on CPU9 Feb 14 00:38:27.163863 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Feb 14 00:38:27.163872 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Feb 14 00:38:27.163879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163886 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Feb 14 00:38:27.163893 kernel: Detected PIPT I-cache on CPU10 Feb 14 00:38:27.163901 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Feb 14 00:38:27.163908 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Feb 14 00:38:27.163915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163922 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Feb 14 00:38:27.163929 kernel: Detected PIPT I-cache on CPU11 Feb 14 00:38:27.163938 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Feb 14 00:38:27.163945 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Feb 14 00:38:27.163953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163960 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Feb 14 00:38:27.163967 kernel: Detected PIPT I-cache on CPU12 Feb 14 00:38:27.163974 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Feb 14 00:38:27.163982 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Feb 14 00:38:27.163989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.163996 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Feb 14 00:38:27.164003 kernel: Detected PIPT I-cache on CPU13 Feb 14 00:38:27.164012 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Feb 14 00:38:27.164019 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Feb 14 00:38:27.164027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164034 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
Feb 14 00:38:27.164041 kernel: Detected PIPT I-cache on CPU14 Feb 14 00:38:27.164049 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Feb 14 00:38:27.164056 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 Feb 14 00:38:27.164064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164071 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Feb 14 00:38:27.164079 kernel: Detected PIPT I-cache on CPU15 Feb 14 00:38:27.164087 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Feb 14 00:38:27.164094 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Feb 14 00:38:27.164101 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164108 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Feb 14 00:38:27.164116 kernel: Detected PIPT I-cache on CPU16 Feb 14 00:38:27.164123 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Feb 14 00:38:27.164130 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Feb 14 00:38:27.164138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164155 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Feb 14 00:38:27.164164 kernel: Detected PIPT I-cache on CPU17 Feb 14 00:38:27.164172 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Feb 14 00:38:27.164179 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Feb 14 00:38:27.164187 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164194 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Feb 14 00:38:27.164202 kernel: Detected PIPT I-cache on CPU18 Feb 14 00:38:27.164209 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Feb 14 00:38:27.164217 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Feb 14 00:38:27.164226 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164234 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Feb 14 00:38:27.164241 kernel: Detected PIPT I-cache on CPU19 Feb 14 00:38:27.164249 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Feb 14 00:38:27.164256 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Feb 14 00:38:27.164264 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164272 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Feb 14 00:38:27.164281 kernel: Detected PIPT I-cache on CPU20 Feb 14 00:38:27.164289 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Feb 14 00:38:27.164298 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Feb 14 00:38:27.164305 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164313 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Feb 14 00:38:27.164321 kernel: Detected PIPT I-cache on CPU21 Feb 14 00:38:27.164329 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Feb 14 00:38:27.164336 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Feb 14 00:38:27.164344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164353 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Feb 
14 00:38:27.164360 kernel: Detected PIPT I-cache on CPU22 Feb 14 00:38:27.164368 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Feb 14 00:38:27.164376 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Feb 14 00:38:27.164383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164391 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Feb 14 00:38:27.164398 kernel: Detected PIPT I-cache on CPU23 Feb 14 00:38:27.164406 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Feb 14 00:38:27.164413 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 Feb 14 00:38:27.164423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164430 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Feb 14 00:38:27.164438 kernel: Detected PIPT I-cache on CPU24 Feb 14 00:38:27.164447 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Feb 14 00:38:27.164455 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Feb 14 00:38:27.164462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164470 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Feb 14 00:38:27.164478 kernel: Detected PIPT I-cache on CPU25 Feb 14 00:38:27.164485 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Feb 14 00:38:27.164493 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Feb 14 00:38:27.164502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164509 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Feb 14 00:38:27.164517 kernel: Detected PIPT I-cache on CPU26 Feb 14 00:38:27.164525 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Feb 14 00:38:27.164532 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Feb 14 00:38:27.164540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164548 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Feb 14 00:38:27.164556 kernel: Detected PIPT I-cache on CPU27 Feb 14 00:38:27.164563 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Feb 14 00:38:27.164572 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Feb 14 00:38:27.164583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164591 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Feb 14 00:38:27.164598 kernel: Detected PIPT I-cache on CPU28 Feb 14 00:38:27.164606 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Feb 14 00:38:27.164614 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Feb 14 00:38:27.164621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164629 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Feb 14 00:38:27.164636 kernel: Detected PIPT I-cache on CPU29 Feb 14 00:38:27.164644 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Feb 14 00:38:27.164653 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Feb 14 00:38:27.164661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164669 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
Feb 14 00:38:27.164676 kernel: Detected PIPT I-cache on CPU30 Feb 14 00:38:27.164684 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Feb 14 00:38:27.164692 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Feb 14 00:38:27.164699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164707 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Feb 14 00:38:27.164715 kernel: Detected PIPT I-cache on CPU31 Feb 14 00:38:27.164724 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Feb 14 00:38:27.164732 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Feb 14 00:38:27.164740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164747 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Feb 14 00:38:27.164755 kernel: Detected PIPT I-cache on CPU32 Feb 14 00:38:27.164762 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Feb 14 00:38:27.164770 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 Feb 14 00:38:27.164778 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164785 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Feb 14 00:38:27.164794 kernel: Detected PIPT I-cache on CPU33 Feb 14 00:38:27.164802 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Feb 14 00:38:27.164810 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Feb 14 00:38:27.164818 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164825 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Feb 14 00:38:27.164833 kernel: Detected PIPT I-cache on CPU34 Feb 14 00:38:27.164841 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Feb 14 00:38:27.164848 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Feb 14 00:38:27.164856 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164864 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Feb 14 00:38:27.164873 kernel: Detected PIPT I-cache on CPU35 Feb 14 00:38:27.164880 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Feb 14 00:38:27.164888 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Feb 14 00:38:27.164896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164904 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Feb 14 00:38:27.164911 kernel: Detected PIPT I-cache on CPU36 Feb 14 00:38:27.164919 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Feb 14 00:38:27.164927 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Feb 14 00:38:27.164934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164943 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Feb 14 00:38:27.164951 kernel: Detected PIPT I-cache on CPU37 Feb 14 00:38:27.164959 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Feb 14 00:38:27.164967 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Feb 14 00:38:27.164976 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164983 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
Feb 14 00:38:27.164991 kernel: Detected PIPT I-cache on CPU38 Feb 14 00:38:27.164998 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Feb 14 00:38:27.165006 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Feb 14 00:38:27.165014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165022 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Feb 14 00:38:27.165030 kernel: Detected PIPT I-cache on CPU39 Feb 14 00:38:27.165038 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Feb 14 00:38:27.165045 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Feb 14 00:38:27.165053 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165060 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Feb 14 00:38:27.165068 kernel: Detected PIPT I-cache on CPU40 Feb 14 00:38:27.165076 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Feb 14 00:38:27.165085 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Feb 14 00:38:27.165092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165100 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Feb 14 00:38:27.165107 kernel: Detected PIPT I-cache on CPU41 Feb 14 00:38:27.165115 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Feb 14 00:38:27.165123 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 Feb 14 00:38:27.165130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165138 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Feb 14 00:38:27.165146 kernel: Detected PIPT I-cache on CPU42 Feb 14 00:38:27.165155 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Feb 14 00:38:27.165163 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Feb 14 00:38:27.165170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165178 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Feb 14 00:38:27.165185 kernel: Detected PIPT I-cache on CPU43 Feb 14 00:38:27.165193 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Feb 14 00:38:27.165201 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Feb 14 00:38:27.165208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165216 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Feb 14 00:38:27.165223 kernel: Detected PIPT I-cache on CPU44 Feb 14 00:38:27.165233 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Feb 14 00:38:27.165240 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Feb 14 00:38:27.165248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165255 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Feb 14 00:38:27.165263 kernel: Detected PIPT I-cache on CPU45 Feb 14 00:38:27.165271 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Feb 14 00:38:27.165278 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Feb 14 00:38:27.165286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165293 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] Feb 14 00:38:27.165303 kernel: Detected PIPT I-cache on CPU46 Feb 14 00:38:27.165310 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Feb 14 00:38:27.165318 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Feb 14 00:38:27.165326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165333 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Feb 14 00:38:27.165341 kernel: Detected PIPT I-cache on CPU47 Feb 14 00:38:27.165349 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Feb 14 00:38:27.165356 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Feb 14 00:38:27.165364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165372 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Feb 14 00:38:27.165381 kernel: Detected PIPT I-cache on CPU48 Feb 14 00:38:27.165388 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Feb 14 00:38:27.165396 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Feb 14 00:38:27.165404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165411 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Feb 14 00:38:27.165419 kernel: Detected PIPT I-cache on CPU49 Feb 14 00:38:27.165427 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Feb 14 00:38:27.165435 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Feb 14 00:38:27.165443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165452 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Feb 14 00:38:27.165460 kernel: Detected PIPT I-cache on CPU50 Feb 14 00:38:27.165468 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Feb 14 00:38:27.165476 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 Feb 14 00:38:27.165483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165491 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Feb 14 00:38:27.165498 kernel: Detected PIPT I-cache on CPU51 Feb 14 00:38:27.165506 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Feb 14 00:38:27.165514 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Feb 14 00:38:27.165523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165531 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Feb 14 00:38:27.165538 kernel: Detected PIPT I-cache on CPU52 Feb 14 00:38:27.165546 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Feb 14 00:38:27.165554 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Feb 14 00:38:27.165561 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165569 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Feb 14 00:38:27.165577 kernel: Detected PIPT I-cache on CPU53 Feb 14 00:38:27.165586 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Feb 14 00:38:27.165594 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Feb 14 00:38:27.165603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165611 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] Feb 14 00:38:27.165619 kernel: Detected PIPT I-cache on CPU54 Feb 14 00:38:27.165626 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Feb 14 00:38:27.165634 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Feb 14 00:38:27.165642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165649 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Feb 14 00:38:27.165657 kernel: Detected PIPT I-cache on CPU55 Feb 14 00:38:27.165664 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Feb 14 00:38:27.165673 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Feb 14 00:38:27.165681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165689 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Feb 14 00:38:27.165696 kernel: Detected PIPT I-cache on CPU56 Feb 14 00:38:27.165705 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Feb 14 00:38:27.165713 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Feb 14 00:38:27.165721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165728 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Feb 14 00:38:27.165736 kernel: Detected PIPT I-cache on CPU57 Feb 14 00:38:27.165744 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Feb 14 00:38:27.165753 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Feb 14 00:38:27.165760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165768 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Feb 14 00:38:27.165776 kernel: Detected PIPT I-cache on CPU58 Feb 14 00:38:27.165783 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Feb 14 00:38:27.165791 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Feb 14 00:38:27.165799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165806 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Feb 14 00:38:27.165814 kernel: Detected PIPT I-cache on CPU59 Feb 14 00:38:27.165823 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Feb 14 00:38:27.165831 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 Feb 14 00:38:27.165838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165846 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Feb 14 00:38:27.165854 kernel: Detected PIPT I-cache on CPU60 Feb 14 00:38:27.165861 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Feb 14 00:38:27.165869 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Feb 14 00:38:27.165877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165884 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Feb 14 00:38:27.165892 kernel: Detected PIPT I-cache on CPU61 Feb 14 00:38:27.165901 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Feb 14 00:38:27.165908 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Feb 14 00:38:27.165916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165924 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] Feb 14 00:38:27.165932 kernel: Detected PIPT I-cache on CPU62 Feb 14 00:38:27.165939 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Feb 14 00:38:27.165947 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Feb 14 00:38:27.165955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165962 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Feb 14 00:38:27.165971 kernel: Detected PIPT I-cache on CPU63 Feb 14 00:38:27.165979 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Feb 14 00:38:27.165987 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Feb 14 00:38:27.165995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166002 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Feb 14 00:38:27.166010 kernel: Detected PIPT I-cache on CPU64 Feb 14 00:38:27.166018 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Feb 14 00:38:27.166025 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Feb 14 00:38:27.166033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166041 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Feb 14 00:38:27.166050 kernel: Detected PIPT I-cache on CPU65 Feb 14 00:38:27.166057 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Feb 14 00:38:27.166065 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Feb 14 00:38:27.166073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166080 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Feb 14 00:38:27.166088 kernel: Detected PIPT I-cache on CPU66 Feb 14 00:38:27.166096 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Feb 14 00:38:27.166104 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Feb 14 00:38:27.166111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166120 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Feb 14 00:38:27.166128 kernel: Detected PIPT I-cache on CPU67 Feb 14 00:38:27.166136 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Feb 14 00:38:27.166143 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Feb 14 00:38:27.166151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166159 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Feb 14 00:38:27.166166 kernel: Detected PIPT I-cache on CPU68 Feb 14 00:38:27.166174 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Feb 14 00:38:27.166181 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 Feb 14 00:38:27.166191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166198 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Feb 14 00:38:27.166206 kernel: Detected PIPT I-cache on CPU69 Feb 14 00:38:27.166214 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Feb 14 00:38:27.166221 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Feb 14 00:38:27.166229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166237 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] Feb 14 00:38:27.166244 kernel: Detected PIPT I-cache on CPU70 Feb 14 00:38:27.166252 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Feb 14 00:38:27.166259 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Feb 14 00:38:27.166269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166276 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Feb 14 00:38:27.166284 kernel: Detected PIPT I-cache on CPU71 Feb 14 00:38:27.166292 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Feb 14 00:38:27.166299 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Feb 14 00:38:27.166307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166315 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Feb 14 00:38:27.166322 kernel: Detected PIPT I-cache on CPU72 Feb 14 00:38:27.166330 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Feb 14 00:38:27.166339 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Feb 14 00:38:27.166347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166355 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Feb 14 00:38:27.166362 kernel: Detected PIPT I-cache on CPU73 Feb 14 00:38:27.166370 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Feb 14 00:38:27.166377 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Feb 14 00:38:27.166385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166393 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Feb 14 00:38:27.166400 kernel: Detected PIPT I-cache on CPU74 Feb 14 00:38:27.166408 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Feb 14 00:38:27.166417 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Feb 14 00:38:27.166425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166432 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Feb 14 00:38:27.166440 kernel: Detected PIPT I-cache on CPU75 Feb 14 00:38:27.166447 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Feb 14 00:38:27.166455 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Feb 14 00:38:27.166463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166470 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Feb 14 00:38:27.166478 kernel: Detected PIPT I-cache on CPU76 Feb 14 00:38:27.166487 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Feb 14 00:38:27.166495 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Feb 14 00:38:27.166503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166510 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Feb 14 00:38:27.166518 kernel: Detected PIPT I-cache on CPU77 Feb 14 00:38:27.166526 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Feb 14 00:38:27.166534 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Feb 14 00:38:27.166541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166549 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] Feb 14 00:38:27.166557 kernel: Detected PIPT I-cache on CPU78 Feb 14 00:38:27.166565 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Feb 14 00:38:27.166573 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Feb 14 00:38:27.166583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166591 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Feb 14 00:38:27.166598 kernel: Detected PIPT I-cache on CPU79 Feb 14 00:38:27.166606 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Feb 14 00:38:27.166614 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Feb 14 00:38:27.166622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166629 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Feb 14 00:38:27.166639 kernel: smp: Brought up 1 node, 80 CPUs Feb 14 00:38:27.166647 kernel: SMP: Total of 80 processors activated. Feb 14 00:38:27.166654 kernel: CPU features: detected: 32-bit EL0 Support Feb 14 00:38:27.166662 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 14 00:38:27.166670 kernel: CPU features: detected: Common not Private translations Feb 14 00:38:27.166677 kernel: CPU features: detected: CRC32 instructions Feb 14 00:38:27.166685 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 14 00:38:27.166693 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 14 00:38:27.166701 kernel: CPU features: detected: LSE atomic instructions Feb 14 00:38:27.166710 kernel: CPU features: detected: Privileged Access Never Feb 14 00:38:27.166717 kernel: CPU features: detected: RAS Extension Support Feb 14 00:38:27.166725 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 14 00:38:27.166733 kernel: CPU: All CPU(s) started at EL2 Feb 14 00:38:27.166740 kernel: alternatives: applying system-wide alternatives Feb 14 00:38:27.166748 kernel: devtmpfs: initialized Feb 14 00:38:27.166755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 14 00:38:27.166763 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.166771 kernel: pinctrl core: initialized pinctrl subsystem Feb 14 00:38:27.166780 kernel: SMBIOS 3.4.0 present. Feb 14 00:38:27.166787 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Feb 14 00:38:27.166795 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 14 00:38:27.166803 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Feb 14 00:38:27.166811 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 14 00:38:27.166818 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 14 00:38:27.166826 kernel: audit: initializing netlink subsys (disabled) Feb 14 00:38:27.166833 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Feb 14 00:38:27.166841 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 14 00:38:27.166850 kernel: cpuidle: using governor menu Feb 14 00:38:27.166858 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 14 00:38:27.166865 kernel: ASID allocator initialised with 32768 entries Feb 14 00:38:27.166873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 14 00:38:27.166880 kernel: Serial: AMBA PL011 UART driver Feb 14 00:38:27.166888 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 14 00:38:27.166896 kernel: Modules: 0 pages in range for non-PLT usage Feb 14 00:38:27.166903 kernel: Modules: 509040 pages in range for PLT usage Feb 14 00:38:27.166911 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 14 00:38:27.166920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 14 00:38:27.166927 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 14 00:38:27.166935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 14 00:38:27.166943 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 14 00:38:27.166951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 14 00:38:27.166958 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 14 00:38:27.166966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 14 00:38:27.166974 kernel: ACPI: Added _OSI(Module Device) Feb 14 00:38:27.166981 kernel: ACPI: Added _OSI(Processor Device) Feb 14 00:38:27.166990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 14 00:38:27.166998 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 14 00:38:27.167005 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Feb 14 00:38:27.167013 kernel: ACPI: Interpreter enabled Feb 14 00:38:27.167021 kernel: ACPI: Using GIC for interrupt routing Feb 14 00:38:27.167028 kernel: ACPI: MCFG table detected, 8 entries Feb 14 00:38:27.167036 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167044 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167051 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167060 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167068 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167076 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167083 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167091 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167099 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Feb 14 00:38:27.167107 kernel: printk: console [ttyAMA0] enabled Feb 14 00:38:27.167114 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Feb 14 00:38:27.167122 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Feb 14 00:38:27.167249 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.167323 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.167386 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.167448 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.167511 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.167572 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] Feb 14 00:38:27.167589 kernel: PCI host bridge to bus 000d:00 Feb 14 00:38:27.167661 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Feb 14 00:38:27.167720 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Feb 14 00:38:27.167776 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Feb 14 00:38:27.167857 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.167931 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.167998 kernel: pci 000d:00:01.0: enabling Extended Tags Feb 14 00:38:27.168065 kernel: pci 000d:00:01.0: supports D1 D2 Feb 14 00:38:27.168131 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168203 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.168269 kernel: pci 000d:00:02.0: supports D1 D2 Feb 14 00:38:27.168333 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168405 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.168472 kernel: pci 000d:00:03.0: supports D1 D2 Feb 14 00:38:27.168539 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168637 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.168703 kernel: pci 000d:00:04.0: supports D1 D2 Feb 14 00:38:27.168766 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168776 kernel: acpiphp: Slot [1] registered Feb 14 00:38:27.168784 kernel: acpiphp: Slot [2] registered Feb 14 00:38:27.168792 kernel: acpiphp: Slot [3] registered Feb 14 00:38:27.168803 kernel: acpiphp: Slot [4] registered Feb 14 00:38:27.168860 kernel: pci_bus 000d:00: on NUMA node 0 Feb 14 00:38:27.168925 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.168991 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.169055 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.169121 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.169186 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.169254 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.169320 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.169383 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.169449 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.169513 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.169577 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.169646 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.169713 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Feb 14 00:38:27.169777 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.169841 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] Feb 14 00:38:27.169905 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.169969 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Feb 14 00:38:27.170034 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.170099 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Feb 14 00:38:27.170163 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.170230 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170295 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170358 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170423 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170487 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170551 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170619 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170688 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170752 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170817 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170880 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170945 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171009 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.171073 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171136 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.171202 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171266 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.171331 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Feb 14 00:38:27.171395 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.171459 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.171524 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Feb 14 00:38:27.171591 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.171658 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.171723 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Feb 14 00:38:27.171787 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.171852 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.171916 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Feb 14 00:38:27.171980 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.172042 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Feb 14 00:38:27.172099 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Feb 14 00:38:27.172170 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Feb 14 00:38:27.172231 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.172300 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] Feb 14 00:38:27.172361 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.172439 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Feb 14 00:38:27.172501 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.172567 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Feb 14 00:38:27.172632 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.172643 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Feb 14 00:38:27.172712 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.172779 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.172841 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.172904 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.172965 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.173027 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Feb 14 00:38:27.173037 kernel: PCI host bridge to bus 0000:00 Feb 14 00:38:27.173100 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Feb 14 00:38:27.173161 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 00:38:27.173217 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 14 00:38:27.173289 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.173361 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.173425 kernel: pci 0000:00:01.0: enabling Extended Tags Feb 14 00:38:27.173490 kernel: pci 0000:00:01.0: supports D1 D2 Feb 14 00:38:27.173553 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.173629 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.173693 kernel: pci 0000:00:02.0: supports D1 D2 Feb 14 00:38:27.173758 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.173830 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.173896 kernel: pci 0000:00:03.0: supports D1 D2 Feb 14 00:38:27.173959 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.174030 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.174096 kernel: pci 0000:00:04.0: supports D1 D2 Feb 14 00:38:27.174161 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.174171 kernel: acpiphp: Slot [1-1] registered Feb 14 00:38:27.174179 kernel: acpiphp: Slot [2-1] registered Feb 14 00:38:27.174186 kernel: acpiphp: Slot [3-1] registered Feb 14 00:38:27.174194 kernel: acpiphp: Slot [4-1] registered Feb 14 00:38:27.174249 kernel: pci_bus 0000:00: on NUMA node 0 Feb 14 00:38:27.174313 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.174377 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.174445 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.174509 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.174573 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.174641 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.174705 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.174769 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.174835 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.174900 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.174964 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.175028 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.175092 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Feb 14 00:38:27.175157 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 00:38:27.175221 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Feb 14 00:38:27.175287 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.175351 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Feb 14 00:38:27.175416 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.175480 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Feb 14 00:38:27.175545 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.175614 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175677 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.175743 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175809 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.175874 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175937 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176003 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176066 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176131 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176194 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176258 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176321 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176387 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176451 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176517 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176584 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176648 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.176712 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Feb 14 00:38:27.176776 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
Feb 14 00:38:27.176841 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.176906 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Feb 14 00:38:27.176971 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.177035 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.177101 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Feb 14 00:38:27.177166 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.177232 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.177295 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Feb 14 00:38:27.177360 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.177418 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Feb 14 00:38:27.177478 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 00:38:27.177546 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Feb 14 00:38:27.177609 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 00:38:27.177676 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Feb 14 00:38:27.177736 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.177811 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Feb 14 00:38:27.177874 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.177942 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Feb 14 00:38:27.178001 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.178011 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Feb 14 00:38:27.178082 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.178145 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.178211 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.178273 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.178335 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.178397 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Feb 14 00:38:27.178407 kernel: PCI host bridge to bus 0005:00 Feb 14 00:38:27.178471 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Feb 14 00:38:27.178530 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 00:38:27.178589 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Feb 14 00:38:27.178663 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.178735 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.178801 kernel: pci 0005:00:01.0: supports D1 D2 Feb 14 00:38:27.178865 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.178939 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.179004 kernel: pci 0005:00:03.0: supports D1 D2 Feb 14 00:38:27.179072 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179141 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.179207 
kernel: pci 0005:00:05.0: supports D1 D2 Feb 14 00:38:27.179271 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179344 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 00:38:27.179410 kernel: pci 0005:00:07.0: supports D1 D2 Feb 14 00:38:27.179474 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179486 kernel: acpiphp: Slot [1-2] registered Feb 14 00:38:27.179494 kernel: acpiphp: Slot [2-2] registered Feb 14 00:38:27.179564 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Feb 14 00:38:27.179637 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Feb 14 00:38:27.179704 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Feb 14 00:38:27.179777 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Feb 14 00:38:27.179845 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Feb 14 00:38:27.179912 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Feb 14 00:38:27.179972 kernel: pci_bus 0005:00: on NUMA node 0 Feb 14 00:38:27.180036 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.180101 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.180166 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.180256 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.180323 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.180394 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.180460 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.180527 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.180604 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 00:38:27.180669 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.180734 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.180798 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Feb 14 00:38:27.180866 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Feb 14 00:38:27.180932 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.180997 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Feb 14 00:38:27.181060 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.181124 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Feb 14 00:38:27.181187 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.181253 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Feb 14 00:38:27.181316 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.181383 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
Feb 14 00:38:27.181447 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181512 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181577 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181647 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181710 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181775 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181840 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181907 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181972 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182036 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182100 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182165 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182230 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182295 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182360 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182424 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.182491 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Feb 14 00:38:27.182555 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.182623 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.182686 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Feb 14 00:38:27.182761 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.182829 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Feb 14 00:38:27.182899 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Feb 14 00:38:27.182963 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Feb 14 00:38:27.183027 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Feb 14 00:38:27.183093 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.183160 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Feb 14 00:38:27.183227 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Feb 14 00:38:27.183291 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Feb 14 00:38:27.183358 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Feb 14 00:38:27.183422 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.183482 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Feb 14 00:38:27.183539 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 00:38:27.183612 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Feb 14 00:38:27.183673 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.183751 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Feb 14 00:38:27.183812 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.183878 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Feb 14 00:38:27.183940 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.184006 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Feb 14 00:38:27.184069 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.184079 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Feb 14 00:38:27.184149 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.184216 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.184279 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.184342 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.184403 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.184477 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Feb 14 00:38:27.184487 kernel: PCI host bridge to bus 0003:00 Feb 14 00:38:27.184552 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Feb 14 00:38:27.184647 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Feb 14 00:38:27.184708 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Feb 14 00:38:27.184783 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.184857 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.184929 kernel: pci 0003:00:01.0: supports D1 D2 Feb 14 00:38:27.184995 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185066 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.185129 kernel: pci 0003:00:03.0: supports D1 D2 Feb 14 00:38:27.185193 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185263 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.185327 kernel: pci 0003:00:05.0: supports D1 D2 Feb 14 00:38:27.185393 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185403 kernel: acpiphp: Slot [1-3] registered Feb 14 00:38:27.185411 kernel: acpiphp: Slot [2-3] registered Feb 14 00:38:27.185482 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Feb 14 00:38:27.185548 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Feb 14 00:38:27.185615 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Feb 14 00:38:27.185680 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Feb 14 00:38:27.185744 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.185811 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Feb 14 00:38:27.185875 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 00:38:27.185940 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Feb 14 00:38:27.186005 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 00:38:27.186070 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Feb 14 00:38:27.186142 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Feb 14 00:38:27.186209 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Feb 14 00:38:27.186274 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Feb 14 00:38:27.186338 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Feb 14 00:38:27.186404 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.186471 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Feb 14 00:38:27.186537 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 00:38:27.186607 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Feb 14 00:38:27.186672 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 00:38:27.186733 kernel: pci_bus 0003:00: on NUMA node 0 Feb 14 00:38:27.186797 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.186862 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.186925 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.186991 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.187055 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.187119 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.187186 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Feb 14 00:38:27.187251 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Feb 14 00:38:27.187315 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Feb 14 00:38:27.187380 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.187454 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 14 00:38:27.187521 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.187590 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 14 00:38:27.187655 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.187722 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.187786 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.187850 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.187914 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.187979 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188043 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188108 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188172 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188239 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188302 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188366 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188430 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 
00:38:27.188494 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.188558 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Feb 14 00:38:27.188625 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.188692 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.188756 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Feb 14 00:38:27.188823 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.188889 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Feb 14 00:38:27.188957 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Feb 14 00:38:27.189023 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Feb 14 00:38:27.189091 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Feb 14 00:38:27.189160 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Feb 14 00:38:27.189228 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Feb 14 00:38:27.189295 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Feb 14 00:38:27.189360 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Feb 14 00:38:27.189428 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189493 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189560 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189630 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189700 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189767 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189833 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189899 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189963 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Feb 14 00:38:27.190029 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Feb 14 00:38:27.190094 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.190154 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 00:38:27.190212 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Feb 14 00:38:27.190270 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Feb 14 00:38:27.190345 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Feb 14 00:38:27.190407 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.190477 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Feb 14 00:38:27.190538 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.190817 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Feb 14 00:38:27.190883 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.190894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Feb 14 00:38:27.190963 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.191026 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 
00:38:27.191090 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.191151 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.191211 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.191271 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Feb 14 00:38:27.191282 kernel: PCI host bridge to bus 000c:00 Feb 14 00:38:27.191345 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Feb 14 00:38:27.191401 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Feb 14 00:38:27.191459 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Feb 14 00:38:27.191529 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.191606 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.191671 kernel: pci 000c:00:01.0: enabling Extended Tags Feb 14 00:38:27.191733 kernel: pci 000c:00:01.0: supports D1 D2 Feb 14 00:38:27.191800 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.191873 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.191941 kernel: pci 000c:00:02.0: supports D1 D2 Feb 14 00:38:27.192005 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192076 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.192140 kernel: pci 000c:00:03.0: supports D1 D2 Feb 14 00:38:27.192203 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192273 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.192336 kernel: pci 000c:00:04.0: supports D1 D2 Feb 14 00:38:27.192402 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192412 kernel: acpiphp: Slot [1-4] registered Feb 14 00:38:27.192421 kernel: acpiphp: Slot [2-4] registered Feb 14 00:38:27.192429 kernel: acpiphp: Slot [3-2] registered Feb 14 00:38:27.192437 kernel: acpiphp: Slot [4-2] registered Feb 14 00:38:27.192492 kernel: pci_bus 000c:00: on NUMA node 0 Feb 14 00:38:27.192555 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.192623 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.192690 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.192753 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.192817 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.192880 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.192943 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.193006 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.193069 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.193136 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.193198 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.193261 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.193324 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Feb 14 00:38:27.193388 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.193450 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Feb 14 00:38:27.193514 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.193578 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Feb 14 00:38:27.193646 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.193709 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Feb 14 00:38:27.193772 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 00:38:27.193835 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.193898 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.193961 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194024 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194089 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194152 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194215 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194278 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194341 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194403 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194466 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194529 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194597 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194660 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194723 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194787 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194850 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.194914 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Feb 14 00:38:27.194978 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.195040 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.195105 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Feb 14 00:38:27.195170 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.195233 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.195298 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Feb 14 00:38:27.195361 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.195425 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.195490 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Feb 14 00:38:27.195554 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 
00:38:27.195614 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Feb 14 00:38:27.195672 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Feb 14 00:38:27.195739 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Feb 14 00:38:27.195799 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.195873 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Feb 14 00:38:27.195935 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.196000 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Feb 14 00:38:27.196060 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.196126 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Feb 14 00:38:27.196186 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 00:38:27.196197 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Feb 14 00:38:27.196267 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.196330 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.196392 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.196453 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.196514 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.196575 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Feb 14 00:38:27.196588 kernel: PCI host bridge to bus 0002:00 Feb 14 00:38:27.196655 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Feb 14 00:38:27.196713 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Feb 14 00:38:27.196769 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Feb 14 00:38:27.196840 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.196910 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.196974 kernel: pci 0002:00:01.0: supports D1 D2 Feb 14 00:38:27.197041 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197110 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.197175 kernel: pci 0002:00:03.0: supports D1 D2 Feb 14 00:38:27.197239 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197307 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.197372 kernel: pci 0002:00:05.0: supports D1 D2 Feb 14 00:38:27.197434 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197507 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 00:38:27.197571 kernel: pci 0002:00:07.0: supports D1 D2 Feb 14 00:38:27.197639 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197650 kernel: acpiphp: Slot [1-5] registered Feb 14 00:38:27.197658 kernel: acpiphp: Slot [2-5] registered Feb 14 00:38:27.197666 kernel: acpiphp: Slot [3-3] registered Feb 14 00:38:27.197674 kernel: acpiphp: Slot [4-3] registered Feb 14 00:38:27.197729 kernel: pci_bus 0002:00: on NUMA node 0 Feb 14 00:38:27.197793 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.197858 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.197925 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.197992 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.198057 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.198123 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.198189 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.198252 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.198317 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.198381 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.198444 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.198509 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.198577 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Feb 14 00:38:27.198643 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.198707 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.198770 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.198835 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.198898 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.198962 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.199028 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.199091 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199155 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199223 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199289 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199353 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199415 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199479 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199544 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199630 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199694 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199758 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199821 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199885 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199948 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.200011 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] Feb 14 00:38:27.200074 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.200140 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.200203 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Feb 14 00:38:27.200268 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.200332 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.200397 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.200462 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.200525 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Feb 14 00:38:27.200595 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.200659 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.200723 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Feb 14 00:38:27.200787 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.200851 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.200909 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Feb 14 00:38:27.200969 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Feb 14 00:38:27.201038 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Feb 14 00:38:27.201098 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.201167 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.201226 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.201300 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.201363 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.201428 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.201488 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.201498 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Feb 14 00:38:27.201567 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.201639 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.201704 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.201769 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.201832 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.201893 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Feb 14 00:38:27.201903 kernel: PCI host bridge to bus 0001:00 Feb 14 00:38:27.201968 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Feb 14 00:38:27.202025 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Feb 14 00:38:27.202084 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Feb 14 00:38:27.202157 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.202230 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.202293 kernel: pci 0001:00:01.0: enabling Extended Tags Feb 14 00:38:27.202358 kernel: pci 
0001:00:01.0: supports D1 D2 Feb 14 00:38:27.202422 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202492 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.202558 kernel: pci 0001:00:02.0: supports D1 D2 Feb 14 00:38:27.202697 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202774 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.202838 kernel: pci 0001:00:03.0: supports D1 D2 Feb 14 00:38:27.202901 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202971 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.203039 kernel: pci 0001:00:04.0: supports D1 D2 Feb 14 00:38:27.203104 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.203116 kernel: acpiphp: Slot [1-6] registered Feb 14 00:38:27.203187 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 14 00:38:27.203253 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.203318 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Feb 14 00:38:27.203383 kernel: pci 0001:01:00.0: PME# supported from D3cold Feb 14 00:38:27.203449 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:27.203523 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 14 00:38:27.203596 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 00:38:27.203663 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Feb 14 00:38:27.203728 kernel: pci 0001:01:00.1: PME# supported from D3cold Feb 14 00:38:27.203738 kernel: acpiphp: Slot [2-6] registered Feb 14 00:38:27.203746 kernel: acpiphp: Slot [3-4] registered Feb 14 00:38:27.203755 kernel: acpiphp: Slot [4-4] registered Feb 14 00:38:27.203814 kernel: pci_bus 0001:00: on NUMA node 0 Feb 14 00:38:27.203878 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.203942 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.204005 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.204068 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.204131 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.204197 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.204259 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.204325 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.204388 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.204451 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.204515 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.204578 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Feb 14 00:38:27.204645 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] Feb 14 00:38:27.204710 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.204774 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Feb 14 00:38:27.204837 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.204901 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Feb 14 00:38:27.204964 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.205027 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205090 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205153 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205217 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205281 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205344 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205407 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205470 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205534 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205709 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205776 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205839 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205905 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205967 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.206031 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.206094 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.206160 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 00:38:27.206227 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.206291 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Feb 14 00:38:27.206356 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Feb 14 00:38:27.206421 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.206484 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Feb 14 00:38:27.206547 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.206614 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.206677 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Feb 14 00:38:27.206740 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.206805 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.206868 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Feb 14 00:38:27.206931 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.206995 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.207058 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Feb 14 00:38:27.207121 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.207182 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] Feb 14 00:38:27.207239 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Feb 14 00:38:27.207314 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Feb 14 00:38:27.207375 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.207440 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Feb 14 00:38:27.207500 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.207565 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Feb 14 00:38:27.207630 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.207696 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Feb 14 00:38:27.207755 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.207766 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Feb 14 00:38:27.207833 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.207895 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.207960 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.208020 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.208082 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.208144 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Feb 14 00:38:27.208155 kernel: PCI host bridge to bus 0004:00 Feb 14 00:38:27.208218 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Feb 14 00:38:27.208275 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Feb 14 00:38:27.208333 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Feb 14 00:38:27.208403 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.208475 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.208539 kernel: pci 0004:00:01.0: supports D1 D2 Feb 14 00:38:27.208606 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.208675 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.208739 kernel: pci 0004:00:03.0: supports D1 D2 Feb 14 00:38:27.208805 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.208874 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.208939 kernel: pci 0004:00:05.0: supports D1 D2 Feb 14 00:38:27.209001 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.209076 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Feb 14 00:38:27.209141 kernel: pci 0004:01:00.0: enabling Extended Tags Feb 14 00:38:27.209207 kernel: pci 0004:01:00.0: supports D1 D2 Feb 14 00:38:27.209274 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 00:38:27.209350 kernel: pci_bus 0004:02: extended config space not accessible Feb 14 00:38:27.209426 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Feb 14 00:38:27.209494 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Feb 14 00:38:27.209563 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Feb 14 00:38:27.209634 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Feb 14 
00:38:27.209703 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Feb 14 00:38:27.209773 kernel: pci 0004:02:00.0: supports D1 D2 Feb 14 00:38:27.209840 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 00:38:27.209913 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Feb 14 00:38:27.209978 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Feb 14 00:38:27.210044 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.210105 kernel: pci_bus 0004:00: on NUMA node 0 Feb 14 00:38:27.210171 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Feb 14 00:38:27.210237 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.210301 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.210365 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 00:38:27.210429 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.210493 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.210556 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.210623 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.210689 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.210753 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Feb 14 00:38:27.210816 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.210879 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Feb 14 00:38:27.210943 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.211006 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211069 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211134 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211197 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211260 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211323 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211387 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211451 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211514 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211577 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211643 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211709 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211774 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.211841 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211906 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211975 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Feb 14 
00:38:27.212043 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Feb 14 00:38:27.212112 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Feb 14 00:38:27.212180 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Feb 14 00:38:27.212247 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Feb 14 00:38:27.212313 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.212376 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Feb 14 00:38:27.212441 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.212504 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.212570 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Feb 14 00:38:27.212639 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.212702 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Feb 14 00:38:27.212770 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.212835 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Feb 14 00:38:27.212899 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Feb 14 00:38:27.212964 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.213023 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 00:38:27.213080 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Feb 14 00:38:27.213140 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Feb 14 00:38:27.213208 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.213269 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.213334 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.213403 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Feb 14 00:38:27.213463 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.213533 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Feb 14 00:38:27.213596 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.213607 kernel: iommu: Default domain type: Translated Feb 14 00:38:27.213615 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 14 00:38:27.213623 kernel: efivars: Registered efivars operations Feb 14 00:38:27.213693 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Feb 14 00:38:27.213763 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Feb 14 00:38:27.213831 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Feb 14 00:38:27.213844 kernel: vgaarb: loaded Feb 14 00:38:27.213853 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 14 00:38:27.213861 kernel: VFS: Disk quotas dquot_6.6.0 Feb 14 00:38:27.213869 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 14 00:38:27.213877 kernel: pnp: PnP ACPI init Feb 14 00:38:27.213946 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Feb 14 00:38:27.214007 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Feb 14 00:38:27.214069 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Feb 14 00:38:27.214128 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved Feb 14 00:38:27.214187 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Feb 14 00:38:27.214246 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Feb 14 00:38:27.214307 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Feb 14 00:38:27.214367 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Feb 14 00:38:27.214377 kernel: pnp: PnP ACPI: found 1 devices Feb 14 00:38:27.214387 kernel: NET: Registered PF_INET protocol family Feb 14 00:38:27.214396 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214404 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 14 00:38:27.214412 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 14 00:38:27.214421 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 14 00:38:27.214429 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214437 kernel: TCP: Hash tables configured (established 524288 bind 65536) Feb 14 00:38:27.214445 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214454 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 14 00:38:27.214531 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Feb 14 00:38:27.214542 kernel: kvm [1]: IPA Size Limit: 48 bits Feb 14 00:38:27.214551 kernel: kvm [1]: GICv3: no GICV resource entry Feb 14 00:38:27.214559 kernel: kvm [1]: disabling GICv2 emulation Feb 14 00:38:27.214567 kernel: kvm [1]: GIC system register CPU interface enabled Feb 14 00:38:27.214575 kernel: kvm [1]: vgic interrupt IRQ9 Feb 14 00:38:27.214586 kernel: kvm [1]: VHE mode initialized successfully Feb 14 00:38:27.214596 kernel: Initialise system trusted keyrings Feb 14 00:38:27.214606 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Feb 14 00:38:27.214614 kernel: Key type asymmetric registered Feb 14 00:38:27.214622 kernel: Asymmetric key parser 'x509' registered Feb 14 00:38:27.214630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 14 00:38:27.214638 kernel: io scheduler mq-deadline registered Feb 14 00:38:27.214646 kernel: io scheduler kyber registered Feb 14 00:38:27.214654 kernel: io scheduler bfq registered Feb 14 00:38:27.214662 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 14 00:38:27.214670 kernel: ACPI: button: Power Button [PWRB] Feb 14 00:38:27.214680 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
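Annotation: the repeated "BAR 13: no space for [io size 0x1000]" / "failed to assign" messages above come from root complexes that expose no legacy I/O window, so every bridge's I/O aperture fails while the memory windows assign normally; the log's own "try booting with pci=realloc" hint is only relevant if a device actually needs one of the unassigned resources. As an illustrative aid, a small Python sketch that tallies such failures from a saved copy of this journal (the file name boot.log is hypothetical):

```python
#!/usr/bin/env python3
"""Summarise PCI resource-assignment failures from a saved boot log.

Illustrative sketch only: assumes the journal text was saved to 'boot.log'
(hypothetical path) and that failure lines keep the kernel's usual
"pci DDDD:BB:DD.F: BAR n: no space for [...]" wording seen above.
"""
import re
from collections import Counter

FAIL_RE = re.compile(
    r"pci (?P<dev>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]): "
    r"BAR (?P<bar>\d+): (?:no space for|failed to assign) (?P<res>\[.*?\])"
)

def summarise(path: str = "boot.log") -> None:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = FAIL_RE.search(line)
            if m:
                counts[(m["dev"], m["bar"], m["res"])] += 1
    for (dev, bar, res), n in counts.most_common():
        print(f"{dev}  BAR {bar}  {res}  ({n} messages)")

if __name__ == "__main__":
    summarise()
```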
Feb 14 00:38:27.214688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 00:38:27.214761 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Feb 14 00:38:27.214824 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.214886 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.214946 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215007 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215070 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Feb 14 00:38:27.215137 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Feb 14 00:38:27.215198 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.215259 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.215319 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215380 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215440 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Feb 14 00:38:27.215510 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Feb 14 00:38:27.215570 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.215633 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.215699 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215759 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215820 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Feb 14 00:38:27.215889 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Feb 14 00:38:27.215950 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216010 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216071 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216131 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Feb 14 00:38:27.216191 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Feb 14 00:38:27.216265 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Feb 14 00:38:27.216329 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216389 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216451 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216511 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Feb 14 00:38:27.216572 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Feb 14 00:38:27.216643 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Feb 14 00:38:27.216708 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216769 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216830 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216890 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Feb 14 
00:38:27.216951 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Feb 14 00:38:27.217020 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Feb 14 00:38:27.217080 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.217143 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.217205 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.217265 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Feb 14 00:38:27.217325 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Feb 14 00:38:27.217393 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Feb 14 00:38:27.217454 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.217516 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.217576 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.217641 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Feb 14 00:38:27.217702 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Feb 14 00:38:27.217713 kernel: thunder_xcv, ver 1.0 Feb 14 00:38:27.217723 kernel: thunder_bgx, ver 1.0 Feb 14 00:38:27.217731 kernel: nicpf, ver 1.0 Feb 14 00:38:27.217740 kernel: nicvf, ver 1.0 Feb 14 00:38:27.217806 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 14 00:38:27.217869 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-14T00:38:25 UTC (1739493505) Feb 14 00:38:27.217880 kernel: efifb: probing for efifb Feb 14 00:38:27.217888 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Feb 14 00:38:27.217896 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Feb 14 00:38:27.217904 kernel: efifb: scrolling: redraw Feb 14 00:38:27.217912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 14 00:38:27.217921 kernel: Console: switching to colour frame buffer device 100x37 Feb 14 00:38:27.217931 kernel: fb0: EFI VGA frame buffer device Feb 14 00:38:27.217939 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Feb 14 00:38:27.217947 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 00:38:27.217955 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 14 00:38:27.217963 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 14 00:38:27.217972 kernel: watchdog: Hard watchdog permanently disabled Feb 14 00:38:27.217980 kernel: NET: Registered PF_INET6 protocol family Feb 14 00:38:27.217988 kernel: Segment Routing with IPv6 Feb 14 00:38:27.217996 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 00:38:27.218005 kernel: NET: Registered PF_PACKET protocol family Feb 14 00:38:27.218013 kernel: Key type dns_resolver registered Feb 14 00:38:27.218021 kernel: registered taskstats version 1 Feb 14 00:38:27.218029 kernel: Loading compiled-in X.509 certificates Feb 14 00:38:27.218038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 14 00:38:27.218046 kernel: Key type .fscrypt registered Feb 14 00:38:27.218053 kernel: Key type fscrypt-provisioning registered Feb 14 00:38:27.218061 kernel: ima: No TPM chip found, activating TPM-bypass! 
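Annotation: the efifb probe above takes over the firmware framebuffer at 0x20000000 in 800x600x32 mode, matching the VGA BAR 0 the log earlier reported as "assigned to efifb". If useful, the same details can be read back from sysfs once fb0 is registered; a small sketch, assuming the standard fbdev attribute files are present:

```python
#!/usr/bin/env python3
"""Read back basic framebuffer attributes after efifb has bound fb0.

Sketch only: assumes the standard fbdev sysfs attributes exist under
/sys/class/graphics/fb0 (name, virtual_size, bits_per_pixel).
"""
from pathlib import Path

FB = Path("/sys/class/graphics/fb0")

def read_attr(name: str) -> str:
    # Each attribute is a small text file; strip the trailing newline.
    return (FB / name).read_text().strip()

if __name__ == "__main__":
    if FB.exists():
        print("driver       :", read_attr("name"))          # e.g. "EFI VGA"
        print("virtual size :", read_attr("virtual_size"))  # e.g. "800,600"
        print("depth (bpp)  :", read_attr("bits_per_pixel"))
    else:
        print("no fb0 registered")
```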
Feb 14 00:38:27.218069 kernel: ima: Allocated hash algorithm: sha1 Feb 14 00:38:27.218078 kernel: ima: No architecture policies found Feb 14 00:38:27.218087 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 14 00:38:27.218152 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Feb 14 00:38:27.218219 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218285 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Feb 14 00:38:27.218352 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218419 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Feb 14 00:38:27.218485 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218551 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Feb 14 00:38:27.218622 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218689 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Feb 14 00:38:27.218755 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Feb 14 00:38:27.218820 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Feb 14 00:38:27.218886 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Feb 14 00:38:27.218952 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Feb 14 00:38:27.219017 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Feb 14 00:38:27.219082 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Feb 14 00:38:27.219149 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Feb 14 00:38:27.219216 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Feb 14 00:38:27.219281 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219347 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Feb 14 00:38:27.219411 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219478 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Feb 14 00:38:27.219542 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219611 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Feb 14 00:38:27.219679 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219749 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Feb 14 00:38:27.219813 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Feb 14 00:38:27.219879 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Feb 14 00:38:27.219943 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Feb 14 00:38:27.220009 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Feb 14 00:38:27.220075 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Feb 14 00:38:27.220144 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Feb 14 00:38:27.220213 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220279 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Feb 14 00:38:27.220345 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220410 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Feb 14 00:38:27.220476 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220541 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Feb 14 00:38:27.220610 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220676 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Feb 14 00:38:27.220741 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Feb 14 00:38:27.220809 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Feb 14 00:38:27.220875 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Feb 14 00:38:27.220940 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Feb 14 00:38:27.221005 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Feb 14 00:38:27.221071 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Feb 14 00:38:27.221137 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Feb 14 00:38:27.221204 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Feb 14 00:38:27.221268 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221336 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Feb 14 00:38:27.221401 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221467 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Feb 14 00:38:27.221530 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221600 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Feb 14 00:38:27.221664 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221731 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Feb 14 00:38:27.221796 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Feb 14 00:38:27.221865 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Feb 14 00:38:27.221929 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Feb 14 00:38:27.221996 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Feb 14 00:38:27.222059 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Feb 14 00:38:27.222128 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Feb 14 00:38:27.222139 kernel: clk: Disabling unused clocks Feb 14 00:38:27.222148 kernel: Freeing unused kernel memory: 39360K Feb 14 00:38:27.222156 kernel: Run /init as init process Feb 14 00:38:27.222166 kernel: with arguments: Feb 14 00:38:27.222174 kernel: /init Feb 14 00:38:27.222182 kernel: with environment: Feb 14 00:38:27.222190 kernel: HOME=/ Feb 14 00:38:27.222198 kernel: TERM=linux Feb 14 00:38:27.222206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 14 00:38:27.222216 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 00:38:27.222226 systemd[1]: Detected architecture arm64. Feb 14 00:38:27.222236 systemd[1]: Running in initrd. Feb 14 00:38:27.222244 systemd[1]: No hostname configured, using default hostname. Feb 14 00:38:27.222252 systemd[1]: Hostname set to . Feb 14 00:38:27.222260 systemd[1]: Initializing machine ID from random generator. Feb 14 00:38:27.222269 systemd[1]: Queued start job for default target initrd.target. Feb 14 00:38:27.222278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:38:27.222286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 00:38:27.222295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 14 00:38:27.222306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 00:38:27.222314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 14 00:38:27.222323 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
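Annotation: the long run of "Adding to iommu group N" lines shows the SMMU placing each PCIe root port (and downstream endpoints) into its own translation group. A small sketch that lists the resulting groups from the standard sysfs layout, which can be handy when checking device isolation, e.g. before VFIO passthrough:

```python
#!/usr/bin/env python3
"""List IOMMU groups and their member devices, mirroring the
"Adding to iommu group N" messages above.

Sketch only: walks the standard sysfs layout
/sys/kernel/iommu_groups/<group>/devices/<device>.
"""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

def main() -> None:
    if not GROUPS.is_dir():
        print("no IOMMU groups exposed (IOMMU disabled?)")
        return
    for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name:>3}: {', '.join(devices)}")

if __name__ == "__main__":
    main()
```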
Feb 14 00:38:27.222332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 14 00:38:27.222342 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 14 00:38:27.222350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:38:27.222360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:38:27.222368 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:38:27.222377 systemd[1]: Reached target slices.target - Slice Units. Feb 14 00:38:27.222385 systemd[1]: Reached target swap.target - Swaps. Feb 14 00:38:27.222394 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:38:27.222404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:38:27.222412 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:38:27.222421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 00:38:27.222429 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 00:38:27.222439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:38:27.222448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 00:38:27.222456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:38:27.222464 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:38:27.222473 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 00:38:27.222481 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 00:38:27.222490 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 00:38:27.222498 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 00:38:27.222508 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 00:38:27.222516 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 00:38:27.222546 systemd-journald[898]: Collecting audit messages is disabled. Feb 14 00:38:27.222566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:38:27.222576 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 00:38:27.222588 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 00:38:27.222596 kernel: Bridge firewalling registered Feb 14 00:38:27.222605 systemd-journald[898]: Journal started Feb 14 00:38:27.222624 systemd-journald[898]: Runtime Journal (/run/log/journal/edcbceffab27486fa7c1c2cdd83f7ea6) is 8.0M, max 4.0G, 3.9G free. Feb 14 00:38:27.180804 systemd-modules-load[900]: Inserted module 'overlay' Feb 14 00:38:27.260001 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 00:38:27.202908 systemd-modules-load[900]: Inserted module 'br_netfilter' Feb 14 00:38:27.265644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:38:27.276403 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 00:38:27.287084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 00:38:27.297627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
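Annotation: the "Expecting device dev-disk-by\x2dlabel-..." unit names look odd because systemd escapes device paths into unit names: '/' becomes '-' and '-' (like other characters outside a small allowed set) becomes \x2d. A simplified, hedged re-implementation of that escaping for illustration; the authoritative tool is `systemd-escape --path`, and edge cases are not fully reproduced here:

```python
#!/usr/bin/env python3
"""Approximate systemd's path-to-unit-name escaping.

Sketch only: simplified version of the rules described in systemd.unit(5);
use `systemd-escape --path` for the real thing.
"""

def escape_path(path: str) -> str:
    # Drop leading/trailing slashes, then escape each character:
    #   '/'                   -> '-'
    #   ASCII alnum, '_', '.' -> kept (a leading '.' is still escaped)
    #   everything else       -> \xNN  (so '-' becomes \x2d)
    trimmed = path.strip("/") or "/"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or (ch in "_." and not (ch == "." and i == 0)):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

if __name__ == "__main__":
    # Matches the unit names systemd logs while waiting for these devices.
    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    print(escape_path("/dev/disk/by-label/ROOT") + ".device")
```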
Feb 14 00:38:27.320699 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:27.326860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:38:27.344047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 00:38:27.372732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 00:38:27.389592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:27.406287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:38:27.417633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:38:27.423418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:38:27.454821 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 00:38:27.462363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 00:38:27.474188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 00:38:27.499235 dracut-cmdline[942]: dracut-dracut-053 Feb 14 00:38:27.499235 dracut-cmdline[942]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 14 00:38:27.488094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:38:27.498740 systemd-resolved[947]: Positive Trust Anchors: Feb 14 00:38:27.498750 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:38:27.498781 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:38:27.513617 systemd-resolved[947]: Defaulting to hostname 'linux'. Feb 14 00:38:27.515115 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:38:27.550261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:38:27.651355 kernel: SCSI subsystem initialized Feb 14 00:38:27.662586 kernel: Loading iSCSI transport class v2.0-870. Feb 14 00:38:27.681589 kernel: iscsi: registered transport (tcp) Feb 14 00:38:27.708249 kernel: iscsi: registered transport (qla4xxx) Feb 14 00:38:27.708275 kernel: QLogic iSCSI HBA Driver Feb 14 00:38:27.751988 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 00:38:27.774701 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 00:38:27.819075 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 14 00:38:27.819092 kernel: device-mapper: uevent: version 1.0.3 Feb 14 00:38:27.828626 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 00:38:27.894591 kernel: raid6: neonx8 gen() 15836 MB/s Feb 14 00:38:27.919590 kernel: raid6: neonx4 gen() 15713 MB/s Feb 14 00:38:27.945590 kernel: raid6: neonx2 gen() 13297 MB/s Feb 14 00:38:27.970590 kernel: raid6: neonx1 gen() 10520 MB/s Feb 14 00:38:27.995591 kernel: raid6: int64x8 gen() 6984 MB/s Feb 14 00:38:28.020590 kernel: raid6: int64x4 gen() 7372 MB/s Feb 14 00:38:28.045590 kernel: raid6: int64x2 gen() 6153 MB/s Feb 14 00:38:28.074048 kernel: raid6: int64x1 gen() 5077 MB/s Feb 14 00:38:28.074070 kernel: raid6: using algorithm neonx8 gen() 15836 MB/s Feb 14 00:38:28.107863 kernel: raid6: .... xor() 11958 MB/s, rmw enabled Feb 14 00:38:28.107885 kernel: raid6: using neon recovery algorithm Feb 14 00:38:28.130772 kernel: xor: measuring software checksum speed Feb 14 00:38:28.130794 kernel: 8regs : 19731 MB/sec Feb 14 00:38:28.138884 kernel: 32regs : 19679 MB/sec Feb 14 00:38:28.146634 kernel: arm64_neon : 27070 MB/sec Feb 14 00:38:28.154221 kernel: xor: using function: arm64_neon (27070 MB/sec) Feb 14 00:38:28.214591 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 00:38:28.226609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:38:28.241706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:38:28.257428 systemd-udevd[1132]: Using default interface naming scheme 'v255'. Feb 14 00:38:28.260484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:38:28.285723 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 14 00:38:28.299935 dracut-pre-trigger[1142]: rd.md=0: removing MD RAID activation Feb 14 00:38:28.325912 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:38:28.349751 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 00:38:28.450924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:38:28.480188 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 14 00:38:28.480221 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 14 00:38:28.491697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 14 00:38:28.664611 kernel: ACPI: bus type USB registered Feb 14 00:38:28.664635 kernel: usbcore: registered new interface driver usbfs Feb 14 00:38:28.664646 kernel: usbcore: registered new interface driver hub Feb 14 00:38:28.664656 kernel: usbcore: registered new device driver usb Feb 14 00:38:28.664666 kernel: PTP clock support registered Feb 14 00:38:28.664676 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 Feb 14 00:38:28.886864 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Feb 14 00:38:28.887011 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Feb 14 00:38:28.887098 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Feb 14 00:38:28.887178 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 14 00:38:28.887189 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
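Annotation: the dracut-cmdline lines above echo the full Flatcar kernel command line (root=LABEL=ROOT, mount.usr=/dev/mapper/usr, verity.usrhash=..., flatcar.first_boot=detected, two console= entries, and so on). A minimal sketch of splitting such parameters into flags and key=value options from /proc/cmdline; it deliberately ignores quoting, which none of the parameters shown here use:

```python
#!/usr/bin/env python3
"""Split the kernel command line into flags and key=value options.

Sketch only: plain whitespace splitting, sufficient for the parameters
echoed by dracut above (mount.usr=, verity.usrhash=, flatcar.*, console=).
"""
def parse_cmdline(text: str) -> tuple[list[str], dict[str, list[str]]]:
    flags: list[str] = []
    options: dict[str, list[str]] = {}
    for token in text.split():
        if "=" in token:
            key, value = token.split("=", 1)
            options.setdefault(key, []).append(value)  # e.g. repeated console=
        else:
            flags.append(token)
    return flags, options

if __name__ == "__main__":
    with open("/proc/cmdline") as fh:
        flags, options = parse_cmdline(fh.read())
    print("flags  :", " ".join(flags))
    for key, values in options.items():
        print(f"{key} = {values}")
```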
Feb 14 00:38:28.887199 kernel: igb 0003:03:00.0: Adding to iommu group 32 Feb 14 00:38:29.006282 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33 Feb 14 00:38:29.642797 kernel: nvme 0005:03:00.0: Adding to iommu group 34 Feb 14 00:38:29.642925 kernel: nvme 0005:04:00.0: Adding to iommu group 35 Feb 14 00:38:29.643013 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 Feb 14 00:38:29.643095 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Feb 14 00:38:29.643171 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Feb 14 00:38:29.643246 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Feb 14 00:38:29.643324 kernel: hub 1-0:1.0: USB hub found Feb 14 00:38:29.643425 kernel: hub 1-0:1.0: 4 ports detected Feb 14 00:38:29.643509 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 14 00:38:29.643657 kernel: hub 2-0:1.0: USB hub found Feb 14 00:38:29.643757 kernel: hub 2-0:1.0: 4 ports detected Feb 14 00:38:29.643840 kernel: nvme nvme0: pci function 0005:03:00.0 Feb 14 00:38:29.643925 kernel: igb 0003:03:00.0: added PHC on eth0 Feb 14 00:38:29.644007 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 14 00:38:29.644084 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Feb 14 00:38:29.644162 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:53:3a Feb 14 00:38:29.644242 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:29.644320 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Feb 14 00:38:29.644396 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Feb 14 00:38:29.644472 kernel: nvme nvme0: Shutdown timeout set to 8 seconds Feb 14 00:38:29.644546 kernel: igb 0003:03:00.1: Adding to iommu group 36 Feb 14 00:38:29.644632 kernel: nvme nvme1: pci function 0005:04:00.0 Feb 14 00:38:29.644716 kernel: nvme nvme1: Shutdown timeout set to 8 seconds Feb 14 00:38:29.644791 kernel: nvme nvme0: 32/0/0 default/read/poll queues Feb 14 00:38:29.644864 kernel: nvme nvme1: 32/0/0 default/read/poll queues Feb 14 00:38:29.644934 kernel: igb 0003:03:00.1: added PHC on eth1 Feb 14 00:38:29.645011 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Feb 14 00:38:29.645089 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:53:3b Feb 14 00:38:29.645164 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Feb 14 00:38:29.645238 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Feb 14 00:38:29.645317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 14 00:38:29.645328 kernel: GPT:9289727 != 1875385007 Feb 14 00:38:29.645337 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 14 00:38:29.645347 kernel: GPT:9289727 != 1875385007 Feb 14 00:38:29.645356 kernel: GPT: Use GNU Parted to correct GPT errors. 
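Annotation: the "GPT:9289727 != 1875385007" / "Alternate GPT header not at the end of the disk" warnings above are what the kernel prints when the backup-header LBA recorded in the primary GPT header no longer sits at the disk's last LBA, typical after a smaller disk image is written onto a larger device; the kernel's own hint is to repair it with GNU Parted (tools such as `sgdisk -e` are commonly used for the same job). A sketch that performs the same comparison by hand, assuming 512-byte sectors as the log's numbers imply:

```python
#!/usr/bin/env python3
"""Check whether a GPT's backup header sits at the last LBA of the disk.

Sketch only: reads the primary GPT header at LBA 1 of the given block
device (512-byte sectors assumed) and compares its backup-LBA field with
the real last LBA. Requires read access to the device node.
"""
import os
import struct
import sys

SECTOR = 512

def check(dev: str = "/dev/nvme0n1") -> None:
    fd = os.open(dev, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # device size in bytes
        os.lseek(fd, SECTOR, os.SEEK_SET)     # primary GPT header at LBA 1
        header = os.read(fd, 92)
    finally:
        os.close(fd)
    if header[:8] != b"EFI PART":
        raise SystemExit(f"{dev}: no GPT signature at LBA 1")
    # Per the UEFI spec: current LBA at offset 24, backup LBA at 32 (LE u64).
    current_lba, backup_lba = struct.unpack_from("<QQ", header, 24)
    last_lba = size // SECTOR - 1
    print(f"{dev}: backup header recorded at LBA {backup_lba}, "
          f"disk ends at LBA {last_lba}")
    if backup_lba != last_lba:
        print("mismatch: backup GPT is not at the end of the disk")

if __name__ == "__main__":
    check(sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0n1")
```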
Feb 14 00:38:29.645366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645376 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Feb 14 00:38:29.645451 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1213) Feb 14 00:38:29.645462 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Feb 14 00:38:29.645541 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (1198) Feb 14 00:38:29.645552 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Feb 14 00:38:29.645685 kernel: hub 2-3:1.0: USB hub found Feb 14 00:38:29.645785 kernel: hub 2-3:1.0: 4 ports detected Feb 14 00:38:29.645871 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Feb 14 00:38:29.645948 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645958 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645972 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Feb 14 00:38:29.646093 kernel: hub 1-3:1.0: USB hub found Feb 14 00:38:29.646186 kernel: hub 1-3:1.0: 4 ports detected Feb 14 00:38:29.646271 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 14 00:38:29.646349 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Feb 14 00:38:30.325902 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Feb 14 00:38:30.326066 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:30.326156 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Feb 14 00:38:30.326231 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 14 00:38:28.620791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 14 00:38:30.341993 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Feb 14 00:38:28.672653 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 00:38:30.363342 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Feb 14 00:38:28.677853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:38:28.683043 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 00:38:28.699801 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 14 00:38:28.706475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 00:38:30.395779 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:28.706525 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:30.406932 disk-uuid[1279]: Primary Header is updated. Feb 14 00:38:30.406932 disk-uuid[1279]: Secondary Entries is updated. Feb 14 00:38:30.406932 disk-uuid[1279]: Secondary Header is updated. Feb 14 00:38:28.712409 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:30.433816 disk-uuid[1280]: The operation has completed successfully. Feb 14 00:38:28.718216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:38:28.718251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:28.724313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:38:28.740705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 14 00:38:28.745955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:38:28.756242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:28.772731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:29.025252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:29.236459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Feb 14 00:38:29.299400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Feb 14 00:38:29.309152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Feb 14 00:38:29.345828 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Feb 14 00:38:29.350461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Feb 14 00:38:30.566836 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 14 00:38:29.365683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 14 00:38:30.499270 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 14 00:38:30.583535 sh[1470]: Success Feb 14 00:38:30.499350 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 14 00:38:30.529766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 14 00:38:30.589910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 14 00:38:30.611702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 14 00:38:30.623236 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 14 00:38:30.717389 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 14 00:38:30.717416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:30.717436 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 14 00:38:30.717456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 14 00:38:30.717475 kernel: BTRFS info (device dm-0): using free space tree Feb 14 00:38:30.717494 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 14 00:38:30.723643 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 14 00:38:30.735631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 14 00:38:30.755701 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 14 00:38:30.767585 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:30.767602 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:30.767612 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:30.824584 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:30.824596 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:30.836303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
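Annotation: the "Found device ..." messages above are systemd resolving the EFI-SYSTEM, ROOT, OEM and USR-A aliases that udev maintains under /dev/disk. A small sketch that prints where those symlinks point, for cross-checking against the nvme0n1 partitions listed earlier in the log:

```python
#!/usr/bin/env python3
"""Show where the by-label / by-partlabel / by-partuuid symlinks point,
i.e. the same device aliases systemd reports as "Found device ..." above.

Sketch only: udev maintains these directories; before udev has settled,
some links may not exist yet.
"""
from pathlib import Path

def show(directory: str) -> None:
    base = Path(directory)
    print(f"{directory}:")
    if not base.is_dir():
        print("  (not present)")
        return
    for link in sorted(base.iterdir()):
        print(f"  {link.name:<40} -> {link.resolve()}")

if __name__ == "__main__":
    for d in ("/dev/disk/by-label", "/dev/disk/by-partlabel", "/dev/disk/by-partuuid"):
        show(d)
```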
Feb 14 00:38:30.873594 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:30.872793 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 14 00:38:30.881950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 14 00:38:30.949360 ignition[1542]: Ignition 2.19.0 Feb 14 00:38:30.949367 ignition[1542]: Stage: fetch-offline Feb 14 00:38:30.949400 ignition[1542]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:30.959211 unknown[1542]: fetched base config from "system" Feb 14 00:38:30.949408 ignition[1542]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:30.959218 unknown[1542]: fetched user config from "system" Feb 14 00:38:30.949556 ignition[1542]: parsed url from cmdline: "" Feb 14 00:38:30.962023 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 00:38:30.949559 ignition[1542]: no config URL provided Feb 14 00:38:30.984340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 00:38:30.949563 ignition[1542]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 00:38:31.006692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 00:38:30.949621 ignition[1542]: parsing config with SHA512: 671fe8312dd448425793eebb37a9ca796cdfd76a79f6500e7e44a33df24b4ede30989982d9b964ba80fdd5c0b2ba962d086792ac685af735cc1b45afd41458e0 Feb 14 00:38:31.029557 systemd-networkd[1695]: lo: Link UP Feb 14 00:38:30.959675 ignition[1542]: fetch-offline: fetch-offline passed Feb 14 00:38:31.029560 systemd-networkd[1695]: lo: Gained carrier Feb 14 00:38:30.959679 ignition[1542]: POST message to Packet Timeline Feb 14 00:38:31.033048 systemd-networkd[1695]: Enumeration completed Feb 14 00:38:30.959684 ignition[1542]: POST Status error: resource requires networking Feb 14 00:38:31.033106 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 00:38:30.959743 ignition[1542]: Ignition finished successfully Feb 14 00:38:31.034116 systemd-networkd[1695]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:31.086156 ignition[1697]: Ignition 2.19.0 Feb 14 00:38:31.042095 systemd[1]: Reached target network.target - Network. Feb 14 00:38:31.086162 ignition[1697]: Stage: kargs Feb 14 00:38:31.052076 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 14 00:38:31.086302 ignition[1697]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:31.062670 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 14 00:38:31.086311 ignition[1697]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:31.086451 systemd-networkd[1695]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:31.087208 ignition[1697]: kargs: kargs passed Feb 14 00:38:31.138039 systemd-networkd[1695]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
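Annotation: during fetch-offline, Ignition reads /usr/lib/ignition/user.ign and logs the SHA512 of the config it parsed ("parsing config with SHA512: 671fe8..."). A sketch that reproduces such a digest for a local file, which can help confirm which config was actually consumed; the default path simply mirrors the one in the log:

```python
#!/usr/bin/env python3
"""Compute the SHA512 digest of an Ignition config file, comparable to the
"parsing config with SHA512: ..." value logged above.

Sketch only: any path may be passed on the command line.
"""
import hashlib
import sys

def sha512_of(path: str) -> str:
    digest = hashlib.sha512()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/usr/lib/ignition/user.ign"
    print(f"{path}: {sha512_of(path)}")
```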
Feb 14 00:38:31.087212 ignition[1697]: POST message to Packet Timeline Feb 14 00:38:31.087225 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:31.090790 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58736->[::1]:53: read: connection refused Feb 14 00:38:31.291844 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #2 Feb 14 00:38:31.292270 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44209->[::1]:53: read: connection refused Feb 14 00:38:31.693327 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #3 Feb 14 00:38:31.695199 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42038->[::1]:53: read: connection refused Feb 14 00:38:31.822596 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Feb 14 00:38:31.824931 systemd-networkd[1695]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:32.448594 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Feb 14 00:38:32.451196 systemd-networkd[1695]: eno1: Link UP Feb 14 00:38:32.451406 systemd-networkd[1695]: eno2: Link UP Feb 14 00:38:32.451529 systemd-networkd[1695]: enP1p1s0f0np0: Link UP Feb 14 00:38:32.451678 systemd-networkd[1695]: enP1p1s0f0np0: Gained carrier Feb 14 00:38:32.463805 systemd-networkd[1695]: enP1p1s0f1np1: Link UP Feb 14 00:38:32.495616 systemd-networkd[1695]: enP1p1s0f0np0: DHCPv4 address 147.28.162.217/31, gateway 147.28.162.216 acquired from 147.28.144.140 Feb 14 00:38:32.495921 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #4 Feb 14 00:38:32.496335 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45716->[::1]:53: read: connection refused Feb 14 00:38:32.829171 systemd-networkd[1695]: enP1p1s0f1np1: Gained carrier Feb 14 00:38:33.724722 systemd-networkd[1695]: enP1p1s0f0np0: Gained IPv6LL Feb 14 00:38:34.098008 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #5 Feb 14 00:38:34.098750 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37650->[::1]:53: read: connection refused Feb 14 00:38:34.812888 systemd-networkd[1695]: enP1p1s0f1np1: Gained IPv6LL Feb 14 00:38:37.301681 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #6 Feb 14 00:38:37.547342 ignition[1697]: GET result: OK Feb 14 00:38:37.852421 ignition[1697]: Ignition finished successfully Feb 14 00:38:37.857493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 14 00:38:37.869704 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
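While DNS still fails (no lease yet, resolution against [::1]:53 is refused), the kargs stage above keeps retrying GET https://metadata.packet.net/metadata, and the gap between attempts roughly doubles (attempts land at about 31.09s, 31.29s, 31.69s, 32.50s, 34.10s and 37.30s) until DHCP completes and attempt #6 succeeds. A minimal sketch of that fetch-with-exponential-backoff pattern, not Ignition's actual implementation:

```python
import time
import urllib.request

def fetch_with_backoff(url: str, attempts: int = 6, first_delay: float = 0.2) -> bytes:
    # Retry an HTTP GET with exponential backoff, roughly matching the
    # attempt spacing visible above. Illustrative sketch only.
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # urllib.error.URLError subclasses OSError
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= 2

# body = fetch_with_backoff("https://metadata.packet.net/metadata")
```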
Feb 14 00:38:37.886253 ignition[1721]: Ignition 2.19.0 Feb 14 00:38:37.886260 ignition[1721]: Stage: disks Feb 14 00:38:37.886419 ignition[1721]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:37.886428 ignition[1721]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:37.887405 ignition[1721]: disks: disks passed Feb 14 00:38:37.887410 ignition[1721]: POST message to Packet Timeline Feb 14 00:38:37.887422 ignition[1721]: GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:38.894177 ignition[1721]: GET result: OK Feb 14 00:38:39.203641 ignition[1721]: Ignition finished successfully Feb 14 00:38:39.206631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 14 00:38:39.212542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 14 00:38:39.220275 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 14 00:38:39.228535 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 00:38:39.237538 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 00:38:39.246670 systemd[1]: Reached target basic.target - Basic System. Feb 14 00:38:39.266731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 14 00:38:39.282182 systemd-fsck[1738]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 14 00:38:39.285736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 14 00:38:39.303663 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 14 00:38:39.368507 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 14 00:38:39.373374 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 14 00:38:39.378483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 14 00:38:39.398632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 00:38:39.405590 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1748) Feb 14 00:38:39.406585 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:39.406597 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:39.406610 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:39.407585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:39.407599 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:39.499661 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 14 00:38:39.505950 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 14 00:38:39.517147 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 14 00:38:39.532175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 14 00:38:39.532202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 00:38:39.545277 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
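Entries such as the systemd-fsck and mount lines above all follow a fixed "timestamp source[pid]: message" shape, which makes the transcript easy to post-process. A small parser sketch, assuming the two-digit day and microsecond timestamps used throughout this log:

```python
import re

ENTRY = re.compile(
    r"^(?P<month>\w{3}) (?P<day>\d{2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<source>[\w.-]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$"
)

def parse_entry(line: str):
    # Split one line of this transcript into its fields; returns None for
    # text that does not start with a timestamp.
    m = ENTRY.match(line)
    return m.groupdict() if m else None

print(parse_entry(
    "Feb 14 00:38:39.282182 systemd-fsck[1738]: ROOT: clean, 14/553520 files, 52654/553472 blocks"
))
```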
Feb 14 00:38:39.575059 coreos-metadata[1767]: Feb 14 00:38:39.559 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:39.598382 coreos-metadata[1766]: Feb 14 00:38:39.559 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:39.558755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 14 00:38:39.587775 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 14 00:38:39.625917 initrd-setup-root[1789]: cut: /sysroot/etc/passwd: No such file or directory Feb 14 00:38:39.631809 initrd-setup-root[1796]: cut: /sysroot/etc/group: No such file or directory Feb 14 00:38:39.637668 initrd-setup-root[1803]: cut: /sysroot/etc/shadow: No such file or directory Feb 14 00:38:39.643444 initrd-setup-root[1810]: cut: /sysroot/etc/gshadow: No such file or directory Feb 14 00:38:39.710805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 14 00:38:39.734646 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 14 00:38:39.765562 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:39.741123 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 14 00:38:39.771870 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 14 00:38:39.787503 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 14 00:38:39.799187 ignition[1884]: INFO : Ignition 2.19.0 Feb 14 00:38:39.799187 ignition[1884]: INFO : Stage: mount Feb 14 00:38:39.809926 ignition[1884]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:39.809926 ignition[1884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:39.809926 ignition[1884]: INFO : mount: mount passed Feb 14 00:38:39.809926 ignition[1884]: INFO : POST message to Packet Timeline Feb 14 00:38:39.809926 ignition[1884]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:40.389663 coreos-metadata[1766]: Feb 14 00:38:40.389 INFO Fetch successful Feb 14 00:38:40.396386 coreos-metadata[1767]: Feb 14 00:38:40.396 INFO Fetch successful Feb 14 00:38:40.432565 coreos-metadata[1766]: Feb 14 00:38:40.432 INFO wrote hostname ci-4081.3.1-a-a04cd882ea to /sysroot/etc/hostname Feb 14 00:38:40.435624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 14 00:38:40.446737 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 14 00:38:40.446813 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 14 00:38:40.657218 ignition[1884]: INFO : GET result: OK Feb 14 00:38:40.959020 ignition[1884]: INFO : Ignition finished successfully Feb 14 00:38:40.961780 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 14 00:38:40.977704 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 14 00:38:40.989505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
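The flatcar-metadata-hostname unit above fetches the Packet/Equinix Metal metadata and persists the reported hostname ("wrote hostname ci-4081.3.1-a-a04cd882ea to /sysroot/etc/hostname"). A minimal sketch of that step, assuming the metadata document is JSON with a top-level "hostname" field; the actual schema and error handling are not shown in the log:

```python
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def write_hostname(sysroot: str = "/sysroot") -> str:
    # Fetch the metadata document and write the reported hostname, as the
    # flatcar-metadata-hostname unit does above. Retries are omitted and the
    # "hostname" field name is an assumption.
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        metadata = json.load(resp)
    hostname = metadata["hostname"]
    with open(f"{sysroot}/etc/hostname", "w", encoding="utf-8") as f:
        f.write(hostname + "\n")
    return hostname
```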
Feb 14 00:38:41.024024 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1910) Feb 14 00:38:41.024062 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:41.038361 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:41.051203 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:41.073786 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:41.073808 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:41.081981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 14 00:38:41.112080 ignition[1928]: INFO : Ignition 2.19.0 Feb 14 00:38:41.112080 ignition[1928]: INFO : Stage: files Feb 14 00:38:41.121213 ignition[1928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:41.121213 ignition[1928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:41.121213 ignition[1928]: DEBUG : files: compiled without relabeling support, skipping Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 14 00:38:41.121213 ignition[1928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 14 00:38:41.117687 unknown[1928]: wrote ssh authorized keys file for user: core Feb 14 00:38:41.210691 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 14 00:38:41.339073 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 14 00:38:41.528293 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 14 00:38:41.964159 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: files passed Feb 14 00:38:41.976494 ignition[1928]: INFO : POST message to Packet Timeline Feb 14 00:38:41.976494 ignition[1928]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:42.197068 ignition[1928]: INFO : GET result: OK Feb 14 00:38:42.568524 ignition[1928]: INFO : Ignition finished successfully Feb 14 00:38:42.571855 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 14 00:38:42.590711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 14 00:38:42.597412 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 14 00:38:42.609180 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 14 00:38:42.609261 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
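The files stage above is driven by an Ignition config that is never printed in the log; the operations it reports (downloading the Helm tarball and the kubernetes sysext image, creating the kubernetes.raw link, enabling prepare-helm.service) correspond to storage.files, storage.links and systemd.units entries in that config. An approximate, abbreviated reconstruction of the shape of such a config, rendered from Python; the paths and URLs are taken from the log, but the spec version, unit contents and any omitted options are assumptions:

```python
import json

# Approximate shape only, for illustration; not the config actually used.
ignition_config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "# unit text not shown in the log"}
        ]
    },
}

print(json.dumps(ignition_config, indent=2))
```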
Feb 14 00:38:42.643987 initrd-setup-root-after-ignition[1969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.643987 initrd-setup-root-after-ignition[1969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.627120 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 00:38:42.689525 initrd-setup-root-after-ignition[1973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.639828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 14 00:38:42.662726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 14 00:38:42.703550 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 14 00:38:42.703633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 14 00:38:42.713407 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 14 00:38:42.729138 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 14 00:38:42.740453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 14 00:38:42.754786 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 14 00:38:42.779062 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 00:38:42.804698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 14 00:38:42.827403 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:38:42.833349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:38:42.845029 systemd[1]: Stopped target timers.target - Timer Units. Feb 14 00:38:42.856709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 14 00:38:42.856811 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 00:38:42.868578 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 14 00:38:42.879989 systemd[1]: Stopped target basic.target - Basic System. Feb 14 00:38:42.891543 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 14 00:38:42.903067 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 00:38:42.914417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 14 00:38:42.925828 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 14 00:38:42.937199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 00:38:42.948530 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 14 00:38:42.959934 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 14 00:38:42.976823 systemd[1]: Stopped target swap.target - Swaps. Feb 14 00:38:42.988193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 14 00:38:42.988290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:38:42.999731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:38:43.010847 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:38:43.021870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 14 00:38:43.025645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:38:43.033020 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 14 00:38:43.033120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 14 00:38:43.044246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 14 00:38:43.044333 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 00:38:43.055505 systemd[1]: Stopped target paths.target - Path Units. Feb 14 00:38:43.066473 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 14 00:38:43.066599 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 00:38:43.083337 systemd[1]: Stopped target slices.target - Slice Units. Feb 14 00:38:43.094610 systemd[1]: Stopped target sockets.target - Socket Units. Feb 14 00:38:43.105905 systemd[1]: iscsid.socket: Deactivated successfully. Feb 14 00:38:43.105991 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:38:43.210975 ignition[1997]: INFO : Ignition 2.19.0 Feb 14 00:38:43.210975 ignition[1997]: INFO : Stage: umount Feb 14 00:38:43.210975 ignition[1997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:43.210975 ignition[1997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:43.210975 ignition[1997]: INFO : umount: umount passed Feb 14 00:38:43.210975 ignition[1997]: INFO : POST message to Packet Timeline Feb 14 00:38:43.210975 ignition[1997]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:43.117354 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 14 00:38:43.117458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:38:43.128881 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 14 00:38:43.128969 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 00:38:43.140292 systemd[1]: ignition-files.service: Deactivated successfully. Feb 14 00:38:43.140372 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 14 00:38:43.151738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 14 00:38:43.151817 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 14 00:38:43.174704 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 14 00:38:43.181349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 14 00:38:43.193300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 14 00:38:43.193395 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:38:43.205232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 14 00:38:43.205315 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:38:43.219160 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 14 00:38:43.221121 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 14 00:38:43.221199 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 14 00:38:43.256750 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 14 00:38:43.256919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 14 00:38:44.655429 ignition[1997]: INFO : GET result: OK Feb 14 00:38:45.002265 ignition[1997]: INFO : Ignition finished successfully Feb 14 00:38:45.004527 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 14 00:38:45.004730 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 14 00:38:45.012237 systemd[1]: Stopped target network.target - Network. Feb 14 00:38:45.021150 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 14 00:38:45.021206 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 14 00:38:45.030537 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 14 00:38:45.030585 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 14 00:38:45.039939 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 14 00:38:45.039970 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 14 00:38:45.049504 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 14 00:38:45.049548 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 14 00:38:45.059145 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 00:38:45.059173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 00:38:45.068974 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 14 00:38:45.074608 systemd-networkd[1695]: enP1p1s0f0np0: DHCPv6 lease lost Feb 14 00:38:45.078470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 14 00:38:45.084726 systemd-networkd[1695]: enP1p1s0f1np1: DHCPv6 lease lost Feb 14 00:38:45.088358 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 14 00:38:45.088477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 14 00:38:45.100250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 14 00:38:45.100400 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:38:45.108375 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 14 00:38:45.108532 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 14 00:38:45.118407 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 14 00:38:45.118551 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:38:45.142707 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 14 00:38:45.152240 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 14 00:38:45.152306 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 00:38:45.162281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 00:38:45.162314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:38:45.172287 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 14 00:38:45.172320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 14 00:38:45.182613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:38:45.202932 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 14 00:38:45.203038 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:38:45.210790 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 14 00:38:45.210942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 14 00:38:45.219974 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 00:38:45.220014 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:38:45.230622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 00:38:45.230658 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:38:45.246878 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 00:38:45.246921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 00:38:45.257717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 00:38:45.257765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:45.287756 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 00:38:45.296305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 14 00:38:45.296370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:38:45.307363 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 14 00:38:45.307394 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:38:45.318376 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 00:38:45.318404 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:38:45.335456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:38:45.335490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:45.347245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 00:38:45.347323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 00:38:45.867731 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 14 00:38:45.868687 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 14 00:38:45.878949 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 00:38:45.898691 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 00:38:45.912184 systemd[1]: Switching root. Feb 14 00:38:45.962805 systemd-journald[898]: Journal stopped
14 00:38:27.164360 kernel: Detected PIPT I-cache on CPU22 Feb 14 00:38:27.164368 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Feb 14 00:38:27.164376 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Feb 14 00:38:27.164383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164391 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Feb 14 00:38:27.164398 kernel: Detected PIPT I-cache on CPU23 Feb 14 00:38:27.164406 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Feb 14 00:38:27.164413 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 Feb 14 00:38:27.164423 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164430 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Feb 14 00:38:27.164438 kernel: Detected PIPT I-cache on CPU24 Feb 14 00:38:27.164447 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Feb 14 00:38:27.164455 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Feb 14 00:38:27.164462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164470 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Feb 14 00:38:27.164478 kernel: Detected PIPT I-cache on CPU25 Feb 14 00:38:27.164485 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Feb 14 00:38:27.164493 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Feb 14 00:38:27.164502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164509 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Feb 14 00:38:27.164517 kernel: Detected PIPT I-cache on CPU26 Feb 14 00:38:27.164525 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Feb 14 00:38:27.164532 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Feb 14 00:38:27.164540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164548 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Feb 14 00:38:27.164556 kernel: Detected PIPT I-cache on CPU27 Feb 14 00:38:27.164563 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Feb 14 00:38:27.164572 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Feb 14 00:38:27.164583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164591 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Feb 14 00:38:27.164598 kernel: Detected PIPT I-cache on CPU28 Feb 14 00:38:27.164606 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Feb 14 00:38:27.164614 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Feb 14 00:38:27.164621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164629 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Feb 14 00:38:27.164636 kernel: Detected PIPT I-cache on CPU29 Feb 14 00:38:27.164644 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Feb 14 00:38:27.164653 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Feb 14 00:38:27.164661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164669 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
Feb 14 00:38:27.164676 kernel: Detected PIPT I-cache on CPU30 Feb 14 00:38:27.164684 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Feb 14 00:38:27.164692 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Feb 14 00:38:27.164699 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164707 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Feb 14 00:38:27.164715 kernel: Detected PIPT I-cache on CPU31 Feb 14 00:38:27.164724 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Feb 14 00:38:27.164732 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Feb 14 00:38:27.164740 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164747 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Feb 14 00:38:27.164755 kernel: Detected PIPT I-cache on CPU32 Feb 14 00:38:27.164762 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Feb 14 00:38:27.164770 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 Feb 14 00:38:27.164778 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164785 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Feb 14 00:38:27.164794 kernel: Detected PIPT I-cache on CPU33 Feb 14 00:38:27.164802 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Feb 14 00:38:27.164810 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Feb 14 00:38:27.164818 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164825 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Feb 14 00:38:27.164833 kernel: Detected PIPT I-cache on CPU34 Feb 14 00:38:27.164841 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Feb 14 00:38:27.164848 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Feb 14 00:38:27.164856 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164864 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Feb 14 00:38:27.164873 kernel: Detected PIPT I-cache on CPU35 Feb 14 00:38:27.164880 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Feb 14 00:38:27.164888 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Feb 14 00:38:27.164896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164904 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Feb 14 00:38:27.164911 kernel: Detected PIPT I-cache on CPU36 Feb 14 00:38:27.164919 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Feb 14 00:38:27.164927 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Feb 14 00:38:27.164934 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164943 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Feb 14 00:38:27.164951 kernel: Detected PIPT I-cache on CPU37 Feb 14 00:38:27.164959 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Feb 14 00:38:27.164967 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Feb 14 00:38:27.164976 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.164983 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
Feb 14 00:38:27.164991 kernel: Detected PIPT I-cache on CPU38 Feb 14 00:38:27.164998 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Feb 14 00:38:27.165006 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Feb 14 00:38:27.165014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165022 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Feb 14 00:38:27.165030 kernel: Detected PIPT I-cache on CPU39 Feb 14 00:38:27.165038 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Feb 14 00:38:27.165045 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Feb 14 00:38:27.165053 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165060 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Feb 14 00:38:27.165068 kernel: Detected PIPT I-cache on CPU40 Feb 14 00:38:27.165076 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Feb 14 00:38:27.165085 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Feb 14 00:38:27.165092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165100 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Feb 14 00:38:27.165107 kernel: Detected PIPT I-cache on CPU41 Feb 14 00:38:27.165115 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Feb 14 00:38:27.165123 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 Feb 14 00:38:27.165130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165138 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Feb 14 00:38:27.165146 kernel: Detected PIPT I-cache on CPU42 Feb 14 00:38:27.165155 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Feb 14 00:38:27.165163 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Feb 14 00:38:27.165170 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165178 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Feb 14 00:38:27.165185 kernel: Detected PIPT I-cache on CPU43 Feb 14 00:38:27.165193 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Feb 14 00:38:27.165201 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Feb 14 00:38:27.165208 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165216 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Feb 14 00:38:27.165223 kernel: Detected PIPT I-cache on CPU44 Feb 14 00:38:27.165233 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Feb 14 00:38:27.165240 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Feb 14 00:38:27.165248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165255 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Feb 14 00:38:27.165263 kernel: Detected PIPT I-cache on CPU45 Feb 14 00:38:27.165271 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Feb 14 00:38:27.165278 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Feb 14 00:38:27.165286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165293 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] Feb 14 00:38:27.165303 kernel: Detected PIPT I-cache on CPU46 Feb 14 00:38:27.165310 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Feb 14 00:38:27.165318 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Feb 14 00:38:27.165326 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165333 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Feb 14 00:38:27.165341 kernel: Detected PIPT I-cache on CPU47 Feb 14 00:38:27.165349 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Feb 14 00:38:27.165356 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Feb 14 00:38:27.165364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165372 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Feb 14 00:38:27.165381 kernel: Detected PIPT I-cache on CPU48 Feb 14 00:38:27.165388 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Feb 14 00:38:27.165396 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Feb 14 00:38:27.165404 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165411 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Feb 14 00:38:27.165419 kernel: Detected PIPT I-cache on CPU49 Feb 14 00:38:27.165427 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Feb 14 00:38:27.165435 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Feb 14 00:38:27.165443 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165452 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Feb 14 00:38:27.165460 kernel: Detected PIPT I-cache on CPU50 Feb 14 00:38:27.165468 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Feb 14 00:38:27.165476 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 Feb 14 00:38:27.165483 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165491 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Feb 14 00:38:27.165498 kernel: Detected PIPT I-cache on CPU51 Feb 14 00:38:27.165506 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Feb 14 00:38:27.165514 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Feb 14 00:38:27.165523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165531 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Feb 14 00:38:27.165538 kernel: Detected PIPT I-cache on CPU52 Feb 14 00:38:27.165546 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Feb 14 00:38:27.165554 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Feb 14 00:38:27.165561 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165569 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Feb 14 00:38:27.165577 kernel: Detected PIPT I-cache on CPU53 Feb 14 00:38:27.165586 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Feb 14 00:38:27.165594 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Feb 14 00:38:27.165603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165611 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] Feb 14 00:38:27.165619 kernel: Detected PIPT I-cache on CPU54 Feb 14 00:38:27.165626 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Feb 14 00:38:27.165634 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Feb 14 00:38:27.165642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165649 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Feb 14 00:38:27.165657 kernel: Detected PIPT I-cache on CPU55 Feb 14 00:38:27.165664 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Feb 14 00:38:27.165673 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Feb 14 00:38:27.165681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165689 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Feb 14 00:38:27.165696 kernel: Detected PIPT I-cache on CPU56 Feb 14 00:38:27.165705 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Feb 14 00:38:27.165713 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Feb 14 00:38:27.165721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165728 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Feb 14 00:38:27.165736 kernel: Detected PIPT I-cache on CPU57 Feb 14 00:38:27.165744 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Feb 14 00:38:27.165753 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Feb 14 00:38:27.165760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165768 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Feb 14 00:38:27.165776 kernel: Detected PIPT I-cache on CPU58 Feb 14 00:38:27.165783 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Feb 14 00:38:27.165791 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Feb 14 00:38:27.165799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165806 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Feb 14 00:38:27.165814 kernel: Detected PIPT I-cache on CPU59 Feb 14 00:38:27.165823 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Feb 14 00:38:27.165831 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 Feb 14 00:38:27.165838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165846 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Feb 14 00:38:27.165854 kernel: Detected PIPT I-cache on CPU60 Feb 14 00:38:27.165861 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Feb 14 00:38:27.165869 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Feb 14 00:38:27.165877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165884 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Feb 14 00:38:27.165892 kernel: Detected PIPT I-cache on CPU61 Feb 14 00:38:27.165901 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Feb 14 00:38:27.165908 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Feb 14 00:38:27.165916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165924 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] Feb 14 00:38:27.165932 kernel: Detected PIPT I-cache on CPU62 Feb 14 00:38:27.165939 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Feb 14 00:38:27.165947 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Feb 14 00:38:27.165955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.165962 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Feb 14 00:38:27.165971 kernel: Detected PIPT I-cache on CPU63 Feb 14 00:38:27.165979 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Feb 14 00:38:27.165987 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Feb 14 00:38:27.165995 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166002 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Feb 14 00:38:27.166010 kernel: Detected PIPT I-cache on CPU64 Feb 14 00:38:27.166018 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Feb 14 00:38:27.166025 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Feb 14 00:38:27.166033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166041 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Feb 14 00:38:27.166050 kernel: Detected PIPT I-cache on CPU65 Feb 14 00:38:27.166057 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Feb 14 00:38:27.166065 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Feb 14 00:38:27.166073 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166080 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Feb 14 00:38:27.166088 kernel: Detected PIPT I-cache on CPU66 Feb 14 00:38:27.166096 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Feb 14 00:38:27.166104 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Feb 14 00:38:27.166111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166120 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Feb 14 00:38:27.166128 kernel: Detected PIPT I-cache on CPU67 Feb 14 00:38:27.166136 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Feb 14 00:38:27.166143 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Feb 14 00:38:27.166151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166159 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Feb 14 00:38:27.166166 kernel: Detected PIPT I-cache on CPU68 Feb 14 00:38:27.166174 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Feb 14 00:38:27.166181 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 Feb 14 00:38:27.166191 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166198 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Feb 14 00:38:27.166206 kernel: Detected PIPT I-cache on CPU69 Feb 14 00:38:27.166214 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Feb 14 00:38:27.166221 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Feb 14 00:38:27.166229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166237 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] Feb 14 00:38:27.166244 kernel: Detected PIPT I-cache on CPU70 Feb 14 00:38:27.166252 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Feb 14 00:38:27.166259 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Feb 14 00:38:27.166269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166276 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Feb 14 00:38:27.166284 kernel: Detected PIPT I-cache on CPU71 Feb 14 00:38:27.166292 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Feb 14 00:38:27.166299 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Feb 14 00:38:27.166307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166315 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Feb 14 00:38:27.166322 kernel: Detected PIPT I-cache on CPU72 Feb 14 00:38:27.166330 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Feb 14 00:38:27.166339 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Feb 14 00:38:27.166347 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166355 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Feb 14 00:38:27.166362 kernel: Detected PIPT I-cache on CPU73 Feb 14 00:38:27.166370 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Feb 14 00:38:27.166377 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Feb 14 00:38:27.166385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166393 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Feb 14 00:38:27.166400 kernel: Detected PIPT I-cache on CPU74 Feb 14 00:38:27.166408 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Feb 14 00:38:27.166417 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Feb 14 00:38:27.166425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166432 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Feb 14 00:38:27.166440 kernel: Detected PIPT I-cache on CPU75 Feb 14 00:38:27.166447 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Feb 14 00:38:27.166455 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Feb 14 00:38:27.166463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166470 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Feb 14 00:38:27.166478 kernel: Detected PIPT I-cache on CPU76 Feb 14 00:38:27.166487 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Feb 14 00:38:27.166495 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Feb 14 00:38:27.166503 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166510 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Feb 14 00:38:27.166518 kernel: Detected PIPT I-cache on CPU77 Feb 14 00:38:27.166526 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Feb 14 00:38:27.166534 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Feb 14 00:38:27.166541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166549 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] Feb 14 00:38:27.166557 kernel: Detected PIPT I-cache on CPU78 Feb 14 00:38:27.166565 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Feb 14 00:38:27.166573 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Feb 14 00:38:27.166583 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166591 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Feb 14 00:38:27.166598 kernel: Detected PIPT I-cache on CPU79 Feb 14 00:38:27.166606 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Feb 14 00:38:27.166614 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Feb 14 00:38:27.166622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 14 00:38:27.166629 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Feb 14 00:38:27.166639 kernel: smp: Brought up 1 node, 80 CPUs Feb 14 00:38:27.166647 kernel: SMP: Total of 80 processors activated. Feb 14 00:38:27.166654 kernel: CPU features: detected: 32-bit EL0 Support Feb 14 00:38:27.166662 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 14 00:38:27.166670 kernel: CPU features: detected: Common not Private translations Feb 14 00:38:27.166677 kernel: CPU features: detected: CRC32 instructions Feb 14 00:38:27.166685 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 14 00:38:27.166693 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 14 00:38:27.166701 kernel: CPU features: detected: LSE atomic instructions Feb 14 00:38:27.166710 kernel: CPU features: detected: Privileged Access Never Feb 14 00:38:27.166717 kernel: CPU features: detected: RAS Extension Support Feb 14 00:38:27.166725 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 14 00:38:27.166733 kernel: CPU: All CPU(s) started at EL2 Feb 14 00:38:27.166740 kernel: alternatives: applying system-wide alternatives Feb 14 00:38:27.166748 kernel: devtmpfs: initialized Feb 14 00:38:27.166755 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 14 00:38:27.166763 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.166771 kernel: pinctrl core: initialized pinctrl subsystem Feb 14 00:38:27.166780 kernel: SMBIOS 3.4.0 present. Feb 14 00:38:27.166787 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Feb 14 00:38:27.166795 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 14 00:38:27.166803 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Feb 14 00:38:27.166811 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 14 00:38:27.166818 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 14 00:38:27.166826 kernel: audit: initializing netlink subsys (disabled) Feb 14 00:38:27.166833 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Feb 14 00:38:27.166841 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 14 00:38:27.166850 kernel: cpuidle: using governor menu Feb 14 00:38:27.166858 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 14 00:38:27.166865 kernel: ASID allocator initialised with 32768 entries Feb 14 00:38:27.166873 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 14 00:38:27.166880 kernel: Serial: AMBA PL011 UART driver Feb 14 00:38:27.166888 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 14 00:38:27.166896 kernel: Modules: 0 pages in range for non-PLT usage Feb 14 00:38:27.166903 kernel: Modules: 509040 pages in range for PLT usage Feb 14 00:38:27.166911 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 14 00:38:27.166920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 14 00:38:27.166927 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 14 00:38:27.166935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 14 00:38:27.166943 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 14 00:38:27.166951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 14 00:38:27.166958 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 14 00:38:27.166966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 14 00:38:27.166974 kernel: ACPI: Added _OSI(Module Device) Feb 14 00:38:27.166981 kernel: ACPI: Added _OSI(Processor Device) Feb 14 00:38:27.166990 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 14 00:38:27.166998 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 14 00:38:27.167005 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Feb 14 00:38:27.167013 kernel: ACPI: Interpreter enabled Feb 14 00:38:27.167021 kernel: ACPI: Using GIC for interrupt routing Feb 14 00:38:27.167028 kernel: ACPI: MCFG table detected, 8 entries Feb 14 00:38:27.167036 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167044 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167051 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167060 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167068 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167076 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167083 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167091 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Feb 14 00:38:27.167099 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Feb 14 00:38:27.167107 kernel: printk: console [ttyAMA0] enabled Feb 14 00:38:27.167114 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Feb 14 00:38:27.167122 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Feb 14 00:38:27.167249 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.167323 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.167386 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.167448 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.167511 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.167572 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] Feb 14 00:38:27.167589 kernel: PCI host bridge to bus 000d:00 Feb 14 00:38:27.167661 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Feb 14 00:38:27.167720 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Feb 14 00:38:27.167776 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Feb 14 00:38:27.167857 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.167931 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.167998 kernel: pci 000d:00:01.0: enabling Extended Tags Feb 14 00:38:27.168065 kernel: pci 000d:00:01.0: supports D1 D2 Feb 14 00:38:27.168131 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168203 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.168269 kernel: pci 000d:00:02.0: supports D1 D2 Feb 14 00:38:27.168333 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168405 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.168472 kernel: pci 000d:00:03.0: supports D1 D2 Feb 14 00:38:27.168539 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168637 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.168703 kernel: pci 000d:00:04.0: supports D1 D2 Feb 14 00:38:27.168766 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.168776 kernel: acpiphp: Slot [1] registered Feb 14 00:38:27.168784 kernel: acpiphp: Slot [2] registered Feb 14 00:38:27.168792 kernel: acpiphp: Slot [3] registered Feb 14 00:38:27.168803 kernel: acpiphp: Slot [4] registered Feb 14 00:38:27.168860 kernel: pci_bus 000d:00: on NUMA node 0 Feb 14 00:38:27.168925 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.168991 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.169055 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.169121 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.169186 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.169254 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.169320 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.169383 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.169449 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.169513 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.169577 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.169646 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.169713 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Feb 14 00:38:27.169777 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.169841 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] Feb 14 00:38:27.169905 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.169969 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Feb 14 00:38:27.170034 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.170099 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Feb 14 00:38:27.170163 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.170230 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170295 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170358 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170423 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170487 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170551 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170619 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170688 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170752 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170817 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.170880 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.170945 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171009 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.171073 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171136 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.171202 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.171266 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.171331 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Feb 14 00:38:27.171395 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.171459 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.171524 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Feb 14 00:38:27.171591 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.171658 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.171723 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Feb 14 00:38:27.171787 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.171852 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.171916 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Feb 14 00:38:27.171980 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.172042 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Feb 14 00:38:27.172099 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Feb 14 00:38:27.172170 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Feb 14 00:38:27.172231 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Feb 14 00:38:27.172300 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] Feb 14 00:38:27.172361 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Feb 14 00:38:27.172439 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Feb 14 00:38:27.172501 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Feb 14 00:38:27.172567 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Feb 14 00:38:27.172632 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Feb 14 00:38:27.172643 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Feb 14 00:38:27.172712 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.172779 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.172841 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.172904 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.172965 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.173027 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Feb 14 00:38:27.173037 kernel: PCI host bridge to bus 0000:00 Feb 14 00:38:27.173100 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Feb 14 00:38:27.173161 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 00:38:27.173217 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 14 00:38:27.173289 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.173361 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.173425 kernel: pci 0000:00:01.0: enabling Extended Tags Feb 14 00:38:27.173490 kernel: pci 0000:00:01.0: supports D1 D2 Feb 14 00:38:27.173553 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.173629 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.173693 kernel: pci 0000:00:02.0: supports D1 D2 Feb 14 00:38:27.173758 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.173830 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.173896 kernel: pci 0000:00:03.0: supports D1 D2 Feb 14 00:38:27.173959 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.174030 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.174096 kernel: pci 0000:00:04.0: supports D1 D2 Feb 14 00:38:27.174161 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.174171 kernel: acpiphp: Slot [1-1] registered Feb 14 00:38:27.174179 kernel: acpiphp: Slot [2-1] registered Feb 14 00:38:27.174186 kernel: acpiphp: Slot [3-1] registered Feb 14 00:38:27.174194 kernel: acpiphp: Slot [4-1] registered Feb 14 00:38:27.174249 kernel: pci_bus 0000:00: on NUMA node 0 Feb 14 00:38:27.174313 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.174377 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.174445 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.174509 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.174573 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.174641 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.174705 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.174769 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.174835 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.174900 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.174964 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.175028 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.175092 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Feb 14 00:38:27.175157 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 00:38:27.175221 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Feb 14 00:38:27.175287 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.175351 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Feb 14 00:38:27.175416 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.175480 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Feb 14 00:38:27.175545 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.175614 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175677 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.175743 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175809 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.175874 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.175937 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176003 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176066 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176131 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176194 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176258 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176321 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176387 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176451 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176517 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.176584 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.176648 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.176712 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Feb 14 00:38:27.176776 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
Feb 14 00:38:27.176841 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.176906 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Feb 14 00:38:27.176971 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.177035 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.177101 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Feb 14 00:38:27.177166 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.177232 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.177295 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Feb 14 00:38:27.177360 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.177418 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Feb 14 00:38:27.177478 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Feb 14 00:38:27.177546 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Feb 14 00:38:27.177609 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Feb 14 00:38:27.177676 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Feb 14 00:38:27.177736 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Feb 14 00:38:27.177811 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Feb 14 00:38:27.177874 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Feb 14 00:38:27.177942 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Feb 14 00:38:27.178001 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Feb 14 00:38:27.178011 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Feb 14 00:38:27.178082 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.178145 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.178211 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.178273 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.178335 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.178397 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Feb 14 00:38:27.178407 kernel: PCI host bridge to bus 0005:00 Feb 14 00:38:27.178471 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Feb 14 00:38:27.178530 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 00:38:27.178589 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Feb 14 00:38:27.178663 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.178735 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.178801 kernel: pci 0005:00:01.0: supports D1 D2 Feb 14 00:38:27.178865 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.178939 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.179004 kernel: pci 0005:00:03.0: supports D1 D2 Feb 14 00:38:27.179072 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179141 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.179207 
kernel: pci 0005:00:05.0: supports D1 D2 Feb 14 00:38:27.179271 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179344 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 00:38:27.179410 kernel: pci 0005:00:07.0: supports D1 D2 Feb 14 00:38:27.179474 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.179486 kernel: acpiphp: Slot [1-2] registered Feb 14 00:38:27.179494 kernel: acpiphp: Slot [2-2] registered Feb 14 00:38:27.179564 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Feb 14 00:38:27.179637 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Feb 14 00:38:27.179704 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Feb 14 00:38:27.179777 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Feb 14 00:38:27.179845 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Feb 14 00:38:27.179912 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Feb 14 00:38:27.179972 kernel: pci_bus 0005:00: on NUMA node 0 Feb 14 00:38:27.180036 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.180101 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.180166 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.180256 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.180323 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.180394 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.180460 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.180527 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.180604 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 00:38:27.180669 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.180734 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.180798 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Feb 14 00:38:27.180866 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Feb 14 00:38:27.180932 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.180997 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Feb 14 00:38:27.181060 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.181124 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Feb 14 00:38:27.181187 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.181253 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Feb 14 00:38:27.181316 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.181383 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
Feb 14 00:38:27.181447 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181512 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181577 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181647 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181710 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181775 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181840 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.181907 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.181972 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182036 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182100 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182165 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182230 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182295 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.182360 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.182424 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.182491 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Feb 14 00:38:27.182555 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.182623 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.182686 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Feb 14 00:38:27.182761 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.182829 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Feb 14 00:38:27.182899 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Feb 14 00:38:27.182963 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Feb 14 00:38:27.183027 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Feb 14 00:38:27.183093 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.183160 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Feb 14 00:38:27.183227 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Feb 14 00:38:27.183291 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Feb 14 00:38:27.183358 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Feb 14 00:38:27.183422 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.183482 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Feb 14 00:38:27.183539 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Feb 14 00:38:27.183612 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Feb 14 00:38:27.183673 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Feb 14 00:38:27.183751 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Feb 14 00:38:27.183812 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Feb 14 00:38:27.183878 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Feb 14 00:38:27.183940 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] Feb 14 00:38:27.184006 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Feb 14 00:38:27.184069 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Feb 14 00:38:27.184079 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Feb 14 00:38:27.184149 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.184216 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.184279 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.184342 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.184403 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.184477 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Feb 14 00:38:27.184487 kernel: PCI host bridge to bus 0003:00 Feb 14 00:38:27.184552 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Feb 14 00:38:27.184647 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Feb 14 00:38:27.184708 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Feb 14 00:38:27.184783 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.184857 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.184929 kernel: pci 0003:00:01.0: supports D1 D2 Feb 14 00:38:27.184995 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185066 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.185129 kernel: pci 0003:00:03.0: supports D1 D2 Feb 14 00:38:27.185193 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185263 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.185327 kernel: pci 0003:00:05.0: supports D1 D2 Feb 14 00:38:27.185393 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.185403 kernel: acpiphp: Slot [1-3] registered Feb 14 00:38:27.185411 kernel: acpiphp: Slot [2-3] registered Feb 14 00:38:27.185482 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Feb 14 00:38:27.185548 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Feb 14 00:38:27.185615 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Feb 14 00:38:27.185680 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Feb 14 00:38:27.185744 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.185811 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Feb 14 00:38:27.185875 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 00:38:27.185940 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Feb 14 00:38:27.186005 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 00:38:27.186070 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Feb 14 00:38:27.186142 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Feb 14 00:38:27.186209 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Feb 14 00:38:27.186274 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Feb 14 00:38:27.186338 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Feb 14 00:38:27.186404 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.186471 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Feb 14 00:38:27.186537 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Feb 14 00:38:27.186607 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Feb 14 00:38:27.186672 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Feb 14 00:38:27.186733 kernel: pci_bus 0003:00: on NUMA node 0 Feb 14 00:38:27.186797 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.186862 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.186925 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.186991 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.187055 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.187119 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.187186 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Feb 14 00:38:27.187251 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Feb 14 00:38:27.187315 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Feb 14 00:38:27.187380 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.187454 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 14 00:38:27.187521 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.187590 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 14 00:38:27.187655 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.187722 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.187786 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.187850 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.187914 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.187979 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188043 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188108 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188172 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188239 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188302 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.188366 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.188430 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 
00:38:27.188494 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.188558 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Feb 14 00:38:27.188625 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.188692 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.188756 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Feb 14 00:38:27.188823 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.188889 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Feb 14 00:38:27.188957 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Feb 14 00:38:27.189023 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Feb 14 00:38:27.189091 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Feb 14 00:38:27.189160 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Feb 14 00:38:27.189228 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Feb 14 00:38:27.189295 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Feb 14 00:38:27.189360 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Feb 14 00:38:27.189428 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189493 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189560 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189630 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189700 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189767 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189833 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Feb 14 00:38:27.189899 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Feb 14 00:38:27.189963 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Feb 14 00:38:27.190029 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Feb 14 00:38:27.190094 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.190154 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 00:38:27.190212 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Feb 14 00:38:27.190270 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Feb 14 00:38:27.190345 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Feb 14 00:38:27.190407 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Feb 14 00:38:27.190477 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Feb 14 00:38:27.190538 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Feb 14 00:38:27.190817 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Feb 14 00:38:27.190883 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Feb 14 00:38:27.190894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Feb 14 00:38:27.190963 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.191026 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 
00:38:27.191090 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.191151 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.191211 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.191271 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Feb 14 00:38:27.191282 kernel: PCI host bridge to bus 000c:00 Feb 14 00:38:27.191345 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Feb 14 00:38:27.191401 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Feb 14 00:38:27.191459 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Feb 14 00:38:27.191529 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.191606 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.191671 kernel: pci 000c:00:01.0: enabling Extended Tags Feb 14 00:38:27.191733 kernel: pci 000c:00:01.0: supports D1 D2 Feb 14 00:38:27.191800 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.191873 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.191941 kernel: pci 000c:00:02.0: supports D1 D2 Feb 14 00:38:27.192005 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192076 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.192140 kernel: pci 000c:00:03.0: supports D1 D2 Feb 14 00:38:27.192203 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192273 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.192336 kernel: pci 000c:00:04.0: supports D1 D2 Feb 14 00:38:27.192402 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.192412 kernel: acpiphp: Slot [1-4] registered Feb 14 00:38:27.192421 kernel: acpiphp: Slot [2-4] registered Feb 14 00:38:27.192429 kernel: acpiphp: Slot [3-2] registered Feb 14 00:38:27.192437 kernel: acpiphp: Slot [4-2] registered Feb 14 00:38:27.192492 kernel: pci_bus 000c:00: on NUMA node 0 Feb 14 00:38:27.192555 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.192623 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.192690 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.192753 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.192817 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.192880 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.192943 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.193006 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.193069 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.193136 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.193198 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.193261 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.193324 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Feb 14 00:38:27.193388 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.193450 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Feb 14 00:38:27.193514 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.193578 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Feb 14 00:38:27.193646 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.193709 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Feb 14 00:38:27.193772 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 00:38:27.193835 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.193898 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.193961 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194024 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194089 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194152 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194215 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194278 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194341 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194403 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194466 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194529 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194597 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194660 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194723 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.194787 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.194850 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.194914 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Feb 14 00:38:27.194978 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.195040 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.195105 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Feb 14 00:38:27.195170 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.195233 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.195298 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Feb 14 00:38:27.195361 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.195425 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.195490 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Feb 14 00:38:27.195554 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 
00:38:27.195614 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Feb 14 00:38:27.195672 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Feb 14 00:38:27.195739 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Feb 14 00:38:27.195799 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Feb 14 00:38:27.195873 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Feb 14 00:38:27.195935 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Feb 14 00:38:27.196000 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Feb 14 00:38:27.196060 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Feb 14 00:38:27.196126 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Feb 14 00:38:27.196186 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Feb 14 00:38:27.196197 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Feb 14 00:38:27.196267 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.196330 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.196392 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.196453 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.196514 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.196575 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Feb 14 00:38:27.196588 kernel: PCI host bridge to bus 0002:00 Feb 14 00:38:27.196655 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Feb 14 00:38:27.196713 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Feb 14 00:38:27.196769 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Feb 14 00:38:27.196840 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.196910 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.196974 kernel: pci 0002:00:01.0: supports D1 D2 Feb 14 00:38:27.197041 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197110 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.197175 kernel: pci 0002:00:03.0: supports D1 D2 Feb 14 00:38:27.197239 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197307 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.197372 kernel: pci 0002:00:05.0: supports D1 D2 Feb 14 00:38:27.197434 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197507 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Feb 14 00:38:27.197571 kernel: pci 0002:00:07.0: supports D1 D2 Feb 14 00:38:27.197639 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.197650 kernel: acpiphp: Slot [1-5] registered Feb 14 00:38:27.197658 kernel: acpiphp: Slot [2-5] registered Feb 14 00:38:27.197666 kernel: acpiphp: Slot [3-3] registered Feb 14 00:38:27.197674 kernel: acpiphp: Slot [4-3] registered Feb 14 00:38:27.197729 kernel: pci_bus 0002:00: on NUMA node 0 Feb 14 00:38:27.197793 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.197858 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.197925 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Feb 14 00:38:27.197992 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.198057 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.198123 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.198189 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.198252 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.198317 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.198381 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.198444 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.198509 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.198577 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Feb 14 00:38:27.198643 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.198707 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.198770 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.198835 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.198898 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.198962 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.199028 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.199091 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199155 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199223 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199289 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199353 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199415 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199479 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199544 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199630 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199694 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199758 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199821 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.199885 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.199948 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.200011 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] Feb 14 00:38:27.200074 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.200140 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.200203 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Feb 14 00:38:27.200268 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.200332 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Feb 14 00:38:27.200397 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.200462 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.200525 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Feb 14 00:38:27.200595 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.200659 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.200723 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Feb 14 00:38:27.200787 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.200851 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.200909 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Feb 14 00:38:27.200969 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Feb 14 00:38:27.201038 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Feb 14 00:38:27.201098 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Feb 14 00:38:27.201167 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Feb 14 00:38:27.201226 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Feb 14 00:38:27.201300 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Feb 14 00:38:27.201363 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Feb 14 00:38:27.201428 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Feb 14 00:38:27.201488 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Feb 14 00:38:27.201498 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Feb 14 00:38:27.201567 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.201639 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.201704 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.201769 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.201832 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.201893 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Feb 14 00:38:27.201903 kernel: PCI host bridge to bus 0001:00 Feb 14 00:38:27.201968 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Feb 14 00:38:27.202025 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Feb 14 00:38:27.202084 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Feb 14 00:38:27.202157 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Feb 14 00:38:27.202230 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Feb 14 00:38:27.202293 kernel: pci 0001:00:01.0: enabling Extended Tags Feb 14 00:38:27.202358 kernel: pci 
0001:00:01.0: supports D1 D2 Feb 14 00:38:27.202422 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202492 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 Feb 14 00:38:27.202558 kernel: pci 0001:00:02.0: supports D1 D2 Feb 14 00:38:27.202697 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202774 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Feb 14 00:38:27.202838 kernel: pci 0001:00:03.0: supports D1 D2 Feb 14 00:38:27.202901 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.202971 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 Feb 14 00:38:27.203039 kernel: pci 0001:00:04.0: supports D1 D2 Feb 14 00:38:27.203104 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.203116 kernel: acpiphp: Slot [1-6] registered Feb 14 00:38:27.203187 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 14 00:38:27.203253 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.203318 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Feb 14 00:38:27.203383 kernel: pci 0001:01:00.0: PME# supported from D3cold Feb 14 00:38:27.203449 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:27.203523 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 14 00:38:27.203596 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 00:38:27.203663 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Feb 14 00:38:27.203728 kernel: pci 0001:01:00.1: PME# supported from D3cold Feb 14 00:38:27.203738 kernel: acpiphp: Slot [2-6] registered Feb 14 00:38:27.203746 kernel: acpiphp: Slot [3-4] registered Feb 14 00:38:27.203755 kernel: acpiphp: Slot [4-4] registered Feb 14 00:38:27.203814 kernel: pci_bus 0001:00: on NUMA node 0 Feb 14 00:38:27.203878 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 14 00:38:27.203942 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 14 00:38:27.204005 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.204068 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Feb 14 00:38:27.204131 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.204197 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.204259 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.204325 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.204388 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.204451 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.204515 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.204578 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Feb 14 00:38:27.204645 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] Feb 14 00:38:27.204710 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.204774 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Feb 14 00:38:27.204837 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.204901 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Feb 14 00:38:27.204964 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.205027 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205090 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205153 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205217 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205281 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205344 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205407 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205470 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205534 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205709 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205776 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205839 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.205905 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.205967 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.206031 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.206094 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.206160 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Feb 14 00:38:27.206227 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.206291 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Feb 14 00:38:27.206356 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Feb 14 00:38:27.206421 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Feb 14 00:38:27.206484 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Feb 14 00:38:27.206547 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.206614 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Feb 14 00:38:27.206677 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Feb 14 00:38:27.206740 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.206805 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.206868 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Feb 14 00:38:27.206931 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.206995 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Feb 14 00:38:27.207058 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Feb 14 00:38:27.207121 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.207182 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] Feb 14 00:38:27.207239 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Feb 14 00:38:27.207314 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Feb 14 00:38:27.207375 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Feb 14 00:38:27.207440 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Feb 14 00:38:27.207500 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Feb 14 00:38:27.207565 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Feb 14 00:38:27.207630 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Feb 14 00:38:27.207696 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Feb 14 00:38:27.207755 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Feb 14 00:38:27.207766 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Feb 14 00:38:27.207833 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 14 00:38:27.207895 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Feb 14 00:38:27.207960 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Feb 14 00:38:27.208020 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Feb 14 00:38:27.208082 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Feb 14 00:38:27.208144 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Feb 14 00:38:27.208155 kernel: PCI host bridge to bus 0004:00 Feb 14 00:38:27.208218 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Feb 14 00:38:27.208275 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Feb 14 00:38:27.208333 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Feb 14 00:38:27.208403 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Feb 14 00:38:27.208475 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Feb 14 00:38:27.208539 kernel: pci 0004:00:01.0: supports D1 D2 Feb 14 00:38:27.208606 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.208675 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Feb 14 00:38:27.208739 kernel: pci 0004:00:03.0: supports D1 D2 Feb 14 00:38:27.208805 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.208874 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Feb 14 00:38:27.208939 kernel: pci 0004:00:05.0: supports D1 D2 Feb 14 00:38:27.209001 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Feb 14 00:38:27.209076 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Feb 14 00:38:27.209141 kernel: pci 0004:01:00.0: enabling Extended Tags Feb 14 00:38:27.209207 kernel: pci 0004:01:00.0: supports D1 D2 Feb 14 00:38:27.209274 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 00:38:27.209350 kernel: pci_bus 0004:02: extended config space not accessible Feb 14 00:38:27.209426 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Feb 14 00:38:27.209494 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Feb 14 00:38:27.209563 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Feb 14 00:38:27.209634 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Feb 14 
00:38:27.209703 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Feb 14 00:38:27.209773 kernel: pci 0004:02:00.0: supports D1 D2 Feb 14 00:38:27.209840 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 14 00:38:27.209913 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Feb 14 00:38:27.209978 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Feb 14 00:38:27.210044 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Feb 14 00:38:27.210105 kernel: pci_bus 0004:00: on NUMA node 0 Feb 14 00:38:27.210171 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Feb 14 00:38:27.210237 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 14 00:38:27.210301 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 14 00:38:27.210365 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 14 00:38:27.210429 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 14 00:38:27.210493 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.210556 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 14 00:38:27.210623 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.210689 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.210753 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Feb 14 00:38:27.210816 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.210879 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Feb 14 00:38:27.210943 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.211006 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211069 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211134 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211197 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211260 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211323 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211387 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211451 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211514 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211577 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211643 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211709 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211774 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.211841 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Feb 14 00:38:27.211906 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Feb 14 00:38:27.211975 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Feb 14 
00:38:27.212043 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Feb 14 00:38:27.212112 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Feb 14 00:38:27.212180 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Feb 14 00:38:27.212247 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Feb 14 00:38:27.212313 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.212376 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Feb 14 00:38:27.212441 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.212504 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.212570 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Feb 14 00:38:27.212639 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Feb 14 00:38:27.212702 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Feb 14 00:38:27.212770 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.212835 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Feb 14 00:38:27.212899 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Feb 14 00:38:27.212964 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.213023 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 14 00:38:27.213080 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Feb 14 00:38:27.213140 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Feb 14 00:38:27.213208 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.213269 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Feb 14 00:38:27.213334 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Feb 14 00:38:27.213403 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Feb 14 00:38:27.213463 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Feb 14 00:38:27.213533 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Feb 14 00:38:27.213596 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Feb 14 00:38:27.213607 kernel: iommu: Default domain type: Translated Feb 14 00:38:27.213615 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 14 00:38:27.213623 kernel: efivars: Registered efivars operations Feb 14 00:38:27.213693 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Feb 14 00:38:27.213763 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Feb 14 00:38:27.213831 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Feb 14 00:38:27.213844 kernel: vgaarb: loaded Feb 14 00:38:27.213853 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 14 00:38:27.213861 kernel: VFS: Disk quotas dquot_6.6.0 Feb 14 00:38:27.213869 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 14 00:38:27.213877 kernel: pnp: PnP ACPI init Feb 14 00:38:27.213946 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Feb 14 00:38:27.214007 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Feb 14 00:38:27.214069 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Feb 14 00:38:27.214128 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved Feb 14 00:38:27.214187 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Feb 14 00:38:27.214246 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Feb 14 00:38:27.214307 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Feb 14 00:38:27.214367 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Feb 14 00:38:27.214377 kernel: pnp: PnP ACPI: found 1 devices Feb 14 00:38:27.214387 kernel: NET: Registered PF_INET protocol family Feb 14 00:38:27.214396 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214404 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 14 00:38:27.214412 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 14 00:38:27.214421 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 14 00:38:27.214429 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214437 kernel: TCP: Hash tables configured (established 524288 bind 65536) Feb 14 00:38:27.214445 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214454 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 14 00:38:27.214463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 14 00:38:27.214531 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Feb 14 00:38:27.214542 kernel: kvm [1]: IPA Size Limit: 48 bits Feb 14 00:38:27.214551 kernel: kvm [1]: GICv3: no GICV resource entry Feb 14 00:38:27.214559 kernel: kvm [1]: disabling GICv2 emulation Feb 14 00:38:27.214567 kernel: kvm [1]: GIC system register CPU interface enabled Feb 14 00:38:27.214575 kernel: kvm [1]: vgic interrupt IRQ9 Feb 14 00:38:27.214586 kernel: kvm [1]: VHE mode initialized successfully Feb 14 00:38:27.214596 kernel: Initialise system trusted keyrings Feb 14 00:38:27.214606 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Feb 14 00:38:27.214614 kernel: Key type asymmetric registered Feb 14 00:38:27.214622 kernel: Asymmetric key parser 'x509' registered Feb 14 00:38:27.214630 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 14 00:38:27.214638 kernel: io scheduler mq-deadline registered Feb 14 00:38:27.214646 kernel: io scheduler kyber registered Feb 14 00:38:27.214654 kernel: io scheduler bfq registered Feb 14 00:38:27.214662 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 14 00:38:27.214670 kernel: ACPI: button: Power Button [PWRB] Feb 14 00:38:27.214680 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Feb 14 00:38:27.214688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 14 00:38:27.214761 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Feb 14 00:38:27.214824 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.214886 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.214946 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215007 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215070 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Feb 14 00:38:27.215137 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Feb 14 00:38:27.215198 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.215259 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.215319 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215380 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215440 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Feb 14 00:38:27.215510 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Feb 14 00:38:27.215570 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.215633 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.215699 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.215759 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Feb 14 00:38:27.215820 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Feb 14 00:38:27.215889 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Feb 14 00:38:27.215950 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216010 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216071 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216131 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Feb 14 00:38:27.216191 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Feb 14 00:38:27.216265 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Feb 14 00:38:27.216329 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216389 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216451 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216511 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Feb 14 00:38:27.216572 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Feb 14 00:38:27.216643 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Feb 14 00:38:27.216708 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.216769 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.216830 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.216890 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Feb 14 
00:38:27.216951 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Feb 14 00:38:27.217020 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Feb 14 00:38:27.217080 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.217143 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.217205 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.217265 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Feb 14 00:38:27.217325 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Feb 14 00:38:27.217393 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Feb 14 00:38:27.217454 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Feb 14 00:38:27.217516 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Feb 14 00:38:27.217576 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Feb 14 00:38:27.217641 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Feb 14 00:38:27.217702 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Feb 14 00:38:27.217713 kernel: thunder_xcv, ver 1.0 Feb 14 00:38:27.217723 kernel: thunder_bgx, ver 1.0 Feb 14 00:38:27.217731 kernel: nicpf, ver 1.0 Feb 14 00:38:27.217740 kernel: nicvf, ver 1.0 Feb 14 00:38:27.217806 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 14 00:38:27.217869 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-14T00:38:25 UTC (1739493505) Feb 14 00:38:27.217880 kernel: efifb: probing for efifb Feb 14 00:38:27.217888 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Feb 14 00:38:27.217896 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Feb 14 00:38:27.217904 kernel: efifb: scrolling: redraw Feb 14 00:38:27.217912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 14 00:38:27.217921 kernel: Console: switching to colour frame buffer device 100x37 Feb 14 00:38:27.217931 kernel: fb0: EFI VGA frame buffer device Feb 14 00:38:27.217939 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Feb 14 00:38:27.217947 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 14 00:38:27.217955 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 14 00:38:27.217963 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 14 00:38:27.217972 kernel: watchdog: Hard watchdog permanently disabled Feb 14 00:38:27.217980 kernel: NET: Registered PF_INET6 protocol family Feb 14 00:38:27.217988 kernel: Segment Routing with IPv6 Feb 14 00:38:27.217996 kernel: In-situ OAM (IOAM) with IPv6 Feb 14 00:38:27.218005 kernel: NET: Registered PF_PACKET protocol family Feb 14 00:38:27.218013 kernel: Key type dns_resolver registered Feb 14 00:38:27.218021 kernel: registered taskstats version 1 Feb 14 00:38:27.218029 kernel: Loading compiled-in X.509 certificates Feb 14 00:38:27.218038 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 14 00:38:27.218046 kernel: Key type .fscrypt registered Feb 14 00:38:27.218053 kernel: Key type fscrypt-provisioning registered Feb 14 00:38:27.218061 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 14 00:38:27.218069 kernel: ima: Allocated hash algorithm: sha1 Feb 14 00:38:27.218078 kernel: ima: No architecture policies found Feb 14 00:38:27.218087 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 14 00:38:27.218152 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Feb 14 00:38:27.218219 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218285 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Feb 14 00:38:27.218352 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218419 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Feb 14 00:38:27.218485 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218551 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Feb 14 00:38:27.218622 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Feb 14 00:38:27.218689 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Feb 14 00:38:27.218755 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Feb 14 00:38:27.218820 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Feb 14 00:38:27.218886 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Feb 14 00:38:27.218952 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Feb 14 00:38:27.219017 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Feb 14 00:38:27.219082 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Feb 14 00:38:27.219149 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Feb 14 00:38:27.219216 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Feb 14 00:38:27.219281 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219347 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Feb 14 00:38:27.219411 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219478 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Feb 14 00:38:27.219542 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219611 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Feb 14 00:38:27.219679 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Feb 14 00:38:27.219749 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Feb 14 00:38:27.219813 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Feb 14 00:38:27.219879 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Feb 14 00:38:27.219943 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Feb 14 00:38:27.220009 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Feb 14 00:38:27.220075 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Feb 14 00:38:27.220144 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Feb 14 00:38:27.220213 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220279 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Feb 14 00:38:27.220345 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220410 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Feb 14 00:38:27.220476 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220541 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Feb 14 00:38:27.220610 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Feb 14 00:38:27.220676 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Feb 14 00:38:27.220741 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Feb 14 00:38:27.220809 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Feb 14 00:38:27.220875 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Feb 14 00:38:27.220940 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Feb 14 00:38:27.221005 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Feb 14 00:38:27.221071 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Feb 14 00:38:27.221137 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Feb 14 00:38:27.221204 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Feb 14 00:38:27.221268 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221336 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Feb 14 00:38:27.221401 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221467 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Feb 14 00:38:27.221530 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221600 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Feb 14 00:38:27.221664 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Feb 14 00:38:27.221731 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Feb 14 00:38:27.221796 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Feb 14 00:38:27.221865 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Feb 14 00:38:27.221929 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Feb 14 00:38:27.221996 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Feb 14 00:38:27.222059 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Feb 14 00:38:27.222128 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Feb 14 00:38:27.222139 kernel: clk: Disabling unused clocks Feb 14 00:38:27.222148 kernel: Freeing unused kernel memory: 39360K Feb 14 00:38:27.222156 kernel: Run /init as init process Feb 14 00:38:27.222166 kernel: with arguments: Feb 14 00:38:27.222174 kernel: /init Feb 14 00:38:27.222182 kernel: with environment: Feb 14 00:38:27.222190 kernel: HOME=/ Feb 14 00:38:27.222198 kernel: TERM=linux Feb 14 00:38:27.222206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 14 00:38:27.222216 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 00:38:27.222226 systemd[1]: Detected architecture arm64. Feb 14 00:38:27.222236 systemd[1]: Running in initrd. Feb 14 00:38:27.222244 systemd[1]: No hostname configured, using default hostname. Feb 14 00:38:27.222252 systemd[1]: Hostname set to . Feb 14 00:38:27.222260 systemd[1]: Initializing machine ID from random generator. Feb 14 00:38:27.222269 systemd[1]: Queued start job for default target initrd.target. Feb 14 00:38:27.222278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:38:27.222286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 00:38:27.222295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 14 00:38:27.222306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 00:38:27.222314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 14 00:38:27.222323 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
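
[editor's note] The long run of "pcieport …: Adding to iommu group N" messages above records each PCIe root port being attached to its own SMMUv3-backed IOMMU group before the initrd starts. As a minimal, illustrative sketch only (not part of the boot flow shown in this log), the same grouping can be inspected later from sysfs, which exposes one directory per group under /sys/kernel/iommu_groups:

    #!/usr/bin/env python3
    # Illustrative only: list IOMMU groups and the devices assigned to them,
    # mirroring the "Adding to iommu group N" kernel messages above.
    from pathlib import Path

    def iommu_groups(root="/sys/kernel/iommu_groups"):
        groups = {}
        for group_dir in sorted(Path(root).iterdir(), key=lambda p: int(p.name)):
            devices = sorted(d.name for d in (group_dir / "devices").iterdir())
            groups[int(group_dir.name)] = devices
        return groups

    if __name__ == "__main__":
        for group, devices in iommu_groups().items():
            print(f"group {group}: {', '.join(devices)}")

On this machine one would expect each root port listed above (000d:00:01.0, 0000:00:01.0, and so on) to show up in the group number the kernel assigned it here.
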
Feb 14 00:38:27.222332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 14 00:38:27.222342 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 14 00:38:27.222350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:38:27.222360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:38:27.222368 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:38:27.222377 systemd[1]: Reached target slices.target - Slice Units. Feb 14 00:38:27.222385 systemd[1]: Reached target swap.target - Swaps. Feb 14 00:38:27.222394 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:38:27.222404 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:38:27.222412 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:38:27.222421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 14 00:38:27.222429 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 14 00:38:27.222439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:38:27.222448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 00:38:27.222456 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:38:27.222464 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:38:27.222473 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 14 00:38:27.222481 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 00:38:27.222490 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 14 00:38:27.222498 systemd[1]: Starting systemd-fsck-usr.service... Feb 14 00:38:27.222508 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 00:38:27.222516 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 00:38:27.222546 systemd-journald[898]: Collecting audit messages is disabled. Feb 14 00:38:27.222566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:38:27.222576 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 14 00:38:27.222588 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 14 00:38:27.222596 kernel: Bridge firewalling registered Feb 14 00:38:27.222605 systemd-journald[898]: Journal started Feb 14 00:38:27.222624 systemd-journald[898]: Runtime Journal (/run/log/journal/edcbceffab27486fa7c1c2cdd83f7ea6) is 8.0M, max 4.0G, 3.9G free. Feb 14 00:38:27.180804 systemd-modules-load[900]: Inserted module 'overlay' Feb 14 00:38:27.260001 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 00:38:27.202908 systemd-modules-load[900]: Inserted module 'br_netfilter' Feb 14 00:38:27.265644 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:38:27.276403 systemd[1]: Finished systemd-fsck-usr.service. Feb 14 00:38:27.287084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 00:38:27.297627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 14 00:38:27.320699 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:27.326860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:38:27.344047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 00:38:27.372732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 00:38:27.389592 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:27.406287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:38:27.417633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:38:27.423418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:38:27.454821 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 14 00:38:27.462363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 00:38:27.474188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 00:38:27.499235 dracut-cmdline[942]: dracut-dracut-053 Feb 14 00:38:27.499235 dracut-cmdline[942]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 14 00:38:27.488094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:38:27.498740 systemd-resolved[947]: Positive Trust Anchors: Feb 14 00:38:27.498750 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:38:27.498781 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:38:27.513617 systemd-resolved[947]: Defaulting to hostname 'linux'. Feb 14 00:38:27.515115 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:38:27.550261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:38:27.651355 kernel: SCSI subsystem initialized Feb 14 00:38:27.662586 kernel: Loading iSCSI transport class v2.0-870. Feb 14 00:38:27.681589 kernel: iscsi: registered transport (tcp) Feb 14 00:38:27.708249 kernel: iscsi: registered transport (qla4xxx) Feb 14 00:38:27.708275 kernel: QLogic iSCSI HBA Driver Feb 14 00:38:27.751988 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 14 00:38:27.774701 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 14 00:38:27.819075 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 14 00:38:27.819092 kernel: device-mapper: uevent: version 1.0.3 Feb 14 00:38:27.828626 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 14 00:38:27.894591 kernel: raid6: neonx8 gen() 15836 MB/s Feb 14 00:38:27.919590 kernel: raid6: neonx4 gen() 15713 MB/s Feb 14 00:38:27.945590 kernel: raid6: neonx2 gen() 13297 MB/s Feb 14 00:38:27.970590 kernel: raid6: neonx1 gen() 10520 MB/s Feb 14 00:38:27.995591 kernel: raid6: int64x8 gen() 6984 MB/s Feb 14 00:38:28.020590 kernel: raid6: int64x4 gen() 7372 MB/s Feb 14 00:38:28.045590 kernel: raid6: int64x2 gen() 6153 MB/s Feb 14 00:38:28.074048 kernel: raid6: int64x1 gen() 5077 MB/s Feb 14 00:38:28.074070 kernel: raid6: using algorithm neonx8 gen() 15836 MB/s Feb 14 00:38:28.107863 kernel: raid6: .... xor() 11958 MB/s, rmw enabled Feb 14 00:38:28.107885 kernel: raid6: using neon recovery algorithm Feb 14 00:38:28.130772 kernel: xor: measuring software checksum speed Feb 14 00:38:28.130794 kernel: 8regs : 19731 MB/sec Feb 14 00:38:28.138884 kernel: 32regs : 19679 MB/sec Feb 14 00:38:28.146634 kernel: arm64_neon : 27070 MB/sec Feb 14 00:38:28.154221 kernel: xor: using function: arm64_neon (27070 MB/sec) Feb 14 00:38:28.214591 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 14 00:38:28.226609 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:38:28.241706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:38:28.257428 systemd-udevd[1132]: Using default interface naming scheme 'v255'. Feb 14 00:38:28.260484 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:38:28.285723 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 14 00:38:28.299935 dracut-pre-trigger[1142]: rd.md=0: removing MD RAID activation Feb 14 00:38:28.325912 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:38:28.349751 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 00:38:28.450924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:38:28.480188 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 14 00:38:28.480221 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 14 00:38:28.491697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 14 00:38:28.664611 kernel: ACPI: bus type USB registered Feb 14 00:38:28.664635 kernel: usbcore: registered new interface driver usbfs Feb 14 00:38:28.664646 kernel: usbcore: registered new interface driver hub Feb 14 00:38:28.664656 kernel: usbcore: registered new device driver usb Feb 14 00:38:28.664666 kernel: PTP clock support registered Feb 14 00:38:28.664676 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 Feb 14 00:38:28.886864 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Feb 14 00:38:28.887011 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Feb 14 00:38:28.887098 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Feb 14 00:38:28.887178 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 14 00:38:28.887189 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Feb 14 00:38:28.887199 kernel: igb 0003:03:00.0: Adding to iommu group 32 Feb 14 00:38:29.006282 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 33 Feb 14 00:38:29.642797 kernel: nvme 0005:03:00.0: Adding to iommu group 34 Feb 14 00:38:29.642925 kernel: nvme 0005:04:00.0: Adding to iommu group 35 Feb 14 00:38:29.643013 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 Feb 14 00:38:29.643095 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Feb 14 00:38:29.643171 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Feb 14 00:38:29.643246 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Feb 14 00:38:29.643324 kernel: hub 1-0:1.0: USB hub found Feb 14 00:38:29.643425 kernel: hub 1-0:1.0: 4 ports detected Feb 14 00:38:29.643509 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 14 00:38:29.643657 kernel: hub 2-0:1.0: USB hub found Feb 14 00:38:29.643757 kernel: hub 2-0:1.0: 4 ports detected Feb 14 00:38:29.643840 kernel: nvme nvme0: pci function 0005:03:00.0 Feb 14 00:38:29.643925 kernel: igb 0003:03:00.0: added PHC on eth0 Feb 14 00:38:29.644007 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 14 00:38:29.644084 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Feb 14 00:38:29.644162 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:53:3a Feb 14 00:38:29.644242 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:29.644320 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Feb 14 00:38:29.644396 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Feb 14 00:38:29.644472 kernel: nvme nvme0: Shutdown timeout set to 8 seconds Feb 14 00:38:29.644546 kernel: igb 0003:03:00.1: Adding to iommu group 36 Feb 14 00:38:29.644632 kernel: nvme nvme1: pci function 0005:04:00.0 Feb 14 00:38:29.644716 kernel: nvme nvme1: Shutdown timeout set to 8 seconds Feb 14 00:38:29.644791 kernel: nvme nvme0: 32/0/0 default/read/poll queues Feb 14 00:38:29.644864 kernel: nvme nvme1: 32/0/0 default/read/poll queues Feb 14 00:38:29.644934 kernel: igb 0003:03:00.1: added PHC on eth1 Feb 14 00:38:29.645011 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Feb 14 00:38:29.645089 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:53:3b Feb 14 00:38:29.645164 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Feb 14 00:38:29.645238 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Feb 14 00:38:29.645317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 14 00:38:29.645328 kernel: GPT:9289727 != 1875385007 Feb 14 00:38:29.645337 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 14 00:38:29.645347 kernel: GPT:9289727 != 1875385007 Feb 14 00:38:29.645356 kernel: GPT: Use GNU Parted to correct GPT errors. 
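
[editor's note] The "GPT:Primary header thinks Alt. header is not at the end of the disk … 9289727 != 1875385007" warnings above are the usual symptom of a small disk image written to a larger NVMe device: the primary header still points at the backup header's location in the original image (LBA 9289727) instead of the last LBA of the physical disk (1875385007). A small arithmetic sketch of where the backup header is expected to live; the 512-byte logical sector size is an assumption for illustration, consistent with a ~960 GB device:

    #!/usr/bin/env python3
    # Illustrative arithmetic only: GPT places the backup (alternate) header
    # at the last LBA of the device.
    SECTOR_SIZE = 512  # assumed logical sector size

    def expected_alt_header_lba(disk_size_bytes, sector_size=SECTOR_SIZE):
        return disk_size_bytes // sector_size - 1

    # Values taken from the kernel messages above.
    image_alt_lba = 9289727                                   # what the primary header claims
    disk_alt_lba = expected_alt_header_lba(1875385008 * SECTOR_SIZE)  # actual end of disk
    print(image_alt_lba, disk_alt_lba, image_alt_lba == disk_alt_lba)

Tools from the gdisk/parted family can relocate the backup structures to the end of the disk (for example sgdisk -e on the device, if available); on Flatcar the first-boot partition resize normally takes care of this, which is why the message is informational here rather than fatal.
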
Feb 14 00:38:29.645366 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645376 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Feb 14 00:38:29.645451 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1213) Feb 14 00:38:29.645462 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Feb 14 00:38:29.645541 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (1198) Feb 14 00:38:29.645552 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Feb 14 00:38:29.645685 kernel: hub 2-3:1.0: USB hub found Feb 14 00:38:29.645785 kernel: hub 2-3:1.0: 4 ports detected Feb 14 00:38:29.645871 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Feb 14 00:38:29.645948 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645958 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:29.645972 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Feb 14 00:38:29.646093 kernel: hub 1-3:1.0: USB hub found Feb 14 00:38:29.646186 kernel: hub 1-3:1.0: 4 ports detected Feb 14 00:38:29.646271 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 14 00:38:29.646349 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Feb 14 00:38:30.325902 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Feb 14 00:38:30.326066 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 14 00:38:30.326156 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Feb 14 00:38:30.326231 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Feb 14 00:38:28.620791 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 14 00:38:30.341993 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Feb 14 00:38:28.672653 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 00:38:30.363342 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Feb 14 00:38:28.677853 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:38:28.683043 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 00:38:28.699801 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 14 00:38:28.706475 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 00:38:30.395779 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 14 00:38:28.706525 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:30.406932 disk-uuid[1279]: Primary Header is updated. Feb 14 00:38:30.406932 disk-uuid[1279]: Secondary Entries is updated. Feb 14 00:38:30.406932 disk-uuid[1279]: Secondary Header is updated. Feb 14 00:38:28.712409 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:30.433816 disk-uuid[1280]: The operation has completed successfully. Feb 14 00:38:28.718216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:38:28.718251 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:28.724313 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:38:28.740705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 14 00:38:28.745955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:38:28.756242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:28.772731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 14 00:38:29.025252 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:29.236459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Feb 14 00:38:29.299400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Feb 14 00:38:29.309152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Feb 14 00:38:29.345828 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Feb 14 00:38:29.350461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Feb 14 00:38:30.566836 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 14 00:38:29.365683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 14 00:38:30.499270 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 14 00:38:30.583535 sh[1470]: Success Feb 14 00:38:30.499350 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 14 00:38:30.529766 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 14 00:38:30.589910 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 14 00:38:30.611702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 14 00:38:30.623236 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 14 00:38:30.717389 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 14 00:38:30.717416 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:30.717436 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 14 00:38:30.717456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 14 00:38:30.717475 kernel: BTRFS info (device dm-0): using free space tree Feb 14 00:38:30.717494 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 14 00:38:30.723643 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 14 00:38:30.735631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 14 00:38:30.755701 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 14 00:38:30.767585 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:30.767602 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:30.767612 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:30.824584 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:30.824596 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:30.836303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
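
[editor's note] parse-ip-for-networkd.service above turns settings from the kernel command line into systemd-networkd units. The command line reported by dracut earlier in the log (root=LABEL=ROOT, console=tty0, console=ttyS1,115200n8, flatcar.oem.id=packet, verity.usrhash=…, and so on) is a flat list of flags and key=value tokens, where some keys repeat. A minimal sketch of splitting such a line into a dictionary; this is illustrative only and not the actual Flatcar script:

    #!/usr/bin/env python3
    # Illustrative only: split a kernel command line into flags and key=value
    # pairs, keeping repeated keys (e.g. two console= entries) as lists.
    def parse_cmdline(cmdline: str):
        params, flags = {}, []
        for token in cmdline.split():
            if "=" in token:
                key, _, value = token.partition("=")
                params.setdefault(key, []).append(value)
            else:
                flags.append(token)
        return params, flags

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            params, flags = parse_cmdline(f.read())
        print(params.get("root"), params.get("console"), flags)
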
Feb 14 00:38:30.873594 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:30.872793 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 14 00:38:30.881950 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 14 00:38:30.949360 ignition[1542]: Ignition 2.19.0 Feb 14 00:38:30.949367 ignition[1542]: Stage: fetch-offline Feb 14 00:38:30.949400 ignition[1542]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:30.959211 unknown[1542]: fetched base config from "system" Feb 14 00:38:30.949408 ignition[1542]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:30.959218 unknown[1542]: fetched user config from "system" Feb 14 00:38:30.949556 ignition[1542]: parsed url from cmdline: "" Feb 14 00:38:30.962023 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 00:38:30.949559 ignition[1542]: no config URL provided Feb 14 00:38:30.984340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 00:38:30.949563 ignition[1542]: reading system config file "/usr/lib/ignition/user.ign" Feb 14 00:38:31.006692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 00:38:30.949621 ignition[1542]: parsing config with SHA512: 671fe8312dd448425793eebb37a9ca796cdfd76a79f6500e7e44a33df24b4ede30989982d9b964ba80fdd5c0b2ba962d086792ac685af735cc1b45afd41458e0 Feb 14 00:38:31.029557 systemd-networkd[1695]: lo: Link UP Feb 14 00:38:30.959675 ignition[1542]: fetch-offline: fetch-offline passed Feb 14 00:38:31.029560 systemd-networkd[1695]: lo: Gained carrier Feb 14 00:38:30.959679 ignition[1542]: POST message to Packet Timeline Feb 14 00:38:31.033048 systemd-networkd[1695]: Enumeration completed Feb 14 00:38:30.959684 ignition[1542]: POST Status error: resource requires networking Feb 14 00:38:31.033106 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 00:38:30.959743 ignition[1542]: Ignition finished successfully Feb 14 00:38:31.034116 systemd-networkd[1695]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:31.086156 ignition[1697]: Ignition 2.19.0 Feb 14 00:38:31.042095 systemd[1]: Reached target network.target - Network. Feb 14 00:38:31.086162 ignition[1697]: Stage: kargs Feb 14 00:38:31.052076 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 14 00:38:31.086302 ignition[1697]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:31.062670 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 14 00:38:31.086311 ignition[1697]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:31.086451 systemd-networkd[1695]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:31.087208 ignition[1697]: kargs: kargs passed Feb 14 00:38:31.138039 systemd-networkd[1695]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 14 00:38:31.087212 ignition[1697]: POST message to Packet Timeline Feb 14 00:38:31.087225 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:31.090790 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58736->[::1]:53: read: connection refused Feb 14 00:38:31.291844 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #2 Feb 14 00:38:31.292270 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44209->[::1]:53: read: connection refused Feb 14 00:38:31.693327 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #3 Feb 14 00:38:31.695199 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42038->[::1]:53: read: connection refused Feb 14 00:38:31.822596 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Feb 14 00:38:31.824931 systemd-networkd[1695]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 14 00:38:32.448594 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Feb 14 00:38:32.451196 systemd-networkd[1695]: eno1: Link UP Feb 14 00:38:32.451406 systemd-networkd[1695]: eno2: Link UP Feb 14 00:38:32.451529 systemd-networkd[1695]: enP1p1s0f0np0: Link UP Feb 14 00:38:32.451678 systemd-networkd[1695]: enP1p1s0f0np0: Gained carrier Feb 14 00:38:32.463805 systemd-networkd[1695]: enP1p1s0f1np1: Link UP Feb 14 00:38:32.495616 systemd-networkd[1695]: enP1p1s0f0np0: DHCPv4 address 147.28.162.217/31, gateway 147.28.162.216 acquired from 147.28.144.140 Feb 14 00:38:32.495921 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #4 Feb 14 00:38:32.496335 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45716->[::1]:53: read: connection refused Feb 14 00:38:32.829171 systemd-networkd[1695]: enP1p1s0f1np1: Gained carrier Feb 14 00:38:33.724722 systemd-networkd[1695]: enP1p1s0f0np0: Gained IPv6LL Feb 14 00:38:34.098008 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #5 Feb 14 00:38:34.098750 ignition[1697]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37650->[::1]:53: read: connection refused Feb 14 00:38:34.812888 systemd-networkd[1695]: enP1p1s0f1np1: Gained IPv6LL Feb 14 00:38:37.301681 ignition[1697]: GET https://metadata.packet.net/metadata: attempt #6 Feb 14 00:38:37.547342 ignition[1697]: GET result: OK Feb 14 00:38:37.852421 ignition[1697]: Ignition finished successfully Feb 14 00:38:37.857493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 14 00:38:37.869704 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
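
[editor's note] The kargs stage above retries GET https://metadata.packet.net/metadata while the initrd network is still coming up: the first attempts fail because DNS on [::1]:53 is not reachable yet, and attempt #6 succeeds once the DHCP lease and carrier are in place. A rough sketch of that fetch-with-backoff pattern; it is illustrative only, not Ignition's Go implementation, and the timeout and backoff values are assumptions:

    #!/usr/bin/env python3
    # Illustrative only: retry an HTTP GET with exponential backoff, as the
    # Ignition attempts in the log above do while the network comes up.
    import time
    import urllib.error
    import urllib.request

    def fetch_with_backoff(url, attempts=6, first_delay=1.0, timeout=10):
        delay = first_delay
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                if attempt == attempts:
                    raise
                time.sleep(delay)
                delay = min(delay * 2, 60.0)

    if __name__ == "__main__":
        body = fetch_with_backoff("https://metadata.packet.net/metadata")
        print(len(body), "bytes of metadata")
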
Feb 14 00:38:37.886253 ignition[1721]: Ignition 2.19.0 Feb 14 00:38:37.886260 ignition[1721]: Stage: disks Feb 14 00:38:37.886419 ignition[1721]: no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:37.886428 ignition[1721]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:37.887405 ignition[1721]: disks: disks passed Feb 14 00:38:37.887410 ignition[1721]: POST message to Packet Timeline Feb 14 00:38:37.887422 ignition[1721]: GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:38.894177 ignition[1721]: GET result: OK Feb 14 00:38:39.203641 ignition[1721]: Ignition finished successfully Feb 14 00:38:39.206631 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 14 00:38:39.212542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 14 00:38:39.220275 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 14 00:38:39.228535 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 00:38:39.237538 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 00:38:39.246670 systemd[1]: Reached target basic.target - Basic System. Feb 14 00:38:39.266731 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 14 00:38:39.282182 systemd-fsck[1738]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 14 00:38:39.285736 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 14 00:38:39.303663 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 14 00:38:39.368507 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 14 00:38:39.373374 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 14 00:38:39.378483 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 14 00:38:39.398632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 14 00:38:39.405590 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1748) Feb 14 00:38:39.406585 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:39.406597 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:39.406610 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:39.407585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:39.407599 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:39.499661 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 14 00:38:39.505950 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 14 00:38:39.517147 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Feb 14 00:38:39.532175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 14 00:38:39.532202 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 00:38:39.545277 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 14 00:38:39.575059 coreos-metadata[1767]: Feb 14 00:38:39.559 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:39.598382 coreos-metadata[1766]: Feb 14 00:38:39.559 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:39.558755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 14 00:38:39.587775 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 14 00:38:39.625917 initrd-setup-root[1789]: cut: /sysroot/etc/passwd: No such file or directory Feb 14 00:38:39.631809 initrd-setup-root[1796]: cut: /sysroot/etc/group: No such file or directory Feb 14 00:38:39.637668 initrd-setup-root[1803]: cut: /sysroot/etc/shadow: No such file or directory Feb 14 00:38:39.643444 initrd-setup-root[1810]: cut: /sysroot/etc/gshadow: No such file or directory Feb 14 00:38:39.710805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 14 00:38:39.734646 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 14 00:38:39.765562 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:39.741123 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 14 00:38:39.771870 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 14 00:38:39.787503 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 14 00:38:39.799187 ignition[1884]: INFO : Ignition 2.19.0 Feb 14 00:38:39.799187 ignition[1884]: INFO : Stage: mount Feb 14 00:38:39.809926 ignition[1884]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:39.809926 ignition[1884]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:39.809926 ignition[1884]: INFO : mount: mount passed Feb 14 00:38:39.809926 ignition[1884]: INFO : POST message to Packet Timeline Feb 14 00:38:39.809926 ignition[1884]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:40.389663 coreos-metadata[1766]: Feb 14 00:38:40.389 INFO Fetch successful Feb 14 00:38:40.396386 coreos-metadata[1767]: Feb 14 00:38:40.396 INFO Fetch successful Feb 14 00:38:40.432565 coreos-metadata[1766]: Feb 14 00:38:40.432 INFO wrote hostname ci-4081.3.1-a-a04cd882ea to /sysroot/etc/hostname Feb 14 00:38:40.435624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 14 00:38:40.446737 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 14 00:38:40.446813 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Feb 14 00:38:40.657218 ignition[1884]: INFO : GET result: OK Feb 14 00:38:40.959020 ignition[1884]: INFO : Ignition finished successfully Feb 14 00:38:40.961780 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 14 00:38:40.977704 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 14 00:38:40.989505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
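
[editor's note] flatcar-metadata-hostname.service above fetches the Packet/Equinix Metal metadata and writes the reported hostname (ci-4081.3.1-a-a04cd882ea) into /sysroot/etc/hostname so the real root boots with it. A small sketch of that idea, assuming the metadata document is JSON with a top-level "hostname" field; the field name is an assumption here, and the real agent is coreos-metadata, not this script:

    #!/usr/bin/env python3
    # Illustrative only: write the metadata-reported hostname under the new
    # root, the way the log shows being done for /sysroot/etc/hostname.
    import json
    import urllib.request
    from pathlib import Path

    METADATA_URL = "https://metadata.packet.net/metadata"  # from the log above
    SYSROOT = Path("/sysroot")                             # initrd mount point

    def write_hostname(url=METADATA_URL, sysroot=SYSROOT):
        with urllib.request.urlopen(url, timeout=10) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]        # assumed field name
        (sysroot / "etc/hostname").write_text(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname())
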
Feb 14 00:38:41.024024 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1910) Feb 14 00:38:41.024062 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 14 00:38:41.038361 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 14 00:38:41.051203 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 14 00:38:41.073786 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 14 00:38:41.073808 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Feb 14 00:38:41.081981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 14 00:38:41.112080 ignition[1928]: INFO : Ignition 2.19.0 Feb 14 00:38:41.112080 ignition[1928]: INFO : Stage: files Feb 14 00:38:41.121213 ignition[1928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:41.121213 ignition[1928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:41.121213 ignition[1928]: DEBUG : files: compiled without relabeling support, skipping Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 14 00:38:41.121213 ignition[1928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 14 00:38:41.121213 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 14 00:38:41.117687 unknown[1928]: wrote ssh authorized keys file for user: core Feb 14 00:38:41.210691 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 14 00:38:41.339073 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.349422 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 14 00:38:41.528293 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 14 00:38:41.964159 ignition[1928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 14 00:38:41.976494 ignition[1928]: INFO : files: files passed Feb 14 00:38:41.976494 ignition[1928]: INFO : POST message to Packet Timeline Feb 14 00:38:41.976494 ignition[1928]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:42.197068 ignition[1928]: INFO : GET result: OK Feb 14 00:38:42.568524 ignition[1928]: INFO : Ignition finished successfully Feb 14 00:38:42.571855 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 14 00:38:42.590711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 14 00:38:42.597412 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 14 00:38:42.609180 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 14 00:38:42.609261 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
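
[editor's note] The Ignition "files" stage above writes user files (helm tarball, install.sh, the nfs/nginx manifests), writes prepare-helm.service, and enables its preset. For orientation only, this is roughly the shape of an Ignition v3 config that would produce such operations; the spec version, the unit contents, and the exact field names here are assumptions and should be checked against the Ignition documentation rather than taken from this log:

    #!/usr/bin/env python3
    # Illustrative only: a hand-written approximation of an Ignition config
    # matching the "files" stage operations logged above.
    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"
                    },
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm\n",  # placeholder
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))
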
Feb 14 00:38:42.643987 initrd-setup-root-after-ignition[1969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.643987 initrd-setup-root-after-ignition[1969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.627120 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 00:38:42.689525 initrd-setup-root-after-ignition[1973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 14 00:38:42.639828 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 14 00:38:42.662726 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 14 00:38:42.703550 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 14 00:38:42.703633 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 14 00:38:42.713407 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 14 00:38:42.729138 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 14 00:38:42.740453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 14 00:38:42.754786 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 14 00:38:42.779062 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 00:38:42.804698 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 14 00:38:42.827403 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:38:42.833349 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:38:42.845029 systemd[1]: Stopped target timers.target - Timer Units. Feb 14 00:38:42.856709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 14 00:38:42.856811 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 14 00:38:42.868578 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 14 00:38:42.879989 systemd[1]: Stopped target basic.target - Basic System. Feb 14 00:38:42.891543 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 14 00:38:42.903067 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 14 00:38:42.914417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 14 00:38:42.925828 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 14 00:38:42.937199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 14 00:38:42.948530 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 14 00:38:42.959934 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 14 00:38:42.976823 systemd[1]: Stopped target swap.target - Swaps. Feb 14 00:38:42.988193 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 14 00:38:42.988290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 14 00:38:42.999731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:38:43.010847 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:38:43.021870 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 14 00:38:43.025645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:38:43.033020 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 14 00:38:43.033120 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 14 00:38:43.044246 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 14 00:38:43.044333 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 14 00:38:43.055505 systemd[1]: Stopped target paths.target - Path Units. Feb 14 00:38:43.066473 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 14 00:38:43.066599 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 00:38:43.083337 systemd[1]: Stopped target slices.target - Slice Units. Feb 14 00:38:43.094610 systemd[1]: Stopped target sockets.target - Socket Units. Feb 14 00:38:43.105905 systemd[1]: iscsid.socket: Deactivated successfully. Feb 14 00:38:43.105991 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 14 00:38:43.210975 ignition[1997]: INFO : Ignition 2.19.0 Feb 14 00:38:43.210975 ignition[1997]: INFO : Stage: umount Feb 14 00:38:43.210975 ignition[1997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 14 00:38:43.210975 ignition[1997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 14 00:38:43.210975 ignition[1997]: INFO : umount: umount passed Feb 14 00:38:43.210975 ignition[1997]: INFO : POST message to Packet Timeline Feb 14 00:38:43.210975 ignition[1997]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 14 00:38:43.117354 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 14 00:38:43.117458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 14 00:38:43.128881 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 14 00:38:43.128969 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 14 00:38:43.140292 systemd[1]: ignition-files.service: Deactivated successfully. Feb 14 00:38:43.140372 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 14 00:38:43.151738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 14 00:38:43.151817 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 14 00:38:43.174704 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 14 00:38:43.181349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 14 00:38:43.193300 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 14 00:38:43.193395 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:38:43.205232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 14 00:38:43.205315 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 14 00:38:43.219160 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 14 00:38:43.221121 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 14 00:38:43.221199 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 14 00:38:43.256750 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 14 00:38:43.256919 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 14 00:38:44.655429 ignition[1997]: INFO : GET result: OK Feb 14 00:38:45.002265 ignition[1997]: INFO : Ignition finished successfully Feb 14 00:38:45.004527 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 14 00:38:45.004730 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 14 00:38:45.012237 systemd[1]: Stopped target network.target - Network. Feb 14 00:38:45.021150 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 14 00:38:45.021206 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 14 00:38:45.030537 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 14 00:38:45.030585 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 14 00:38:45.039939 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 14 00:38:45.039970 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 14 00:38:45.049504 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 14 00:38:45.049548 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 14 00:38:45.059145 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 14 00:38:45.059173 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 14 00:38:45.068974 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 14 00:38:45.074608 systemd-networkd[1695]: enP1p1s0f0np0: DHCPv6 lease lost Feb 14 00:38:45.078470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 14 00:38:45.084726 systemd-networkd[1695]: enP1p1s0f1np1: DHCPv6 lease lost Feb 14 00:38:45.088358 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 14 00:38:45.088477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 14 00:38:45.100250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 14 00:38:45.100400 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:38:45.108375 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 14 00:38:45.108532 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 14 00:38:45.118407 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 14 00:38:45.118551 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:38:45.142707 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 14 00:38:45.152240 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 14 00:38:45.152306 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 14 00:38:45.162281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 14 00:38:45.162314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:38:45.172287 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 14 00:38:45.172320 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 14 00:38:45.182613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:38:45.202932 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 14 00:38:45.203038 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:38:45.210790 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 14 00:38:45.210942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 14 00:38:45.219974 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 14 00:38:45.220014 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:38:45.230622 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 14 00:38:45.230658 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 14 00:38:45.246878 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 14 00:38:45.246921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 14 00:38:45.257717 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 14 00:38:45.257765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 14 00:38:45.287756 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 14 00:38:45.296305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 14 00:38:45.296370 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:38:45.307363 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 14 00:38:45.307394 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 14 00:38:45.318376 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 14 00:38:45.318404 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:38:45.335456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 14 00:38:45.335490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:45.347245 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 14 00:38:45.347323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 14 00:38:45.867731 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 14 00:38:45.868687 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 14 00:38:45.878949 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 14 00:38:45.898691 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 14 00:38:45.912184 systemd[1]: Switching root. Feb 14 00:38:45.962805 systemd-journald[898]: Journal stopped Feb 14 00:38:47.924165 systemd-journald[898]: Received SIGTERM from PID 1 (systemd). Feb 14 00:38:47.924192 kernel: SELinux: policy capability network_peer_controls=1 Feb 14 00:38:47.924203 kernel: SELinux: policy capability open_perms=1 Feb 14 00:38:47.924211 kernel: SELinux: policy capability extended_socket_class=1 Feb 14 00:38:47.924218 kernel: SELinux: policy capability always_check_network=0 Feb 14 00:38:47.924226 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 14 00:38:47.924234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 14 00:38:47.924244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 14 00:38:47.924252 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 14 00:38:47.924260 kernel: audit: type=1403 audit(1739493526.146:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 14 00:38:47.924269 systemd[1]: Successfully loaded SELinux policy in 118.108ms. Feb 14 00:38:47.924278 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.488ms. 
Feb 14 00:38:47.924288 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 14 00:38:47.924297 systemd[1]: Detected architecture arm64. Feb 14 00:38:47.924308 systemd[1]: Detected first boot. Feb 14 00:38:47.924317 systemd[1]: Hostname set to . Feb 14 00:38:47.924326 systemd[1]: Initializing machine ID from random generator. Feb 14 00:38:47.924335 zram_generator::config[2079]: No configuration found. Feb 14 00:38:47.924346 systemd[1]: Populated /etc with preset unit settings. Feb 14 00:38:47.924355 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 14 00:38:47.924364 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 14 00:38:47.924373 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 14 00:38:47.924383 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 14 00:38:47.924392 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 14 00:38:47.924401 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 14 00:38:47.924415 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 14 00:38:47.924426 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 14 00:38:47.924435 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 14 00:38:47.924444 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 14 00:38:47.924453 systemd[1]: Created slice user.slice - User and Session Slice. Feb 14 00:38:47.924463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 14 00:38:47.924472 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 14 00:38:47.924481 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 14 00:38:47.924491 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 14 00:38:47.924501 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 14 00:38:47.924510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 14 00:38:47.924519 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 14 00:38:47.924529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 14 00:38:47.924538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 14 00:38:47.924547 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 14 00:38:47.924559 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 14 00:38:47.924568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 14 00:38:47.924579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 14 00:38:47.924593 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 14 00:38:47.924602 systemd[1]: Reached target slices.target - Slice Units. Feb 14 00:38:47.924611 systemd[1]: Reached target swap.target - Swaps. 
Feb 14 00:38:47.924620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 14 00:38:47.924630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 14 00:38:47.924639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 14 00:38:47.924650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 14 00:38:47.924660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 14 00:38:47.924669 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 14 00:38:47.924679 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 14 00:38:47.924688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 14 00:38:47.924700 systemd[1]: Mounting media.mount - External Media Directory... Feb 14 00:38:47.924709 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 14 00:38:47.924718 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 14 00:38:47.924728 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 14 00:38:47.924738 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 14 00:38:47.924748 systemd[1]: Reached target machines.target - Containers. Feb 14 00:38:47.924757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 14 00:38:47.924767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 00:38:47.924778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 14 00:38:47.924788 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 14 00:38:47.924797 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 00:38:47.924807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 00:38:47.924816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 00:38:47.924826 kernel: ACPI: bus type drm_connector registered Feb 14 00:38:47.924835 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 14 00:38:47.924844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 14 00:38:47.924854 kernel: fuse: init (API version 7.39) Feb 14 00:38:47.924864 kernel: loop: module loaded Feb 14 00:38:47.924873 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 14 00:38:47.924882 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 14 00:38:47.924892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 14 00:38:47.924901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 14 00:38:47.924910 systemd[1]: Stopped systemd-fsck-usr.service. Feb 14 00:38:47.924920 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 14 00:38:47.924929 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 14 00:38:47.924955 systemd-journald[2190]: Collecting audit messages is disabled. 
Feb 14 00:38:47.924974 systemd-journald[2190]: Journal started Feb 14 00:38:47.924995 systemd-journald[2190]: Runtime Journal (/run/log/journal/652fa916e5fc4dfe9752aa7f6cd0ce50) is 8.0M, max 4.0G, 3.9G free. Feb 14 00:38:46.662247 systemd[1]: Queued start job for default target multi-user.target. Feb 14 00:38:46.680497 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 14 00:38:46.680841 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 14 00:38:46.681126 systemd[1]: systemd-journald.service: Consumed 3.321s CPU time. Feb 14 00:38:47.948594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 14 00:38:47.975593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 14 00:38:47.996594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 14 00:38:48.018533 systemd[1]: verity-setup.service: Deactivated successfully. Feb 14 00:38:48.018573 systemd[1]: Stopped verity-setup.service. Feb 14 00:38:48.042593 systemd[1]: Started systemd-journald.service - Journal Service. Feb 14 00:38:48.048220 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 14 00:38:48.053567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 14 00:38:48.058836 systemd[1]: Mounted media.mount - External Media Directory. Feb 14 00:38:48.063987 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 14 00:38:48.069146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 14 00:38:48.074274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 14 00:38:48.079482 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 14 00:38:48.084728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 14 00:38:48.089972 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 14 00:38:48.090703 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 14 00:38:48.095955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 00:38:48.096099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 00:38:48.101346 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 00:38:48.102611 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 00:38:48.107793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 00:38:48.107942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 00:38:48.113060 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 14 00:38:48.113785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 14 00:38:48.118865 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 00:38:48.119014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 00:38:48.123883 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 14 00:38:48.130607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 14 00:38:48.135623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 14 00:38:48.142612 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 14 00:38:48.158495 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Feb 14 00:38:48.173852 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 14 00:38:48.179713 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 14 00:38:48.184423 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 14 00:38:48.184454 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 14 00:38:48.191399 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 14 00:38:48.197076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 14 00:38:48.202866 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 14 00:38:48.207667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 00:38:48.209344 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 14 00:38:48.214987 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 14 00:38:48.219762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 00:38:48.220900 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 14 00:38:48.225557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 00:38:48.226640 systemd-journald[2190]: Time spent on flushing to /var/log/journal/652fa916e5fc4dfe9752aa7f6cd0ce50 is 25.584ms for 2348 entries. Feb 14 00:38:48.226640 systemd-journald[2190]: System Journal (/var/log/journal/652fa916e5fc4dfe9752aa7f6cd0ce50) is 8.0M, max 195.6M, 187.6M free. Feb 14 00:38:48.268922 systemd-journald[2190]: Received client request to flush runtime journal. Feb 14 00:38:48.268974 kernel: loop0: detected capacity change from 0 to 114328 Feb 14 00:38:48.226861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 14 00:38:48.244690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 14 00:38:48.250394 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 14 00:38:48.256109 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 14 00:38:48.272450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 14 00:38:48.282709 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 14 00:38:48.283306 systemd-tmpfiles[2235]: ACLs are not supported, ignoring. Feb 14 00:38:48.283319 systemd-tmpfiles[2235]: ACLs are not supported, ignoring. Feb 14 00:38:48.286371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 14 00:38:48.290919 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 14 00:38:48.295513 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 14 00:38:48.300141 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 14 00:38:48.305035 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 14 00:38:48.309691 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
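(The journal figures above, a runtime journal capped at 4.0G and a system journal capped at 195.6M, with the flush to /var/log/journal taking about 25.6ms for 2348 entries, are governed by journald's size limits. A minimal sketch of how those caps could be inspected and tuned; the drop-in file name and the values below are examples, not what this image ships:)

  # Show how much disk the journal currently uses (stock journalctl option).
  journalctl --disk-usage

  # Illustrative drop-in limiting persistent and runtime journal size.
  mkdir -p /etc/systemd/journald.conf.d
  cat >/etc/systemd/journald.conf.d/50-size.conf <<'EOF'
  [Journal]
  SystemMaxUse=200M
  RuntimeMaxUse=64M
  EOF
  systemctl restart systemd-journald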
Feb 14 00:38:48.320163 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 14 00:38:48.323601 kernel: loop1: detected capacity change from 0 to 8 Feb 14 00:38:48.349845 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 14 00:38:48.356167 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 14 00:38:48.361817 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 14 00:38:48.362539 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 14 00:38:48.368874 udevadm[2237]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 14 00:38:48.381654 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 14 00:38:48.388701 kernel: loop2: detected capacity change from 0 to 114432 Feb 14 00:38:48.406776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 14 00:38:48.421516 systemd-tmpfiles[2273]: ACLs are not supported, ignoring. Feb 14 00:38:48.421529 systemd-tmpfiles[2273]: ACLs are not supported, ignoring. Feb 14 00:38:48.425135 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 14 00:38:48.459596 kernel: loop3: detected capacity change from 0 to 201592 Feb 14 00:38:48.464517 ldconfig[2224]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 14 00:38:48.466299 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 14 00:38:48.525592 kernel: loop4: detected capacity change from 0 to 114328 Feb 14 00:38:48.540593 kernel: loop5: detected capacity change from 0 to 8 Feb 14 00:38:48.552594 kernel: loop6: detected capacity change from 0 to 114432 Feb 14 00:38:48.568594 kernel: loop7: detected capacity change from 0 to 201592 Feb 14 00:38:48.581475 (sd-merge)[2280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Feb 14 00:38:48.581924 (sd-merge)[2280]: Merged extensions into '/usr'. Feb 14 00:38:48.584936 systemd[1]: Reloading requested from client PID 2234 ('systemd-sysext') (unit systemd-sysext.service)... Feb 14 00:38:48.584949 systemd[1]: Reloading... Feb 14 00:38:48.624592 zram_generator::config[2308]: No configuration found. Feb 14 00:38:48.717677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:38:48.766315 systemd[1]: Reloading finished in 180 ms. Feb 14 00:38:48.792749 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 14 00:38:48.798660 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 14 00:38:48.815714 systemd[1]: Starting ensure-sysext.service... Feb 14 00:38:48.821493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 14 00:38:48.827896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 14 00:38:48.834693 systemd[1]: Reloading requested from client PID 2362 ('systemctl') (unit ensure-sysext.service)... Feb 14 00:38:48.834704 systemd[1]: Reloading... Feb 14 00:38:48.841316 systemd-tmpfiles[2363]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 14 00:38:48.841558 systemd-tmpfiles[2363]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 14 00:38:48.842189 systemd-tmpfiles[2363]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 14 00:38:48.842394 systemd-tmpfiles[2363]: ACLs are not supported, ignoring. Feb 14 00:38:48.842440 systemd-tmpfiles[2363]: ACLs are not supported, ignoring. Feb 14 00:38:48.844699 systemd-tmpfiles[2363]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 00:38:48.844706 systemd-tmpfiles[2363]: Skipping /boot Feb 14 00:38:48.851364 systemd-tmpfiles[2363]: Detected autofs mount point /boot during canonicalization of boot. Feb 14 00:38:48.851372 systemd-tmpfiles[2363]: Skipping /boot Feb 14 00:38:48.854191 systemd-udevd[2364]: Using default interface naming scheme 'v255'. Feb 14 00:38:48.874586 zram_generator::config[2393]: No configuration found. Feb 14 00:38:48.909594 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2433) Feb 14 00:38:48.927627 kernel: IPMI message handler: version 39.2 Feb 14 00:38:48.936633 kernel: ipmi device interface Feb 14 00:38:48.948595 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 14 00:38:48.948679 kernel: ipmi_si: IPMI System Interface driver Feb 14 00:38:48.963599 kernel: ipmi_si: Unable to find any System Interface(s) Feb 14 00:38:48.984951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:38:49.046983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Feb 14 00:38:49.051623 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 14 00:38:49.051997 systemd[1]: Reloading finished in 217 ms. Feb 14 00:38:49.072220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 14 00:38:49.093034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 14 00:38:49.111042 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 14 00:38:49.119618 systemd[1]: Finished ensure-sysext.service. Feb 14 00:38:49.151805 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 14 00:38:49.157903 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 14 00:38:49.163124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 14 00:38:49.164191 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 14 00:38:49.170133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 14 00:38:49.175931 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 14 00:38:49.181525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 14 00:38:49.181875 lvm[2561]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 00:38:49.187209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
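(The "Duplicate line for path" messages for /root, /var/log/journal and /var/lib/systemd mean that two tmpfiles.d fragments declare the same path; systemd-tmpfiles keeps the first line and ignores the rest, so the warnings are harmless. A hedged way to locate the colliding fragments, using options that exist in current systemd; the grep target is just an example:)

  # Dump the merged tmpfiles configuration; each fragment is preceded by a
  # comment header naming its source file, so duplicate paths are easy to trace.
  systemd-tmpfiles --cat-config > /tmp/tmpfiles-merged.conf
  grep -n '/var/log/journal' /tmp/tmpfiles-merged.conf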
Feb 14 00:38:49.187496 augenrules[2575]: No rules Feb 14 00:38:49.191915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 14 00:38:49.192829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 14 00:38:49.198648 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 14 00:38:49.205100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 14 00:38:49.211570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 14 00:38:49.217713 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 14 00:38:49.223255 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 14 00:38:49.228757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 14 00:38:49.234613 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 00:38:49.239851 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 14 00:38:49.245426 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 14 00:38:49.250210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 14 00:38:49.250866 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 14 00:38:49.255574 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 14 00:38:49.255722 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 14 00:38:49.260481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 14 00:38:49.260624 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 14 00:38:49.265418 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 14 00:38:49.265555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 14 00:38:49.270254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 14 00:38:49.275744 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 14 00:38:49.280537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 14 00:38:49.293332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 14 00:38:49.315852 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 14 00:38:49.320192 lvm[2609]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 14 00:38:49.320411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 14 00:38:49.320474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 14 00:38:49.321669 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 14 00:38:49.328063 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 14 00:38:49.332779 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 14 00:38:49.333174 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
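("augenrules: No rules" simply means nothing under /etc/audit/rules.d contributed audit rules, so audit-rules.service loaded an empty set; likewise the lvm2 "Failed to connect to lvmetad" warning only indicates that LVM fell back to scanning devices directly. Purely as an illustration of how a rule would be added, assuming auditctl and augenrules are available on the image, the file name and rule below are examples:)

  # Illustrative rule: watch writes and attribute changes to /etc/passwd.
  cat >/etc/audit/rules.d/90-passwd.rules <<'EOF'
  -w /etc/passwd -p wa -k passwd_changes
  EOF
  augenrules --load    # regenerate and load /etc/audit/audit.rules
  auditctl -l          # list the rules now active in the kernel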
Feb 14 00:38:49.337940 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 14 00:38:49.360823 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 14 00:38:49.372024 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 14 00:38:49.402211 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 14 00:38:49.406895 systemd-resolved[2585]: Positive Trust Anchors: Feb 14 00:38:49.406907 systemd-resolved[2585]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 14 00:38:49.406938 systemd-resolved[2585]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 14 00:38:49.407077 systemd[1]: Reached target time-set.target - System Time Set. Feb 14 00:38:49.410668 systemd-resolved[2585]: Using system hostname 'ci-4081.3.1-a-a04cd882ea'. Feb 14 00:38:49.412216 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 14 00:38:49.415802 systemd-networkd[2584]: lo: Link UP Feb 14 00:38:49.415808 systemd-networkd[2584]: lo: Gained carrier Feb 14 00:38:49.417538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 14 00:38:49.419549 systemd-networkd[2584]: bond0: netdev ready Feb 14 00:38:49.421882 systemd[1]: Reached target sysinit.target - System Initialization. Feb 14 00:38:49.426154 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 14 00:38:49.428831 systemd-networkd[2584]: Enumeration completed Feb 14 00:38:49.430443 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 14 00:38:49.434893 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 14 00:38:49.437196 systemd-networkd[2584]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:ff:e4.network. Feb 14 00:38:49.439332 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 14 00:38:49.443638 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 14 00:38:49.447975 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 14 00:38:49.447997 systemd[1]: Reached target paths.target - Path Units. Feb 14 00:38:49.452299 systemd[1]: Reached target timers.target - Timer Units. Feb 14 00:38:49.457219 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 14 00:38:49.462888 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 14 00:38:49.472589 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 14 00:38:49.477404 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 14 00:38:49.481910 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 14 00:38:49.486376 systemd[1]: Reached target network.target - Network. 
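(systemd-networkd is matching each port by MAC address via the 10-0c:42:a1:49:ff:e4.network and 10-0c:42:a1:49:ff:e5.network files and enslaving both into bond0 per 05-bond0.network. Those files are not reproduced in the log; the sketch below only illustrates what a MAC-matched port file and a bond netdev typically look like in networkd syntax, and is not the actual Flatcar-generated content. The later "No 802.3ad response from the link partner" warning is consistent with an LACP bond whose switch side has not answered yet:)

  # Hypothetical port file: match one NIC by MAC and hand it to bond0.
  cat >/etc/systemd/network/10-port0.network <<'EOF'
  [Match]
  MACAddress=0c:42:a1:49:ff:e4

  [Network]
  Bond=bond0
  EOF

  # Hypothetical bond definition using LACP (802.3ad).
  cat >/etc/systemd/network/05-bond0.netdev <<'EOF'
  [NetDev]
  Name=bond0
  Kind=bond

  [Bond]
  Mode=802.3ad
  EOF

  networkctl reload    # ask networkd to pick up the new files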
Feb 14 00:38:49.490727 systemd[1]: Reached target sockets.target - Socket Units. Feb 14 00:38:49.494930 systemd[1]: Reached target basic.target - Basic System. Feb 14 00:38:49.499075 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 14 00:38:49.499094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 14 00:38:49.510649 systemd[1]: Starting containerd.service - containerd container runtime... Feb 14 00:38:49.516056 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 14 00:38:49.521477 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 14 00:38:49.526918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 14 00:38:49.532435 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 14 00:38:49.536842 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 14 00:38:49.537806 jq[2643]: false Feb 14 00:38:49.537899 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 14 00:38:49.538078 coreos-metadata[2639]: Feb 14 00:38:49.537 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:49.541233 coreos-metadata[2639]: Feb 14 00:38:49.541 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 00:38:49.543068 dbus-daemon[2640]: [system] SELinux support is enabled Feb 14 00:38:49.543223 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 14 00:38:49.548708 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 14 00:38:49.551798 extend-filesystems[2645]: Found loop4 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found loop5 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found loop6 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found loop7 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p1 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p2 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p3 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found usr Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p4 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p6 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p7 Feb 14 00:38:49.557775 extend-filesystems[2645]: Found nvme0n1p9 Feb 14 00:38:49.557775 extend-filesystems[2645]: Checking size of /dev/nvme0n1p9 Feb 14 00:38:49.687550 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Feb 14 00:38:49.687583 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2407) Feb 14 00:38:49.554363 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 14 00:38:49.687680 extend-filesystems[2645]: Resized partition /dev/nvme0n1p9 Feb 14 00:38:49.679930 dbus-daemon[2640]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 14 00:38:49.565644 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 14 00:38:49.692101 extend-filesystems[2665]: resize2fs 1.47.1 (20-May-2024) Feb 14 00:38:49.572182 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
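(The EXT4 messages here, resizing nvme0n1p9 from 553472 to 233815889 blocks, come from the extend-filesystems step growing the root filesystem to fill the disk; the resize completes further down in the log. A rough hand-rolled equivalent, assuming a GPT disk and the cloud-utils growpart tool, which may not be present on every image; resize2fs supports online resizing of a mounted ext4 filesystem:)

  # Grow partition 9 to the end of the disk (illustrative; not the Flatcar unit).
  growpart /dev/nvme0n1 9

  # Online-resize the mounted ext4 root filesystem to the new partition size.
  resize2fs /dev/nvme0n1p9

  # Confirm the new size.
  df -h /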
Feb 14 00:38:49.612446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 14 00:38:49.613083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 14 00:38:49.613793 systemd[1]: Starting update-engine.service - Update Engine... Feb 14 00:38:49.701606 update_engine[2673]: I20250214 00:38:49.664779 2673 main.cc:92] Flatcar Update Engine starting Feb 14 00:38:49.701606 update_engine[2673]: I20250214 00:38:49.667328 2673 update_check_scheduler.cc:74] Next update check in 4m0s Feb 14 00:38:49.620377 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 14 00:38:49.701934 jq[2674]: true Feb 14 00:38:49.628594 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 14 00:38:49.641331 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 14 00:38:49.702291 tar[2677]: linux-arm64/LICENSE Feb 14 00:38:49.702291 tar[2677]: linux-arm64/helm Feb 14 00:38:49.642617 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 14 00:38:49.702522 jq[2679]: true Feb 14 00:38:49.642901 systemd[1]: motdgen.service: Deactivated successfully. Feb 14 00:38:49.643056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 14 00:38:49.646482 systemd-logind[2664]: Watching system buttons on /dev/input/event0 (Power Button) Feb 14 00:38:49.650697 systemd-logind[2664]: New seat seat0. Feb 14 00:38:49.651521 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 14 00:38:49.651690 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 14 00:38:49.660664 systemd[1]: Started systemd-logind.service - User Login Management. Feb 14 00:38:49.679810 (ntainerd)[2680]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 14 00:38:49.689651 systemd[1]: Started update-engine.service - Update Engine. Feb 14 00:38:49.697526 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 14 00:38:49.697878 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 14 00:38:49.705846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 14 00:38:49.705950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 14 00:38:49.712192 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 14 00:38:49.727183 bash[2699]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:38:49.728487 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 14 00:38:49.741919 locksmithd[2700]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 14 00:38:49.753925 systemd[1]: Starting sshkeys.service... Feb 14 00:38:49.762811 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Feb 14 00:38:49.768742 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 14 00:38:49.788401 coreos-metadata[2725]: Feb 14 00:38:49.788 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 14 00:38:49.789482 coreos-metadata[2725]: Feb 14 00:38:49.789 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 00:38:49.819926 containerd[2680]: time="2025-02-14T00:38:49.819844760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 14 00:38:49.842902 containerd[2680]: time="2025-02-14T00:38:49.842859480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844232 containerd[2680]: time="2025-02-14T00:38:49.844203000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844253 containerd[2680]: time="2025-02-14T00:38:49.844230240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 14 00:38:49.844253 containerd[2680]: time="2025-02-14T00:38:49.844244880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 14 00:38:49.844402 containerd[2680]: time="2025-02-14T00:38:49.844389120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 14 00:38:49.844422 containerd[2680]: time="2025-02-14T00:38:49.844407280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844472 containerd[2680]: time="2025-02-14T00:38:49.844457840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844496 containerd[2680]: time="2025-02-14T00:38:49.844472240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844644 containerd[2680]: time="2025-02-14T00:38:49.844626000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844664 containerd[2680]: time="2025-02-14T00:38:49.844642080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844664 containerd[2680]: time="2025-02-14T00:38:49.844654920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844696 containerd[2680]: time="2025-02-14T00:38:49.844664200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844752 containerd[2680]: time="2025-02-14T00:38:49.844736960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 14 00:38:49.844942 containerd[2680]: time="2025-02-14T00:38:49.844923600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 14 00:38:49.845038 containerd[2680]: time="2025-02-14T00:38:49.845022480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 14 00:38:49.845059 containerd[2680]: time="2025-02-14T00:38:49.845037000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 14 00:38:49.845127 containerd[2680]: time="2025-02-14T00:38:49.845114200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 14 00:38:49.845165 containerd[2680]: time="2025-02-14T00:38:49.845153080Z" level=info msg="metadata content store policy set" policy=shared Feb 14 00:38:49.851350 containerd[2680]: time="2025-02-14T00:38:49.851323040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 14 00:38:49.851389 containerd[2680]: time="2025-02-14T00:38:49.851367520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 14 00:38:49.851389 containerd[2680]: time="2025-02-14T00:38:49.851382800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 14 00:38:49.851454 containerd[2680]: time="2025-02-14T00:38:49.851396720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 14 00:38:49.851454 containerd[2680]: time="2025-02-14T00:38:49.851411000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 14 00:38:49.851568 containerd[2680]: time="2025-02-14T00:38:49.851548640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 14 00:38:49.851783 containerd[2680]: time="2025-02-14T00:38:49.851765920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 14 00:38:49.851898 containerd[2680]: time="2025-02-14T00:38:49.851883720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 14 00:38:49.851918 containerd[2680]: time="2025-02-14T00:38:49.851901280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 14 00:38:49.851936 containerd[2680]: time="2025-02-14T00:38:49.851914880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 14 00:38:49.851936 containerd[2680]: time="2025-02-14T00:38:49.851929720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.851975 containerd[2680]: time="2025-02-14T00:38:49.851941720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.851975 containerd[2680]: time="2025-02-14T00:38:49.851954040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 14 00:38:49.851975 containerd[2680]: time="2025-02-14T00:38:49.851967160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.852023 containerd[2680]: time="2025-02-14T00:38:49.851981400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.852023 containerd[2680]: time="2025-02-14T00:38:49.851994480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.852023 containerd[2680]: time="2025-02-14T00:38:49.852006400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.852023 containerd[2680]: time="2025-02-14T00:38:49.852016960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 14 00:38:49.852091 containerd[2680]: time="2025-02-14T00:38:49.852036080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852091 containerd[2680]: time="2025-02-14T00:38:49.852049840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852091 containerd[2680]: time="2025-02-14T00:38:49.852062160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852091 containerd[2680]: time="2025-02-14T00:38:49.852075080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852091 containerd[2680]: time="2025-02-14T00:38:49.852087240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852099480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852115680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852128800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852140840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852154240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852165280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852177 containerd[2680]: time="2025-02-14T00:38:49.852176560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852291 containerd[2680]: time="2025-02-14T00:38:49.852189120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852291 containerd[2680]: time="2025-02-14T00:38:49.852204400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 14 00:38:49.852291 containerd[2680]: time="2025-02-14T00:38:49.852226840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852291 containerd[2680]: time="2025-02-14T00:38:49.852238880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852291 containerd[2680]: time="2025-02-14T00:38:49.852249240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 14 00:38:49.852375 containerd[2680]: time="2025-02-14T00:38:49.852353320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 14 00:38:49.852375 containerd[2680]: time="2025-02-14T00:38:49.852369040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 14 00:38:49.852412 containerd[2680]: time="2025-02-14T00:38:49.852379600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 14 00:38:49.852412 containerd[2680]: time="2025-02-14T00:38:49.852391320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 14 00:38:49.852412 containerd[2680]: time="2025-02-14T00:38:49.852400760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 14 00:38:49.852465 containerd[2680]: time="2025-02-14T00:38:49.852412280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 14 00:38:49.852465 containerd[2680]: time="2025-02-14T00:38:49.852424120Z" level=info msg="NRI interface is disabled by configuration." Feb 14 00:38:49.852465 containerd[2680]: time="2025-02-14T00:38:49.852435560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 14 00:38:49.852823 containerd[2680]: time="2025-02-14T00:38:49.852772760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 14 00:38:49.852919 containerd[2680]: time="2025-02-14T00:38:49.852828760Z" level=info msg="Connect containerd service" Feb 14 00:38:49.852919 containerd[2680]: time="2025-02-14T00:38:49.852852800Z" level=info msg="using legacy CRI server" Feb 14 00:38:49.852919 containerd[2680]: time="2025-02-14T00:38:49.852859280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 14 00:38:49.852979 containerd[2680]: time="2025-02-14T00:38:49.852933720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 14 00:38:49.853534 containerd[2680]: time="2025-02-14T00:38:49.853510160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 00:38:49.853733 
containerd[2680]: time="2025-02-14T00:38:49.853689880Z" level=info msg="Start subscribing containerd event" Feb 14 00:38:49.853759 containerd[2680]: time="2025-02-14T00:38:49.853748840Z" level=info msg="Start recovering state" Feb 14 00:38:49.853829 containerd[2680]: time="2025-02-14T00:38:49.853816440Z" level=info msg="Start event monitor" Feb 14 00:38:49.853857 containerd[2680]: time="2025-02-14T00:38:49.853830560Z" level=info msg="Start snapshots syncer" Feb 14 00:38:49.853857 containerd[2680]: time="2025-02-14T00:38:49.853840080Z" level=info msg="Start cni network conf syncer for default" Feb 14 00:38:49.853857 containerd[2680]: time="2025-02-14T00:38:49.853847680Z" level=info msg="Start streaming server" Feb 14 00:38:49.854000 containerd[2680]: time="2025-02-14T00:38:49.853982040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 14 00:38:49.854034 containerd[2680]: time="2025-02-14T00:38:49.854024400Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 14 00:38:49.854081 containerd[2680]: time="2025-02-14T00:38:49.854069800Z" level=info msg="containerd successfully booted in 0.035531s" Feb 14 00:38:49.854119 systemd[1]: Started containerd.service - containerd container runtime. Feb 14 00:38:50.014983 tar[2677]: linux-arm64/README.md Feb 14 00:38:50.029002 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 14 00:38:50.082597 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Feb 14 00:38:50.097391 extend-filesystems[2665]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 14 00:38:50.097391 extend-filesystems[2665]: old_desc_blocks = 1, new_desc_blocks = 112 Feb 14 00:38:50.097391 extend-filesystems[2665]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Feb 14 00:38:50.125932 extend-filesystems[2645]: Resized filesystem in /dev/nvme0n1p9 Feb 14 00:38:50.125932 extend-filesystems[2645]: Found nvme1n1 Feb 14 00:38:50.099814 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 14 00:38:50.100098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 14 00:38:50.283689 sshd_keygen[2670]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 14 00:38:50.302259 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 14 00:38:50.323932 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 14 00:38:50.332963 systemd[1]: issuegen.service: Deactivated successfully. Feb 14 00:38:50.333144 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 14 00:38:50.339487 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 14 00:38:50.352647 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 14 00:38:50.359942 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 14 00:38:50.366735 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 14 00:38:50.372197 systemd[1]: Reached target getty.target - Login Prompts. 
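(Two details stand out in the containerd start-up above: the CRI plugin runs the runc runtime with SystemdCgroup:true, and CNI setup is deferred because /etc/cni/net.d is still empty, hence the "cni plugin not initialized" error, which is expected before a network add-on installs its config. A hedged sketch of the equivalent config.toml fragment and a throwaway bridge CNI config that would satisfy the CRI plugin; the file names, network name and subnet are examples only, and Flatcar ships its own configuration:)

  # Fragment equivalent to SystemdCgroup:true in the dumped CRI config.
  cat >>/etc/containerd/config.toml <<'EOF'
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
  EOF

  # Minimal bridge CNI config so the CRI plugin finds a network; a real cluster
  # would get this from its network add-on instead.
  cat >/etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "example",
    "plugins": [
      { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF
  systemctl restart containerd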
Feb 14 00:38:50.541382 coreos-metadata[2639]: Feb 14 00:38:50.541 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 14 00:38:50.541942 coreos-metadata[2639]: Feb 14 00:38:50.541 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 00:38:50.786600 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Feb 14 00:38:50.789614 coreos-metadata[2725]: Feb 14 00:38:50.789 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 14 00:38:50.790004 coreos-metadata[2725]: Feb 14 00:38:50.789 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Feb 14 00:38:50.803585 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Feb 14 00:38:50.808982 systemd-networkd[2584]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:ff:e5.network. Feb 14 00:38:51.420597 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Feb 14 00:38:51.437444 systemd-networkd[2584]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 14 00:38:51.437586 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Feb 14 00:38:51.438763 systemd-networkd[2584]: enP1p1s0f0np0: Link UP Feb 14 00:38:51.439055 systemd-networkd[2584]: enP1p1s0f0np0: Gained carrier Feb 14 00:38:51.457591 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 14 00:38:51.465212 systemd-networkd[2584]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:ff:e4.network. Feb 14 00:38:51.465519 systemd-networkd[2584]: enP1p1s0f1np1: Link UP Feb 14 00:38:51.465843 systemd-networkd[2584]: enP1p1s0f1np1: Gained carrier Feb 14 00:38:51.472848 systemd-networkd[2584]: bond0: Link UP Feb 14 00:38:51.473196 systemd-networkd[2584]: bond0: Gained carrier Feb 14 00:38:51.473397 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:51.473992 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:51.474323 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:51.474471 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:51.559756 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Feb 14 00:38:51.559791 kernel: bond0: active interface up! Feb 14 00:38:51.683591 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Feb 14 00:38:52.542053 coreos-metadata[2639]: Feb 14 00:38:52.542 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 14 00:38:52.790094 coreos-metadata[2725]: Feb 14 00:38:52.790 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Feb 14 00:38:53.500923 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:53.500944 systemd-networkd[2584]: bond0: Gained IPv6LL Feb 14 00:38:53.501112 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:38:53.503222 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 14 00:38:53.509228 systemd[1]: Reached target network-online.target - Network is Online. Feb 14 00:38:53.526860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:38:53.533366 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Feb 14 00:38:53.557737 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 14 00:38:54.191288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:38:54.197163 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:38:54.594447 kubelet[2789]: E0214 00:38:54.594381 2789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:38:54.596610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:38:54.596747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:38:55.275830 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 14 00:38:55.293968 systemd[1]: Started sshd@0-147.28.162.217:22-139.178.68.195:44976.service - OpenSSH per-connection server daemon (139.178.68.195:44976). Feb 14 00:38:55.371591 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Feb 14 00:38:55.371983 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Feb 14 00:38:55.417229 login[2768]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 14 00:38:55.417481 login[2767]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:38:55.425471 systemd-logind[2664]: New session 1 of user core. Feb 14 00:38:55.426871 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 14 00:38:55.438819 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 14 00:38:55.446934 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 14 00:38:55.449170 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 14 00:38:55.455242 (systemd)[2824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 14 00:38:55.548332 systemd[2824]: Queued start job for default target default.target. Feb 14 00:38:55.572789 systemd[2824]: Created slice app.slice - User Application Slice. Feb 14 00:38:55.572815 systemd[2824]: Reached target paths.target - Paths. Feb 14 00:38:55.572827 systemd[2824]: Reached target timers.target - Timers. Feb 14 00:38:55.574011 systemd[2824]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 14 00:38:55.582850 systemd[2824]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 14 00:38:55.582903 systemd[2824]: Reached target sockets.target - Sockets. Feb 14 00:38:55.582915 systemd[2824]: Reached target basic.target - Basic System. Feb 14 00:38:55.582952 systemd[2824]: Reached target default.target - Main User Target. Feb 14 00:38:55.582974 systemd[2824]: Startup finished in 122ms. Feb 14 00:38:55.583351 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 14 00:38:55.584839 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 14 00:38:55.695556 sshd[2812]: Accepted publickey for core from 139.178.68.195 port 44976 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:38:55.696934 sshd[2812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:38:55.699969 systemd-logind[2664]: New session 3 of user core. 
Feb 14 00:38:55.709722 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 14 00:38:56.063000 systemd[1]: Started sshd@1-147.28.162.217:22-139.178.68.195:44988.service - OpenSSH per-connection server daemon (139.178.68.195:44988). Feb 14 00:38:56.146077 coreos-metadata[2725]: Feb 14 00:38:56.146 INFO Fetch successful Feb 14 00:38:56.193457 unknown[2725]: wrote ssh authorized keys file for user: core Feb 14 00:38:56.223359 update-ssh-keys[2852]: Updated "/home/core/.ssh/authorized_keys" Feb 14 00:38:56.224425 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 14 00:38:56.225883 systemd[1]: Finished sshkeys.service. Feb 14 00:38:56.418690 login[2768]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:38:56.421618 systemd-logind[2664]: New session 2 of user core. Feb 14 00:38:56.435686 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 14 00:38:56.465885 sshd[2849]: Accepted publickey for core from 139.178.68.195 port 44988 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:38:56.467069 sshd[2849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:38:56.469691 systemd-logind[2664]: New session 4 of user core. Feb 14 00:38:56.480687 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 14 00:38:56.699454 coreos-metadata[2639]: Feb 14 00:38:56.699 INFO Fetch successful Feb 14 00:38:56.762202 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 14 00:38:56.763933 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Feb 14 00:38:56.766784 sshd[2849]: pam_unix(sshd:session): session closed for user core Feb 14 00:38:56.769210 systemd[1]: sshd@1-147.28.162.217:22-139.178.68.195:44988.service: Deactivated successfully. Feb 14 00:38:56.770568 systemd[1]: session-4.scope: Deactivated successfully. Feb 14 00:38:56.771051 systemd-logind[2664]: Session 4 logged out. Waiting for processes to exit. Feb 14 00:38:56.771609 systemd-logind[2664]: Removed session 4. Feb 14 00:38:56.844571 systemd[1]: Started sshd@2-147.28.162.217:22-139.178.68.195:44998.service - OpenSSH per-connection server daemon (139.178.68.195:44998). Feb 14 00:38:57.064366 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Feb 14 00:38:57.064893 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 14 00:38:57.068651 systemd[1]: Startup finished in 3.221s (kernel) + 19.673s (initrd) + 11.039s (userspace) = 33.935s. Feb 14 00:38:57.257958 sshd[2874]: Accepted publickey for core from 139.178.68.195 port 44998 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:38:57.259146 sshd[2874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:38:57.261807 systemd-logind[2664]: New session 5 of user core. Feb 14 00:38:57.271683 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 14 00:38:57.563552 sshd[2874]: pam_unix(sshd:session): session closed for user core Feb 14 00:38:57.567066 systemd[1]: sshd@2-147.28.162.217:22-139.178.68.195:44998.service: Deactivated successfully. Feb 14 00:38:57.568804 systemd[1]: session-5.scope: Deactivated successfully. Feb 14 00:38:57.569324 systemd-logind[2664]: Session 5 logged out. Waiting for processes to exit. Feb 14 00:38:57.569890 systemd-logind[2664]: Removed session 5. Feb 14 00:39:04.639503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 14 00:39:04.649787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:04.754762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:04.758367 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:39:04.790923 kubelet[2890]: E0214 00:39:04.790890 2890 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:39:04.793977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:39:04.794109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:39:07.633912 systemd[1]: Started sshd@3-147.28.162.217:22-139.178.68.195:56372.service - OpenSSH per-connection server daemon (139.178.68.195:56372). Feb 14 00:39:08.039314 sshd[2907]: Accepted publickey for core from 139.178.68.195 port 56372 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:08.040419 sshd[2907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:08.043570 systemd-logind[2664]: New session 6 of user core. Feb 14 00:39:08.052692 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 14 00:39:08.339656 sshd[2907]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:08.343075 systemd[1]: sshd@3-147.28.162.217:22-139.178.68.195:56372.service: Deactivated successfully. Feb 14 00:39:08.344705 systemd[1]: session-6.scope: Deactivated successfully. Feb 14 00:39:08.345220 systemd-logind[2664]: Session 6 logged out. Waiting for processes to exit. Feb 14 00:39:08.345766 systemd-logind[2664]: Removed session 6. Feb 14 00:39:08.418554 systemd[1]: Started sshd@4-147.28.162.217:22-139.178.68.195:56376.service - OpenSSH per-connection server daemon (139.178.68.195:56376). Feb 14 00:39:08.827942 sshd[2914]: Accepted publickey for core from 139.178.68.195 port 56376 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:08.829045 sshd[2914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:08.831908 systemd-logind[2664]: New session 7 of user core. Feb 14 00:39:08.841681 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 14 00:39:09.126885 sshd[2914]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:09.130493 systemd[1]: sshd@4-147.28.162.217:22-139.178.68.195:56376.service: Deactivated successfully. Feb 14 00:39:09.132941 systemd[1]: session-7.scope: Deactivated successfully. Feb 14 00:39:09.133944 systemd-logind[2664]: Session 7 logged out. Waiting for processes to exit. Feb 14 00:39:09.134464 systemd-logind[2664]: Removed session 7. Feb 14 00:39:09.206564 systemd[1]: Started sshd@5-147.28.162.217:22-139.178.68.195:56384.service - OpenSSH per-connection server daemon (139.178.68.195:56384). Feb 14 00:39:09.622892 sshd[2921]: Accepted publickey for core from 139.178.68.195 port 56384 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:09.624042 sshd[2921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:09.626726 systemd-logind[2664]: New session 8 of user core. 
Feb 14 00:39:09.635693 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 14 00:39:09.930313 sshd[2921]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:09.933782 systemd[1]: sshd@5-147.28.162.217:22-139.178.68.195:56384.service: Deactivated successfully. Feb 14 00:39:09.935384 systemd[1]: session-8.scope: Deactivated successfully. Feb 14 00:39:09.935902 systemd-logind[2664]: Session 8 logged out. Waiting for processes to exit. Feb 14 00:39:09.936434 systemd-logind[2664]: Removed session 8. Feb 14 00:39:10.005536 systemd[1]: Started sshd@6-147.28.162.217:22-139.178.68.195:56396.service - OpenSSH per-connection server daemon (139.178.68.195:56396). Feb 14 00:39:10.408564 sshd[2928]: Accepted publickey for core from 139.178.68.195 port 56396 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:10.409779 sshd[2928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:10.412407 systemd-logind[2664]: New session 9 of user core. Feb 14 00:39:10.421686 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 14 00:39:10.651537 sudo[2931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 14 00:39:10.651817 sudo[2931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:39:10.664377 sudo[2931]: pam_unix(sudo:session): session closed for user root Feb 14 00:39:10.728553 sshd[2928]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:10.732489 systemd[1]: sshd@6-147.28.162.217:22-139.178.68.195:56396.service: Deactivated successfully. Feb 14 00:39:10.735139 systemd[1]: session-9.scope: Deactivated successfully. Feb 14 00:39:10.735654 systemd-logind[2664]: Session 9 logged out. Waiting for processes to exit. Feb 14 00:39:10.736223 systemd-logind[2664]: Removed session 9. Feb 14 00:39:10.803743 systemd[1]: Started sshd@7-147.28.162.217:22-139.178.68.195:56398.service - OpenSSH per-connection server daemon (139.178.68.195:56398). Feb 14 00:39:11.209978 sshd[2936]: Accepted publickey for core from 139.178.68.195 port 56398 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:11.211206 sshd[2936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:11.213901 systemd-logind[2664]: New session 10 of user core. Feb 14 00:39:11.224736 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 14 00:39:11.446310 sudo[2940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 14 00:39:11.446578 sudo[2940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:39:11.448949 sudo[2940]: pam_unix(sudo:session): session closed for user root Feb 14 00:39:11.453418 sudo[2939]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 14 00:39:11.453692 sudo[2939]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:39:11.471866 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 14 00:39:11.472910 auditctl[2943]: No rules Feb 14 00:39:11.473737 systemd[1]: audit-rules.service: Deactivated successfully. Feb 14 00:39:11.473915 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 14 00:39:11.475526 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Feb 14 00:39:11.497545 augenrules[2961]: No rules Feb 14 00:39:11.499662 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 14 00:39:11.500563 sudo[2939]: pam_unix(sudo:session): session closed for user root Feb 14 00:39:11.564239 sshd[2936]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:11.567006 systemd[1]: sshd@7-147.28.162.217:22-139.178.68.195:56398.service: Deactivated successfully. Feb 14 00:39:11.568966 systemd[1]: session-10.scope: Deactivated successfully. Feb 14 00:39:11.569449 systemd-logind[2664]: Session 10 logged out. Waiting for processes to exit. Feb 14 00:39:11.570027 systemd-logind[2664]: Removed session 10. Feb 14 00:39:11.637650 systemd[1]: Started sshd@8-147.28.162.217:22-139.178.68.195:56404.service - OpenSSH per-connection server daemon (139.178.68.195:56404). Feb 14 00:39:12.053210 sshd[2970]: Accepted publickey for core from 139.178.68.195 port 56404 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:39:12.054290 sshd[2970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:39:12.057054 systemd-logind[2664]: New session 11 of user core. Feb 14 00:39:12.066691 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 14 00:39:12.294996 sudo[2973]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 14 00:39:12.295270 sudo[2973]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 14 00:39:12.571784 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 14 00:39:12.571982 (dockerd)[3003]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 14 00:39:12.785087 dockerd[3003]: time="2025-02-14T00:39:12.785036080Z" level=info msg="Starting up" Feb 14 00:39:12.844363 dockerd[3003]: time="2025-02-14T00:39:12.844283280Z" level=info msg="Loading containers: start." Feb 14 00:39:12.935591 kernel: Initializing XFRM netlink socket Feb 14 00:39:12.953436 systemd-timesyncd[2586]: Network configuration changed, trying to establish connection. Feb 14 00:39:13.000612 systemd-networkd[2584]: docker0: Link UP Feb 14 00:39:13.013763 dockerd[3003]: time="2025-02-14T00:39:13.013736600Z" level=info msg="Loading containers: done." Feb 14 00:39:13.023788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2125716505-merged.mount: Deactivated successfully. Feb 14 00:39:13.025302 dockerd[3003]: time="2025-02-14T00:39:13.025268280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 14 00:39:13.025362 dockerd[3003]: time="2025-02-14T00:39:13.025347240Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 14 00:39:13.025478 dockerd[3003]: time="2025-02-14T00:39:13.025459520Z" level=info msg="Daemon has completed initialization" Feb 14 00:39:13.045893 dockerd[3003]: time="2025-02-14T00:39:13.045782080Z" level=info msg="API listen on /run/docker.sock" Feb 14 00:39:13.045906 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 14 00:39:12.909919 systemd-resolved[2585]: Clock change detected. Flushing caches. Feb 14 00:39:12.918250 systemd-journald[2190]: Time jumped backwards, rotating. 
Feb 14 00:39:12.910124 systemd-timesyncd[2586]: Contacted time server [2602:f9ba:69::210]:123 (2.flatcar.pool.ntp.org). Feb 14 00:39:12.910172 systemd-timesyncd[2586]: Initial clock synchronization to Fri 2025-02-14 00:39:12.909852 UTC. Feb 14 00:39:12.989076 containerd[2680]: time="2025-02-14T00:39:12.989037185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 14 00:39:13.430884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278912539.mount: Deactivated successfully. Feb 14 00:39:14.369066 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 14 00:39:14.379941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:14.484910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:14.488488 (kubelet)[3280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:39:14.518442 kubelet[3280]: E0214 00:39:14.518405 3280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:39:14.521051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:39:14.521183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:39:15.143145 containerd[2680]: time="2025-02-14T00:39:15.143096905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:15.143344 containerd[2680]: time="2025-02-14T00:39:15.143112865Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236" Feb 14 00:39:15.145439 containerd[2680]: time="2025-02-14T00:39:15.145406265Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:15.148257 containerd[2680]: time="2025-02-14T00:39:15.148229465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:15.149363 containerd[2680]: time="2025-02-14T00:39:15.149337425Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.16025488s" Feb 14 00:39:15.149389 containerd[2680]: time="2025-02-14T00:39:15.149373305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 14 00:39:15.149953 containerd[2680]: time="2025-02-14T00:39:15.149936305Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 14 00:39:16.612753 containerd[2680]: time="2025-02-14T00:39:16.612712945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:16.613007 containerd[2680]: time="2025-02-14T00:39:16.612762625Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145" Feb 14 00:39:16.613825 containerd[2680]: time="2025-02-14T00:39:16.613803505Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:16.616602 containerd[2680]: time="2025-02-14T00:39:16.616576185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:16.617759 containerd[2680]: time="2025-02-14T00:39:16.617729105Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.46776536s" Feb 14 00:39:16.617793 containerd[2680]: time="2025-02-14T00:39:16.617765745Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 14 00:39:16.618157 containerd[2680]: time="2025-02-14T00:39:16.618132265Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 14 00:39:18.172717 containerd[2680]: time="2025-02-14T00:39:18.172644025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:18.173034 containerd[2680]: time="2025-02-14T00:39:18.172712505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800" Feb 14 00:39:18.175059 containerd[2680]: time="2025-02-14T00:39:18.174627985Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:18.177952 containerd[2680]: time="2025-02-14T00:39:18.177914985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:18.179199 containerd[2680]: time="2025-02-14T00:39:18.179111185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.5609426s" Feb 14 00:39:18.179199 containerd[2680]: time="2025-02-14T00:39:18.179147305Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 14 00:39:18.179686 containerd[2680]: time="2025-02-14T00:39:18.179496145Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 14 00:39:19.355661 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3530413817.mount: Deactivated successfully. Feb 14 00:39:19.541940 containerd[2680]: time="2025-02-14T00:39:19.541893585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:19.542232 containerd[2680]: time="2025-02-14T00:39:19.541949545Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 14 00:39:19.542680 containerd[2680]: time="2025-02-14T00:39:19.542660905Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:19.544595 containerd[2680]: time="2025-02-14T00:39:19.544573545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:19.545226 containerd[2680]: time="2025-02-14T00:39:19.545199865Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.3656692s" Feb 14 00:39:19.545255 containerd[2680]: time="2025-02-14T00:39:19.545233865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 14 00:39:19.545577 containerd[2680]: time="2025-02-14T00:39:19.545554465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 14 00:39:19.859181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060285081.mount: Deactivated successfully. 
Feb 14 00:39:20.909354 containerd[2680]: time="2025-02-14T00:39:20.909299705Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Feb 14 00:39:20.909636 containerd[2680]: time="2025-02-14T00:39:20.909327185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:20.910557 containerd[2680]: time="2025-02-14T00:39:20.910528465Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:20.913516 containerd[2680]: time="2025-02-14T00:39:20.913491025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:20.914678 containerd[2680]: time="2025-02-14T00:39:20.914655665Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3690682s" Feb 14 00:39:20.914802 containerd[2680]: time="2025-02-14T00:39:20.914683425Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 14 00:39:20.915059 containerd[2680]: time="2025-02-14T00:39:20.915038065Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 14 00:39:21.190301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441155213.mount: Deactivated successfully. 
Feb 14 00:39:21.190910 containerd[2680]: time="2025-02-14T00:39:21.190823785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:21.190910 containerd[2680]: time="2025-02-14T00:39:21.190897065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 14 00:39:21.191588 containerd[2680]: time="2025-02-14T00:39:21.191558505Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:21.193701 containerd[2680]: time="2025-02-14T00:39:21.193654985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:21.194494 containerd[2680]: time="2025-02-14T00:39:21.194414945Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 279.34496ms" Feb 14 00:39:21.194494 containerd[2680]: time="2025-02-14T00:39:21.194451105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 14 00:39:21.194963 containerd[2680]: time="2025-02-14T00:39:21.194805985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 14 00:39:21.585078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419167693.mount: Deactivated successfully. Feb 14 00:39:24.619080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 14 00:39:24.629633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 14 00:39:24.648734 containerd[2680]: time="2025-02-14T00:39:24.648694905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:24.648943 containerd[2680]: time="2025-02-14T00:39:24.648716985Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Feb 14 00:39:24.649945 containerd[2680]: time="2025-02-14T00:39:24.649922905Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:24.653037 containerd[2680]: time="2025-02-14T00:39:24.653008785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:24.654245 containerd[2680]: time="2025-02-14T00:39:24.654220785Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.45938128s" Feb 14 00:39:24.654268 containerd[2680]: time="2025-02-14T00:39:24.654252345Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 14 00:39:24.731883 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:24.735367 (kubelet)[3477]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 14 00:39:24.765969 kubelet[3477]: E0214 00:39:24.765931 3477 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 14 00:39:24.768231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 14 00:39:24.768371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 14 00:39:29.538451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:29.550037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:29.567700 systemd[1]: Reloading requested from client PID 3549 ('systemctl') (unit session-11.scope)... Feb 14 00:39:29.567711 systemd[1]: Reloading... Feb 14 00:39:29.627746 zram_generator::config[3592]: No configuration found. Feb 14 00:39:29.719777 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:39:29.790922 systemd[1]: Reloading finished in 222 ms. Feb 14 00:39:29.837596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:29.839802 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 00:39:29.840008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:29.841615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 14 00:39:29.949915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:29.953593 (kubelet)[3657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 00:39:29.984742 kubelet[3657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:39:29.984742 kubelet[3657]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 14 00:39:29.984742 kubelet[3657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:39:29.985034 kubelet[3657]: I0214 00:39:29.984810 3657 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 00:39:30.749595 kubelet[3657]: I0214 00:39:30.749564 3657 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 14 00:39:30.749595 kubelet[3657]: I0214 00:39:30.749592 3657 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 00:39:30.749849 kubelet[3657]: I0214 00:39:30.749834 3657 server.go:954] "Client rotation is on, will bootstrap in background" Feb 14 00:39:30.779930 kubelet[3657]: E0214 00:39:30.779903 3657 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.162.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:30.780795 kubelet[3657]: I0214 00:39:30.780772 3657 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 00:39:30.786414 kubelet[3657]: E0214 00:39:30.786391 3657 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 00:39:30.786437 kubelet[3657]: I0214 00:39:30.786414 3657 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 00:39:30.806629 kubelet[3657]: I0214 00:39:30.806608 3657 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 00:39:30.807225 kubelet[3657]: I0214 00:39:30.807193 3657 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 00:39:30.807393 kubelet[3657]: I0214 00:39:30.807228 3657 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-a04cd882ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 00:39:30.807476 kubelet[3657]: I0214 00:39:30.807465 3657 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 00:39:30.807476 kubelet[3657]: I0214 00:39:30.807474 3657 container_manager_linux.go:304] "Creating device plugin manager" Feb 14 00:39:30.807688 kubelet[3657]: I0214 00:39:30.807677 3657 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:39:30.810419 kubelet[3657]: I0214 00:39:30.810404 3657 kubelet.go:446] "Attempting to sync node with API server" Feb 14 00:39:30.810441 kubelet[3657]: I0214 00:39:30.810424 3657 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 00:39:30.810460 kubelet[3657]: I0214 00:39:30.810441 3657 kubelet.go:352] "Adding apiserver pod source" Feb 14 00:39:30.810460 kubelet[3657]: I0214 00:39:30.810451 3657 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 00:39:30.813010 kubelet[3657]: I0214 00:39:30.812970 3657 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 00:39:30.813555 kubelet[3657]: I0214 00:39:30.813540 3657 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 00:39:30.813664 kubelet[3657]: W0214 00:39:30.813653 3657 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 14 00:39:30.813718 kubelet[3657]: W0214 00:39:30.813674 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.162.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-a04cd882ea&limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:30.813753 kubelet[3657]: W0214 00:39:30.813673 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.162.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:30.813753 kubelet[3657]: E0214 00:39:30.813739 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.162.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-a04cd882ea&limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:30.813796 kubelet[3657]: E0214 00:39:30.813759 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.162.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:30.814409 kubelet[3657]: I0214 00:39:30.814398 3657 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 14 00:39:30.814434 kubelet[3657]: I0214 00:39:30.814428 3657 server.go:1287] "Started kubelet" Feb 14 00:39:30.814519 kubelet[3657]: I0214 00:39:30.814483 3657 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 00:39:30.818068 kubelet[3657]: I0214 00:39:30.817993 3657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 00:39:30.818601 kubelet[3657]: I0214 00:39:30.818582 3657 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 00:39:30.820227 kubelet[3657]: E0214 00:39:30.820212 3657 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 00:39:30.820307 kubelet[3657]: E0214 00:39:30.820076 3657 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.162.217:6443/api/v1/namespaces/default/events\": dial tcp 147.28.162.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-a-a04cd882ea.1823ec2fb977fa59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-a-a04cd882ea,UID:ci-4081.3.1-a-a04cd882ea,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-a-a04cd882ea,},FirstTimestamp:2025-02-14 00:39:30.814409305 +0000 UTC m=+0.857907201,LastTimestamp:2025-02-14 00:39:30.814409305 +0000 UTC m=+0.857907201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-a-a04cd882ea,}" Feb 14 00:39:30.820707 kubelet[3657]: I0214 00:39:30.820690 3657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 00:39:30.820707 kubelet[3657]: I0214 00:39:30.820698 3657 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 00:39:30.820757 kubelet[3657]: I0214 00:39:30.820736 3657 server.go:490] "Adding debug handlers to kubelet server" Feb 14 00:39:30.820780 kubelet[3657]: I0214 00:39:30.820757 3657 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 14 00:39:30.820815 kubelet[3657]: I0214 00:39:30.820793 3657 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 00:39:30.820835 kubelet[3657]: I0214 00:39:30.820826 3657 reconciler.go:26] "Reconciler: start to sync state" Feb 14 00:39:30.820857 kubelet[3657]: E0214 00:39:30.820833 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:30.821067 kubelet[3657]: E0214 00:39:30.821048 3657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-a04cd882ea?timeout=10s\": dial tcp 147.28.162.217:6443: connect: connection refused" interval="200ms" Feb 14 00:39:30.821120 kubelet[3657]: W0214 00:39:30.821086 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.162.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:30.821147 kubelet[3657]: E0214 00:39:30.821132 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.162.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:30.821209 kubelet[3657]: I0214 00:39:30.821194 3657 factory.go:221] Registration of the systemd container factory successfully Feb 14 00:39:30.821301 kubelet[3657]: I0214 00:39:30.821286 3657 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 
00:39:30.821906 kubelet[3657]: I0214 00:39:30.821888 3657 factory.go:221] Registration of the containerd container factory successfully Feb 14 00:39:30.834124 kubelet[3657]: I0214 00:39:30.834090 3657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 00:39:30.835069 kubelet[3657]: I0214 00:39:30.835056 3657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 00:39:30.835089 kubelet[3657]: I0214 00:39:30.835073 3657 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 14 00:39:30.835107 kubelet[3657]: I0214 00:39:30.835091 3657 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 14 00:39:30.835107 kubelet[3657]: I0214 00:39:30.835099 3657 kubelet.go:2388] "Starting kubelet main sync loop" Feb 14 00:39:30.835154 kubelet[3657]: E0214 00:39:30.835138 3657 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 00:39:30.835489 kubelet[3657]: W0214 00:39:30.835445 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.162.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:30.835517 kubelet[3657]: E0214 00:39:30.835501 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.162.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:30.921205 kubelet[3657]: E0214 00:39:30.921173 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:30.935336 kubelet[3657]: E0214 00:39:30.935310 3657 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 14 00:39:31.021295 kubelet[3657]: E0214 00:39:31.021224 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:31.021603 kubelet[3657]: I0214 00:39:31.021391 3657 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 14 00:39:31.021603 kubelet[3657]: I0214 00:39:31.021406 3657 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 14 00:39:31.021603 kubelet[3657]: E0214 00:39:31.021404 3657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-a04cd882ea?timeout=10s\": dial tcp 147.28.162.217:6443: connect: connection refused" interval="400ms" Feb 14 00:39:31.021603 kubelet[3657]: I0214 00:39:31.021423 3657 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:39:31.022482 kubelet[3657]: I0214 00:39:31.022467 3657 policy_none.go:49] "None policy: Start" Feb 14 00:39:31.022512 kubelet[3657]: I0214 00:39:31.022488 3657 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 14 00:39:31.022512 kubelet[3657]: I0214 00:39:31.022499 3657 state_mem.go:35] "Initializing new in-memory state store" Feb 14 00:39:31.026012 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 14 00:39:31.037966 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 14 00:39:31.040391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 14 00:39:31.051370 kubelet[3657]: I0214 00:39:31.051348 3657 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 00:39:31.051545 kubelet[3657]: I0214 00:39:31.051528 3657 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 00:39:31.051584 kubelet[3657]: I0214 00:39:31.051543 3657 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 00:39:31.051724 kubelet[3657]: I0214 00:39:31.051707 3657 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 00:39:31.052145 kubelet[3657]: E0214 00:39:31.052123 3657 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 14 00:39:31.052171 kubelet[3657]: E0214 00:39:31.052163 3657 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:31.144060 systemd[1]: Created slice kubepods-burstable-pod5d4e276964cff697b93cbda9133f5f7c.slice - libcontainer container kubepods-burstable-pod5d4e276964cff697b93cbda9133f5f7c.slice. Feb 14 00:39:31.153192 kubelet[3657]: I0214 00:39:31.153163 3657 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.153538 kubelet[3657]: E0214 00:39:31.153515 3657 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.162.217:6443/api/v1/nodes\": dial tcp 147.28.162.217:6443: connect: connection refused" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.164050 kubelet[3657]: E0214 00:39:31.164020 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.166297 systemd[1]: Created slice kubepods-burstable-pod4a5defb7c9683c7a8bf504ee80a683a4.slice - libcontainer container kubepods-burstable-pod4a5defb7c9683c7a8bf504ee80a683a4.slice. Feb 14 00:39:31.178653 kubelet[3657]: E0214 00:39:31.178628 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.180902 systemd[1]: Created slice kubepods-burstable-pod0181228ecdec0b546e02015be7105a94.slice - libcontainer container kubepods-burstable-pod0181228ecdec0b546e02015be7105a94.slice. 
Feb 14 00:39:31.182103 kubelet[3657]: E0214 00:39:31.182086 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222426 kubelet[3657]: I0214 00:39:31.222397 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222478 kubelet[3657]: I0214 00:39:31.222431 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0181228ecdec0b546e02015be7105a94-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-a04cd882ea\" (UID: \"0181228ecdec0b546e02015be7105a94\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222478 kubelet[3657]: I0214 00:39:31.222454 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222478 kubelet[3657]: I0214 00:39:31.222473 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222618 kubelet[3657]: I0214 00:39:31.222493 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222618 kubelet[3657]: I0214 00:39:31.222515 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222618 kubelet[3657]: I0214 00:39:31.222535 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222618 kubelet[3657]: I0214 00:39:31.222553 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-ca-certs\") pod 
\"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.222618 kubelet[3657]: I0214 00:39:31.222573 3657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.355997 kubelet[3657]: I0214 00:39:31.355935 3657 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.356225 kubelet[3657]: E0214 00:39:31.356201 3657 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.28.162.217:6443/api/v1/nodes\": dial tcp 147.28.162.217:6443: connect: connection refused" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.422323 kubelet[3657]: E0214 00:39:31.422279 3657 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-a-a04cd882ea?timeout=10s\": dial tcp 147.28.162.217:6443: connect: connection refused" interval="800ms" Feb 14 00:39:31.465328 containerd[2680]: time="2025-02-14T00:39:31.465275225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-a04cd882ea,Uid:5d4e276964cff697b93cbda9133f5f7c,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:31.479791 containerd[2680]: time="2025-02-14T00:39:31.479758625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-a04cd882ea,Uid:4a5defb7c9683c7a8bf504ee80a683a4,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:31.483196 containerd[2680]: time="2025-02-14T00:39:31.483163465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-a04cd882ea,Uid:0181228ecdec0b546e02015be7105a94,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:31.719074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536596064.mount: Deactivated successfully. 
Feb 14 00:39:31.719679 containerd[2680]: time="2025-02-14T00:39:31.719655425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:39:31.720187 containerd[2680]: time="2025-02-14T00:39:31.720166425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 14 00:39:31.720373 containerd[2680]: time="2025-02-14T00:39:31.720352945Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:39:31.720426 containerd[2680]: time="2025-02-14T00:39:31.720405865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 14 00:39:31.721428 containerd[2680]: time="2025-02-14T00:39:31.721387865Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:39:31.722858 containerd[2680]: time="2025-02-14T00:39:31.722830185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:39:31.726172 containerd[2680]: time="2025-02-14T00:39:31.726136145Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:39:31.727036 containerd[2680]: time="2025-02-14T00:39:31.727012745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 243.79384ms" Feb 14 00:39:31.728687 containerd[2680]: time="2025-02-14T00:39:31.728669465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 14 00:39:31.729482 containerd[2680]: time="2025-02-14T00:39:31.729451265Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 264.07376ms" Feb 14 00:39:31.730157 containerd[2680]: time="2025-02-14T00:39:31.730133545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 250.3172ms" Feb 14 00:39:31.758749 kubelet[3657]: I0214 00:39:31.758725 3657 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.759086 kubelet[3657]: E0214 00:39:31.759058 3657 kubelet_node_status.go:108] "Unable to register node with API server" err="Post 
\"https://147.28.162.217:6443/api/v1/nodes\": dial tcp 147.28.162.217:6443: connect: connection refused" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:31.851880 containerd[2680]: time="2025-02-14T00:39:31.851816705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:31.851880 containerd[2680]: time="2025-02-14T00:39:31.851870305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:31.851880 containerd[2680]: time="2025-02-14T00:39:31.851812585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:31.851946 containerd[2680]: time="2025-02-14T00:39:31.851870025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:31.851946 containerd[2680]: time="2025-02-14T00:39:31.851882025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.851946 containerd[2680]: time="2025-02-14T00:39:31.851882145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.851946 containerd[2680]: time="2025-02-14T00:39:31.851874625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:31.851946 containerd[2680]: time="2025-02-14T00:39:31.851929225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:31.852029 containerd[2680]: time="2025-02-14T00:39:31.851942225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.853111 containerd[2680]: time="2025-02-14T00:39:31.853089705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.853131 containerd[2680]: time="2025-02-14T00:39:31.853107665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.853131 containerd[2680]: time="2025-02-14T00:39:31.853105545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:31.881894 systemd[1]: Started cri-containerd-4d1031126d1236a83df4369cc136270b1a49174d5d5a0d369d5e1d59a4cb3892.scope - libcontainer container 4d1031126d1236a83df4369cc136270b1a49174d5d5a0d369d5e1d59a4cb3892. Feb 14 00:39:31.883190 systemd[1]: Started cri-containerd-c802a510da4e693692f2c261c7234052bb0542f57e6b195136d2d5aa2f026607.scope - libcontainer container c802a510da4e693692f2c261c7234052bb0542f57e6b195136d2d5aa2f026607. Feb 14 00:39:31.884494 systemd[1]: Started cri-containerd-d6c35ccbb4044383443fa8d6f3d55ff67a3d16a3aae287262f6c91388d79c2a2.scope - libcontainer container d6c35ccbb4044383443fa8d6f3d55ff67a3d16a3aae287262f6c91388d79c2a2. 
Feb 14 00:39:31.904676 containerd[2680]: time="2025-02-14T00:39:31.904644145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-a-a04cd882ea,Uid:0181228ecdec0b546e02015be7105a94,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1031126d1236a83df4369cc136270b1a49174d5d5a0d369d5e1d59a4cb3892\"" Feb 14 00:39:31.905245 containerd[2680]: time="2025-02-14T00:39:31.905222545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-a-a04cd882ea,Uid:4a5defb7c9683c7a8bf504ee80a683a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c802a510da4e693692f2c261c7234052bb0542f57e6b195136d2d5aa2f026607\"" Feb 14 00:39:31.907261 containerd[2680]: time="2025-02-14T00:39:31.907237625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-a-a04cd882ea,Uid:5d4e276964cff697b93cbda9133f5f7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6c35ccbb4044383443fa8d6f3d55ff67a3d16a3aae287262f6c91388d79c2a2\"" Feb 14 00:39:31.907839 containerd[2680]: time="2025-02-14T00:39:31.907812865Z" level=info msg="CreateContainer within sandbox \"4d1031126d1236a83df4369cc136270b1a49174d5d5a0d369d5e1d59a4cb3892\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 14 00:39:31.908357 containerd[2680]: time="2025-02-14T00:39:31.908339025Z" level=info msg="CreateContainer within sandbox \"c802a510da4e693692f2c261c7234052bb0542f57e6b195136d2d5aa2f026607\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 14 00:39:31.909520 containerd[2680]: time="2025-02-14T00:39:31.909495025Z" level=info msg="CreateContainer within sandbox \"d6c35ccbb4044383443fa8d6f3d55ff67a3d16a3aae287262f6c91388d79c2a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 14 00:39:31.915155 containerd[2680]: time="2025-02-14T00:39:31.915103665Z" level=info msg="CreateContainer within sandbox \"4d1031126d1236a83df4369cc136270b1a49174d5d5a0d369d5e1d59a4cb3892\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2367c05fdbeb5f74ddffff202e398c6a4944bca0e96e1e0fc6e861a84727db3\"" Feb 14 00:39:31.915951 containerd[2680]: time="2025-02-14T00:39:31.915896865Z" level=info msg="StartContainer for \"a2367c05fdbeb5f74ddffff202e398c6a4944bca0e96e1e0fc6e861a84727db3\"" Feb 14 00:39:31.916660 containerd[2680]: time="2025-02-14T00:39:31.916598825Z" level=info msg="CreateContainer within sandbox \"c802a510da4e693692f2c261c7234052bb0542f57e6b195136d2d5aa2f026607\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cfc48f7ca0fa35753e48685ce6fb5586779215cda477fb6ca5e5b0b3f664844\"" Feb 14 00:39:31.917408 containerd[2680]: time="2025-02-14T00:39:31.916919665Z" level=info msg="StartContainer for \"3cfc48f7ca0fa35753e48685ce6fb5586779215cda477fb6ca5e5b0b3f664844\"" Feb 14 00:39:31.917679 containerd[2680]: time="2025-02-14T00:39:31.917649465Z" level=info msg="CreateContainer within sandbox \"d6c35ccbb4044383443fa8d6f3d55ff67a3d16a3aae287262f6c91388d79c2a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0126ec22c12d12ff431ac1cfd42407f35cdaf77b196b4899692685d565abd768\"" Feb 14 00:39:31.918078 containerd[2680]: time="2025-02-14T00:39:31.918054265Z" level=info msg="StartContainer for \"0126ec22c12d12ff431ac1cfd42407f35cdaf77b196b4899692685d565abd768\"" Feb 14 00:39:31.944919 systemd[1]: Started cri-containerd-0126ec22c12d12ff431ac1cfd42407f35cdaf77b196b4899692685d565abd768.scope - libcontainer container 
0126ec22c12d12ff431ac1cfd42407f35cdaf77b196b4899692685d565abd768. Feb 14 00:39:31.946125 systemd[1]: Started cri-containerd-3cfc48f7ca0fa35753e48685ce6fb5586779215cda477fb6ca5e5b0b3f664844.scope - libcontainer container 3cfc48f7ca0fa35753e48685ce6fb5586779215cda477fb6ca5e5b0b3f664844. Feb 14 00:39:31.947282 systemd[1]: Started cri-containerd-a2367c05fdbeb5f74ddffff202e398c6a4944bca0e96e1e0fc6e861a84727db3.scope - libcontainer container a2367c05fdbeb5f74ddffff202e398c6a4944bca0e96e1e0fc6e861a84727db3. Feb 14 00:39:31.968985 containerd[2680]: time="2025-02-14T00:39:31.968953985Z" level=info msg="StartContainer for \"0126ec22c12d12ff431ac1cfd42407f35cdaf77b196b4899692685d565abd768\" returns successfully" Feb 14 00:39:31.969711 containerd[2680]: time="2025-02-14T00:39:31.969652705Z" level=info msg="StartContainer for \"a2367c05fdbeb5f74ddffff202e398c6a4944bca0e96e1e0fc6e861a84727db3\" returns successfully" Feb 14 00:39:31.971585 containerd[2680]: time="2025-02-14T00:39:31.971560665Z" level=info msg="StartContainer for \"3cfc48f7ca0fa35753e48685ce6fb5586779215cda477fb6ca5e5b0b3f664844\" returns successfully" Feb 14 00:39:31.981277 kubelet[3657]: W0214 00:39:31.981225 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.162.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:31.981311 kubelet[3657]: E0214 00:39:31.981290 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.162.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:32.003901 kubelet[3657]: W0214 00:39:32.003856 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.162.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:32.003955 kubelet[3657]: E0214 00:39:32.003910 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.162.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:32.005150 kubelet[3657]: W0214 00:39:32.005115 3657 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.162.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-a04cd882ea&limit=500&resourceVersion=0": dial tcp 147.28.162.217:6443: connect: connection refused Feb 14 00:39:32.005191 kubelet[3657]: E0214 00:39:32.005158 3657 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.162.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-a-a04cd882ea&limit=500&resourceVersion=0\": dial tcp 147.28.162.217:6443: connect: connection refused" logger="UnhandledError" Feb 14 00:39:32.561733 kubelet[3657]: I0214 00:39:32.561711 3657 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:32.840884 kubelet[3657]: E0214 00:39:32.840813 3657 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:32.842667 kubelet[3657]: E0214 00:39:32.842640 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:32.843610 kubelet[3657]: E0214 00:39:32.843592 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.187055 kubelet[3657]: E0214 00:39:33.187021 3657 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.290219 kubelet[3657]: I0214 00:39:33.290187 3657 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.290219 kubelet[3657]: E0214 00:39:33.290216 3657 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4081.3.1-a-a04cd882ea\": node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.293302 kubelet[3657]: E0214 00:39:33.293275 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.394111 kubelet[3657]: E0214 00:39:33.394089 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.494566 kubelet[3657]: E0214 00:39:33.494508 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.594678 kubelet[3657]: E0214 00:39:33.594630 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.695252 kubelet[3657]: E0214 00:39:33.695211 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.795642 kubelet[3657]: E0214 00:39:33.795495 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.845219 kubelet[3657]: E0214 00:39:33.845202 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.845349 kubelet[3657]: E0214 00:39:33.845332 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.845408 kubelet[3657]: E0214 00:39:33.845393 3657 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:33.896638 kubelet[3657]: E0214 00:39:33.896598 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:33.997074 kubelet[3657]: E0214 00:39:33.997057 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.097393 
kubelet[3657]: E0214 00:39:34.097308 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.198090 kubelet[3657]: E0214 00:39:34.198069 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.298628 kubelet[3657]: E0214 00:39:34.298606 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.399276 kubelet[3657]: E0214 00:39:34.399260 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.499696 kubelet[3657]: E0214 00:39:34.499680 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.600310 kubelet[3657]: E0214 00:39:34.600292 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.617225 update_engine[2673]: I20250214 00:39:34.617166 2673 update_attempter.cc:509] Updating boot flags... Feb 14 00:39:34.647747 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (4091) Feb 14 00:39:34.675747 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (4094) Feb 14 00:39:34.701177 kubelet[3657]: E0214 00:39:34.701153 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.801889 kubelet[3657]: E0214 00:39:34.801850 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:34.902314 kubelet[3657]: E0214 00:39:34.902278 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:35.002917 kubelet[3657]: E0214 00:39:35.002869 3657 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:35.020965 kubelet[3657]: I0214 00:39:35.020921 3657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.027214 kubelet[3657]: W0214 00:39:35.027190 3657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.027610 kubelet[3657]: I0214 00:39:35.027598 3657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.029925 kubelet[3657]: W0214 00:39:35.029904 3657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.029989 kubelet[3657]: I0214 00:39:35.029978 3657 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.032092 kubelet[3657]: W0214 00:39:35.032070 3657 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.387844 systemd[1]: Reloading requested from client PID 4107 ('systemctl') 
(unit session-11.scope)... Feb 14 00:39:35.387853 systemd[1]: Reloading... Feb 14 00:39:35.449750 zram_generator::config[4150]: No configuration found. Feb 14 00:39:35.539212 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 14 00:39:35.621362 systemd[1]: Reloading finished in 233 ms. Feb 14 00:39:35.654520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:35.666591 systemd[1]: kubelet.service: Deactivated successfully. Feb 14 00:39:35.667382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:35.667531 systemd[1]: kubelet.service: Consumed 1.292s CPU time, 149.5M memory peak, 0B memory swap peak. Feb 14 00:39:35.677040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 14 00:39:35.781955 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 14 00:39:35.785700 (kubelet)[4209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 14 00:39:35.818252 kubelet[4209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:39:35.818252 kubelet[4209]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 14 00:39:35.818252 kubelet[4209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 00:39:35.818541 kubelet[4209]: I0214 00:39:35.818322 4209 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 14 00:39:35.823461 kubelet[4209]: I0214 00:39:35.823439 4209 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 14 00:39:35.823461 kubelet[4209]: I0214 00:39:35.823460 4209 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 14 00:39:35.823682 kubelet[4209]: I0214 00:39:35.823671 4209 server.go:954] "Client rotation is on, will bootstrap in background" Feb 14 00:39:35.824813 kubelet[4209]: I0214 00:39:35.824801 4209 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 14 00:39:35.826868 kubelet[4209]: I0214 00:39:35.826842 4209 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 14 00:39:35.829279 kubelet[4209]: E0214 00:39:35.829259 4209 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 14 00:39:35.829302 kubelet[4209]: I0214 00:39:35.829283 4209 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 14 00:39:35.847972 kubelet[4209]: I0214 00:39:35.847941 4209 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 14 00:39:35.848246 kubelet[4209]: I0214 00:39:35.848220 4209 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 14 00:39:35.848399 kubelet[4209]: I0214 00:39:35.848248 4209 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-a-a04cd882ea","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 14 00:39:35.848462 kubelet[4209]: I0214 00:39:35.848411 4209 topology_manager.go:138] "Creating topology manager with none policy" Feb 14 00:39:35.848462 kubelet[4209]: I0214 00:39:35.848419 4209 container_manager_linux.go:304] "Creating device plugin manager" Feb 14 00:39:35.848540 kubelet[4209]: I0214 00:39:35.848480 4209 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:39:35.848802 kubelet[4209]: I0214 00:39:35.848792 4209 kubelet.go:446] "Attempting to sync node with API server" Feb 14 00:39:35.848827 kubelet[4209]: I0214 00:39:35.848806 4209 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 14 00:39:35.848827 kubelet[4209]: I0214 00:39:35.848822 4209 kubelet.go:352] "Adding apiserver pod source" Feb 14 00:39:35.848864 kubelet[4209]: I0214 00:39:35.848833 4209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 14 00:39:35.849392 kubelet[4209]: I0214 00:39:35.849374 4209 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 14 00:39:35.850966 kubelet[4209]: I0214 00:39:35.850948 4209 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 14 00:39:35.851398 kubelet[4209]: I0214 00:39:35.851386 4209 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 14 00:39:35.851422 kubelet[4209]: I0214 00:39:35.851419 4209 server.go:1287] "Started kubelet" Feb 14 00:39:35.851572 kubelet[4209]: I0214 00:39:35.851540 4209 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 14 00:39:35.851654 kubelet[4209]: I0214 00:39:35.851605 4209 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 14 00:39:35.851831 kubelet[4209]: I0214 00:39:35.851820 4209 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 14 00:39:35.852490 kubelet[4209]: I0214 00:39:35.852476 4209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 00:39:35.852490 kubelet[4209]: I0214 00:39:35.852479 4209 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 14 00:39:35.852675 kubelet[4209]: I0214 00:39:35.852598 4209 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 14 00:39:35.852675 kubelet[4209]: E0214 00:39:35.852597 4209 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.1-a-a04cd882ea\" not found" Feb 14 00:39:35.852675 kubelet[4209]: I0214 00:39:35.852649 4209 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 14 00:39:35.852758 kubelet[4209]: I0214 00:39:35.852707 4209 reconciler.go:26] "Reconciler: start to sync state" Feb 14 00:39:35.852950 kubelet[4209]: I0214 00:39:35.852933 4209 factory.go:221] Registration of the systemd container factory successfully Feb 14 00:39:35.853006 kubelet[4209]: E0214 00:39:35.852986 4209 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 14 00:39:35.853141 kubelet[4209]: I0214 00:39:35.853119 4209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 14 00:39:35.853550 kubelet[4209]: I0214 00:39:35.853533 4209 server.go:490] "Adding debug handlers to kubelet server" Feb 14 00:39:35.853882 kubelet[4209]: I0214 00:39:35.853862 4209 factory.go:221] Registration of the containerd container factory successfully Feb 14 00:39:35.860203 kubelet[4209]: I0214 00:39:35.860172 4209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 00:39:35.861051 kubelet[4209]: I0214 00:39:35.861032 4209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 14 00:39:35.861082 kubelet[4209]: I0214 00:39:35.861057 4209 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 14 00:39:35.861082 kubelet[4209]: I0214 00:39:35.861075 4209 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 14 00:39:35.861082 kubelet[4209]: I0214 00:39:35.861082 4209 kubelet.go:2388] "Starting kubelet main sync loop" Feb 14 00:39:35.861151 kubelet[4209]: E0214 00:39:35.861134 4209 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 00:39:35.884641 kubelet[4209]: I0214 00:39:35.884616 4209 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 14 00:39:35.884641 kubelet[4209]: I0214 00:39:35.884635 4209 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 14 00:39:35.884761 kubelet[4209]: I0214 00:39:35.884653 4209 state_mem.go:36] "Initialized new in-memory state store" Feb 14 00:39:35.884820 kubelet[4209]: I0214 00:39:35.884808 4209 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 14 00:39:35.884839 kubelet[4209]: I0214 00:39:35.884820 4209 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 14 00:39:35.884856 kubelet[4209]: I0214 00:39:35.884841 4209 policy_none.go:49] "None policy: Start" Feb 14 00:39:35.884856 kubelet[4209]: I0214 00:39:35.884849 4209 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 14 00:39:35.884892 kubelet[4209]: I0214 00:39:35.884858 4209 state_mem.go:35] "Initializing new in-memory state store" Feb 14 00:39:35.884972 kubelet[4209]: I0214 00:39:35.884962 4209 state_mem.go:75] "Updated machine memory state" Feb 14 00:39:35.887745 kubelet[4209]: I0214 00:39:35.887719 4209 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 00:39:35.887904 kubelet[4209]: I0214 00:39:35.887892 4209 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 00:39:35.887933 kubelet[4209]: I0214 00:39:35.887905 4209 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 00:39:35.888043 kubelet[4209]: I0214 00:39:35.888028 4209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 00:39:35.888619 kubelet[4209]: E0214 00:39:35.888599 4209 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 14 00:39:35.961969 kubelet[4209]: I0214 00:39:35.961881 4209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.962045 kubelet[4209]: I0214 00:39:35.961895 4209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.962095 kubelet[4209]: I0214 00:39:35.962063 4209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.971148 kubelet[4209]: W0214 00:39:35.971126 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.971197 kubelet[4209]: W0214 00:39:35.971167 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.971237 kubelet[4209]: W0214 00:39:35.971197 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:35.971237 kubelet[4209]: E0214 00:39:35.971212 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.971307 kubelet[4209]: E0214 00:39:35.971240 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.971307 kubelet[4209]: E0214 00:39:35.971257 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.990832 kubelet[4209]: I0214 00:39:35.990817 4209 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.994518 kubelet[4209]: I0214 00:39:35.994496 4209 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:35.994573 kubelet[4209]: I0214 00:39:35.994555 4209 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154508 kubelet[4209]: I0214 00:39:36.154476 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154620 kubelet[4209]: I0214 00:39:36.154515 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154620 kubelet[4209]: I0214 00:39:36.154537 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/5d4e276964cff697b93cbda9133f5f7c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" (UID: \"5d4e276964cff697b93cbda9133f5f7c\") " pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154620 kubelet[4209]: I0214 00:39:36.154556 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154620 kubelet[4209]: I0214 00:39:36.154573 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0181228ecdec0b546e02015be7105a94-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-a-a04cd882ea\" (UID: \"0181228ecdec0b546e02015be7105a94\") " pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154772 kubelet[4209]: I0214 00:39:36.154638 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154772 kubelet[4209]: I0214 00:39:36.154692 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154772 kubelet[4209]: I0214 00:39:36.154727 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.154772 kubelet[4209]: I0214 00:39:36.154764 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a5defb7c9683c7a8bf504ee80a683a4-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" (UID: \"4a5defb7c9683c7a8bf504ee80a683a4\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.849541 kubelet[4209]: I0214 00:39:36.849503 4209 apiserver.go:52] "Watching apiserver" Feb 14 00:39:36.852778 kubelet[4209]: I0214 00:39:36.852758 4209 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 14 00:39:36.865896 kubelet[4209]: I0214 00:39:36.865873 4209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.866580 kubelet[4209]: I0214 00:39:36.866553 4209 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.866622 kubelet[4209]: I0214 00:39:36.866518 4209 kubelet.go:3200] "Creating a 
mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.869702 kubelet[4209]: W0214 00:39:36.869675 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:36.869763 kubelet[4209]: E0214 00:39:36.869750 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.869840 kubelet[4209]: W0214 00:39:36.869819 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:36.869947 kubelet[4209]: E0214 00:39:36.869928 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.870053 kubelet[4209]: W0214 00:39:36.869938 4209 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 14 00:39:36.870104 kubelet[4209]: E0214 00:39:36.870089 4209 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.1-a-a04cd882ea\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" Feb 14 00:39:36.880590 kubelet[4209]: I0214 00:39:36.880547 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-a-a04cd882ea" podStartSLOduration=1.880523185 podStartE2EDuration="1.880523185s" podCreationTimestamp="2025-02-14 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:39:36.880501625 +0000 UTC m=+1.091920961" watchObservedRunningTime="2025-02-14 00:39:36.880523185 +0000 UTC m=+1.091942521" Feb 14 00:39:36.893780 kubelet[4209]: I0214 00:39:36.893741 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-a-a04cd882ea" podStartSLOduration=1.893723705 podStartE2EDuration="1.893723705s" podCreationTimestamp="2025-02-14 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:39:36.886639505 +0000 UTC m=+1.098058841" watchObservedRunningTime="2025-02-14 00:39:36.893723705 +0000 UTC m=+1.105143041" Feb 14 00:39:36.899841 kubelet[4209]: I0214 00:39:36.899803 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-a-a04cd882ea" podStartSLOduration=1.8997879850000001 podStartE2EDuration="1.899787985s" podCreationTimestamp="2025-02-14 00:39:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:39:36.893709545 +0000 UTC m=+1.105128881" watchObservedRunningTime="2025-02-14 00:39:36.899787985 +0000 UTC m=+1.111207321" Feb 14 00:39:40.433727 sudo[2973]: pam_unix(sudo:session): session closed for user root Feb 14 00:39:40.499196 sshd[2970]: pam_unix(sshd:session): session closed for user core Feb 14 00:39:40.502112 systemd[1]: sshd@8-147.28.162.217:22-139.178.68.195:56404.service: Deactivated successfully. 
Feb 14 00:39:40.504140 systemd[1]: session-11.scope: Deactivated successfully. Feb 14 00:39:40.504318 systemd[1]: session-11.scope: Consumed 7.218s CPU time, 170.7M memory peak, 0B memory swap peak. Feb 14 00:39:40.504656 systemd-logind[2664]: Session 11 logged out. Waiting for processes to exit. Feb 14 00:39:40.505252 systemd-logind[2664]: Removed session 11. Feb 14 00:39:41.829496 kubelet[4209]: I0214 00:39:41.829461 4209 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 14 00:39:41.829856 containerd[2680]: time="2025-02-14T00:39:41.829784545Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 14 00:39:41.830012 kubelet[4209]: I0214 00:39:41.829922 4209 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 14 00:39:42.793971 systemd[1]: Created slice kubepods-besteffort-pod647d22a8_5266_4bb9_a15c_893e323ce242.slice - libcontainer container kubepods-besteffort-pod647d22a8_5266_4bb9_a15c_893e323ce242.slice. Feb 14 00:39:42.896662 kubelet[4209]: I0214 00:39:42.896630 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/647d22a8-5266-4bb9-a15c-893e323ce242-kube-proxy\") pod \"kube-proxy-5pcpq\" (UID: \"647d22a8-5266-4bb9-a15c-893e323ce242\") " pod="kube-system/kube-proxy-5pcpq" Feb 14 00:39:42.896662 kubelet[4209]: I0214 00:39:42.896660 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j24t\" (UniqueName: \"kubernetes.io/projected/647d22a8-5266-4bb9-a15c-893e323ce242-kube-api-access-5j24t\") pod \"kube-proxy-5pcpq\" (UID: \"647d22a8-5266-4bb9-a15c-893e323ce242\") " pod="kube-system/kube-proxy-5pcpq" Feb 14 00:39:42.897053 kubelet[4209]: I0214 00:39:42.896683 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/647d22a8-5266-4bb9-a15c-893e323ce242-xtables-lock\") pod \"kube-proxy-5pcpq\" (UID: \"647d22a8-5266-4bb9-a15c-893e323ce242\") " pod="kube-system/kube-proxy-5pcpq" Feb 14 00:39:42.897053 kubelet[4209]: I0214 00:39:42.896699 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/647d22a8-5266-4bb9-a15c-893e323ce242-lib-modules\") pod \"kube-proxy-5pcpq\" (UID: \"647d22a8-5266-4bb9-a15c-893e323ce242\") " pod="kube-system/kube-proxy-5pcpq" Feb 14 00:39:42.912375 systemd[1]: Created slice kubepods-besteffort-pod8ac03e84_62a4_4b68_a4f0_94203be1f29d.slice - libcontainer container kubepods-besteffort-pod8ac03e84_62a4_4b68_a4f0_94203be1f29d.slice. 
Feb 14 00:39:42.997083 kubelet[4209]: I0214 00:39:42.996972 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ac03e84-62a4-4b68-a4f0-94203be1f29d-var-lib-calico\") pod \"tigera-operator-7d68577dc5-gwh8w\" (UID: \"8ac03e84-62a4-4b68-a4f0-94203be1f29d\") " pod="tigera-operator/tigera-operator-7d68577dc5-gwh8w" Feb 14 00:39:42.997083 kubelet[4209]: I0214 00:39:42.997020 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7h4c\" (UniqueName: \"kubernetes.io/projected/8ac03e84-62a4-4b68-a4f0-94203be1f29d-kube-api-access-r7h4c\") pod \"tigera-operator-7d68577dc5-gwh8w\" (UID: \"8ac03e84-62a4-4b68-a4f0-94203be1f29d\") " pod="tigera-operator/tigera-operator-7d68577dc5-gwh8w" Feb 14 00:39:43.110365 containerd[2680]: time="2025-02-14T00:39:43.110290225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pcpq,Uid:647d22a8-5266-4bb9-a15c-893e323ce242,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:43.122373 containerd[2680]: time="2025-02-14T00:39:43.122312905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:43.122373 containerd[2680]: time="2025-02-14T00:39:43.122366625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:43.122431 containerd[2680]: time="2025-02-14T00:39:43.122378425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:43.122473 containerd[2680]: time="2025-02-14T00:39:43.122457945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:43.144916 systemd[1]: Started cri-containerd-82be3790329930bf59200c728bbda8a47b12687238177c0d7f76725dccce731a.scope - libcontainer container 82be3790329930bf59200c728bbda8a47b12687238177c0d7f76725dccce731a. Feb 14 00:39:43.160101 containerd[2680]: time="2025-02-14T00:39:43.160075385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pcpq,Uid:647d22a8-5266-4bb9-a15c-893e323ce242,Namespace:kube-system,Attempt:0,} returns sandbox id \"82be3790329930bf59200c728bbda8a47b12687238177c0d7f76725dccce731a\"" Feb 14 00:39:43.162029 containerd[2680]: time="2025-02-14T00:39:43.162006225Z" level=info msg="CreateContainer within sandbox \"82be3790329930bf59200c728bbda8a47b12687238177c0d7f76725dccce731a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 14 00:39:43.168496 containerd[2680]: time="2025-02-14T00:39:43.168461425Z" level=info msg="CreateContainer within sandbox \"82be3790329930bf59200c728bbda8a47b12687238177c0d7f76725dccce731a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bb447c9ba44f5b085938db059245dc2e1b78bf923dfe0ddc1de551b58a694082\"" Feb 14 00:39:43.168936 containerd[2680]: time="2025-02-14T00:39:43.168915065Z" level=info msg="StartContainer for \"bb447c9ba44f5b085938db059245dc2e1b78bf923dfe0ddc1de551b58a694082\"" Feb 14 00:39:43.196914 systemd[1]: Started cri-containerd-bb447c9ba44f5b085938db059245dc2e1b78bf923dfe0ddc1de551b58a694082.scope - libcontainer container bb447c9ba44f5b085938db059245dc2e1b78bf923dfe0ddc1de551b58a694082. 
Feb 14 00:39:43.214843 containerd[2680]: time="2025-02-14T00:39:43.214811425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-gwh8w,Uid:8ac03e84-62a4-4b68-a4f0-94203be1f29d,Namespace:tigera-operator,Attempt:0,}" Feb 14 00:39:43.215683 containerd[2680]: time="2025-02-14T00:39:43.215660185Z" level=info msg="StartContainer for \"bb447c9ba44f5b085938db059245dc2e1b78bf923dfe0ddc1de551b58a694082\" returns successfully" Feb 14 00:39:43.227428 containerd[2680]: time="2025-02-14T00:39:43.227369265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:43.227457 containerd[2680]: time="2025-02-14T00:39:43.227422265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:43.227457 containerd[2680]: time="2025-02-14T00:39:43.227434665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:43.227527 containerd[2680]: time="2025-02-14T00:39:43.227509905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:43.247856 systemd[1]: Started cri-containerd-02177653a519144d4a51721b5a30a8813628e869a9422bb402cce68bc177cd5e.scope - libcontainer container 02177653a519144d4a51721b5a30a8813628e869a9422bb402cce68bc177cd5e. Feb 14 00:39:43.270592 containerd[2680]: time="2025-02-14T00:39:43.270546345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-gwh8w,Uid:8ac03e84-62a4-4b68-a4f0-94203be1f29d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"02177653a519144d4a51721b5a30a8813628e869a9422bb402cce68bc177cd5e\"" Feb 14 00:39:43.271727 containerd[2680]: time="2025-02-14T00:39:43.271698385Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 14 00:39:43.881687 kubelet[4209]: I0214 00:39:43.881611 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5pcpq" podStartSLOduration=1.881594465 podStartE2EDuration="1.881594465s" podCreationTimestamp="2025-02-14 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:39:43.881455345 +0000 UTC m=+8.092874721" watchObservedRunningTime="2025-02-14 00:39:43.881594465 +0000 UTC m=+8.093013761" Feb 14 00:39:46.737792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099468321.mount: Deactivated successfully. 
Feb 14 00:39:47.070530 containerd[2680]: time="2025-02-14T00:39:47.070432528Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:47.070530 containerd[2680]: time="2025-02-14T00:39:47.070477835Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 14 00:39:47.071262 containerd[2680]: time="2025-02-14T00:39:47.071236457Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:47.073245 containerd[2680]: time="2025-02-14T00:39:47.073222485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:47.074001 containerd[2680]: time="2025-02-14T00:39:47.073979427Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.802250845s" Feb 14 00:39:47.074031 containerd[2680]: time="2025-02-14T00:39:47.074008539Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 14 00:39:47.075545 containerd[2680]: time="2025-02-14T00:39:47.075521863Z" level=info msg="CreateContainer within sandbox \"02177653a519144d4a51721b5a30a8813628e869a9422bb402cce68bc177cd5e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 14 00:39:47.080103 containerd[2680]: time="2025-02-14T00:39:47.080077872Z" level=info msg="CreateContainer within sandbox \"02177653a519144d4a51721b5a30a8813628e869a9422bb402cce68bc177cd5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3185662e04c21c17f118509d328e8e8815fbdc8bb34172ef9ddb41ec7dd55331\"" Feb 14 00:39:47.080480 containerd[2680]: time="2025-02-14T00:39:47.080453764Z" level=info msg="StartContainer for \"3185662e04c21c17f118509d328e8e8815fbdc8bb34172ef9ddb41ec7dd55331\"" Feb 14 00:39:47.107901 systemd[1]: Started cri-containerd-3185662e04c21c17f118509d328e8e8815fbdc8bb34172ef9ddb41ec7dd55331.scope - libcontainer container 3185662e04c21c17f118509d328e8e8815fbdc8bb34172ef9ddb41ec7dd55331. 
Feb 14 00:39:47.123761 containerd[2680]: time="2025-02-14T00:39:47.123725590Z" level=info msg="StartContainer for \"3185662e04c21c17f118509d328e8e8815fbdc8bb34172ef9ddb41ec7dd55331\" returns successfully" Feb 14 00:39:47.887799 kubelet[4209]: I0214 00:39:47.887752 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-gwh8w" podStartSLOduration=2.084494226 podStartE2EDuration="5.887728186s" podCreationTimestamp="2025-02-14 00:39:42 +0000 UTC" firstStartedPulling="2025-02-14 00:39:43.271310105 +0000 UTC m=+7.482729441" lastFinishedPulling="2025-02-14 00:39:47.074544065 +0000 UTC m=+11.285963401" observedRunningTime="2025-02-14 00:39:47.887635853 +0000 UTC m=+12.099055189" watchObservedRunningTime="2025-02-14 00:39:47.887728186 +0000 UTC m=+12.099147522" Feb 14 00:39:50.802715 systemd[1]: Created slice kubepods-besteffort-podb1acaad0_cf3c_4c32_887a_98a0e2b2deee.slice - libcontainer container kubepods-besteffort-podb1acaad0_cf3c_4c32_887a_98a0e2b2deee.slice. Feb 14 00:39:50.807097 systemd[1]: Created slice kubepods-besteffort-poda71aae71_241a_49b2_9ead_4e7eaed5d123.slice - libcontainer container kubepods-besteffort-poda71aae71_241a_49b2_9ead_4e7eaed5d123.slice. Feb 14 00:39:50.833166 kubelet[4209]: E0214 00:39:50.833125 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffbcc" podUID="6ccbd114-de29-425f-ab5d-f82920c737e3" Feb 14 00:39:50.844651 kubelet[4209]: I0214 00:39:50.844603 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-cni-net-dir\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844760 kubelet[4209]: I0214 00:39:50.844665 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1acaad0-cf3c-4c32-887a-98a0e2b2deee-tigera-ca-bundle\") pod \"calico-typha-766dfcf4d-7rslc\" (UID: \"b1acaad0-cf3c-4c32-887a-98a0e2b2deee\") " pod="calico-system/calico-typha-766dfcf4d-7rslc" Feb 14 00:39:50.844760 kubelet[4209]: I0214 00:39:50.844698 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71aae71-241a-49b2-9ead-4e7eaed5d123-tigera-ca-bundle\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844760 kubelet[4209]: I0214 00:39:50.844727 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-cni-bin-dir\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844836 kubelet[4209]: I0214 00:39:50.844766 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b1acaad0-cf3c-4c32-887a-98a0e2b2deee-typha-certs\") pod \"calico-typha-766dfcf4d-7rslc\" (UID: \"b1acaad0-cf3c-4c32-887a-98a0e2b2deee\") " 
pod="calico-system/calico-typha-766dfcf4d-7rslc" Feb 14 00:39:50.844836 kubelet[4209]: I0214 00:39:50.844790 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-var-run-calico\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844836 kubelet[4209]: I0214 00:39:50.844804 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-var-lib-calico\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844836 kubelet[4209]: I0214 00:39:50.844819 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-xtables-lock\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.844914 kubelet[4209]: I0214 00:39:50.844836 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lcg\" (UniqueName: \"kubernetes.io/projected/a71aae71-241a-49b2-9ead-4e7eaed5d123-kube-api-access-f8lcg\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.845236 kubelet[4209]: I0214 00:39:50.844915 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrwr\" (UniqueName: \"kubernetes.io/projected/b1acaad0-cf3c-4c32-887a-98a0e2b2deee-kube-api-access-zqrwr\") pod \"calico-typha-766dfcf4d-7rslc\" (UID: \"b1acaad0-cf3c-4c32-887a-98a0e2b2deee\") " pod="calico-system/calico-typha-766dfcf4d-7rslc" Feb 14 00:39:50.845236 kubelet[4209]: I0214 00:39:50.844952 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-lib-modules\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.845236 kubelet[4209]: I0214 00:39:50.844971 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-policysync\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.845236 kubelet[4209]: I0214 00:39:50.844990 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-cni-log-dir\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.845236 kubelet[4209]: I0214 00:39:50.845005 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a71aae71-241a-49b2-9ead-4e7eaed5d123-flexvol-driver-host\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " 
pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.845380 kubelet[4209]: I0214 00:39:50.845046 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a71aae71-241a-49b2-9ead-4e7eaed5d123-node-certs\") pod \"calico-node-r6w99\" (UID: \"a71aae71-241a-49b2-9ead-4e7eaed5d123\") " pod="calico-system/calico-node-r6w99" Feb 14 00:39:50.945896 kubelet[4209]: I0214 00:39:50.945792 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6ccbd114-de29-425f-ab5d-f82920c737e3-varrun\") pod \"csi-node-driver-ffbcc\" (UID: \"6ccbd114-de29-425f-ab5d-f82920c737e3\") " pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:50.945896 kubelet[4209]: I0214 00:39:50.945875 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6ccbd114-de29-425f-ab5d-f82920c737e3-registration-dir\") pod \"csi-node-driver-ffbcc\" (UID: \"6ccbd114-de29-425f-ab5d-f82920c737e3\") " pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:50.945896 kubelet[4209]: I0214 00:39:50.945895 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ccbd114-de29-425f-ab5d-f82920c737e3-kubelet-dir\") pod \"csi-node-driver-ffbcc\" (UID: \"6ccbd114-de29-425f-ab5d-f82920c737e3\") " pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:50.946028 kubelet[4209]: I0214 00:39:50.945941 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jslb\" (UniqueName: \"kubernetes.io/projected/6ccbd114-de29-425f-ab5d-f82920c737e3-kube-api-access-4jslb\") pod \"csi-node-driver-ffbcc\" (UID: \"6ccbd114-de29-425f-ab5d-f82920c737e3\") " pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:50.946093 kubelet[4209]: I0214 00:39:50.946064 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6ccbd114-de29-425f-ab5d-f82920c737e3-socket-dir\") pod \"csi-node-driver-ffbcc\" (UID: \"6ccbd114-de29-425f-ab5d-f82920c737e3\") " pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:50.946612 kubelet[4209]: E0214 00:39:50.946588 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.946612 kubelet[4209]: W0214 00:39:50.946608 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.946676 kubelet[4209]: E0214 00:39:50.946634 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:50.947213 kubelet[4209]: E0214 00:39:50.946908 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.947213 kubelet[4209]: W0214 00:39:50.946927 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.947213 kubelet[4209]: E0214 00:39:50.946944 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:50.947213 kubelet[4209]: E0214 00:39:50.947217 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.947361 kubelet[4209]: W0214 00:39:50.947234 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.947361 kubelet[4209]: E0214 00:39:50.947250 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:50.948507 kubelet[4209]: E0214 00:39:50.948489 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.948507 kubelet[4209]: W0214 00:39:50.948503 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.948630 kubelet[4209]: E0214 00:39:50.948519 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:50.949204 kubelet[4209]: E0214 00:39:50.948747 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.949204 kubelet[4209]: W0214 00:39:50.948756 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.949204 kubelet[4209]: E0214 00:39:50.948764 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:50.955430 kubelet[4209]: E0214 00:39:50.955415 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.955430 kubelet[4209]: W0214 00:39:50.955430 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.955493 kubelet[4209]: E0214 00:39:50.955446 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:50.955665 kubelet[4209]: E0214 00:39:50.955656 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:50.955688 kubelet[4209]: W0214 00:39:50.955665 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:50.955688 kubelet[4209]: E0214 00:39:50.955673 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.047023 kubelet[4209]: E0214 00:39:51.047001 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.047023 kubelet[4209]: W0214 00:39:51.047015 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.047023 kubelet[4209]: E0214 00:39:51.047027 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.047340 kubelet[4209]: E0214 00:39:51.047331 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.047340 kubelet[4209]: W0214 00:39:51.047339 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.047378 kubelet[4209]: E0214 00:39:51.047350 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.047524 kubelet[4209]: E0214 00:39:51.047514 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.047544 kubelet[4209]: W0214 00:39:51.047525 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.047544 kubelet[4209]: E0214 00:39:51.047537 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.047827 kubelet[4209]: E0214 00:39:51.047812 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.047851 kubelet[4209]: W0214 00:39:51.047828 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.047851 kubelet[4209]: E0214 00:39:51.047846 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:51.048136 kubelet[4209]: E0214 00:39:51.048127 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.048159 kubelet[4209]: W0214 00:39:51.048137 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.048159 kubelet[4209]: E0214 00:39:51.048149 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.048354 kubelet[4209]: E0214 00:39:51.048345 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.048375 kubelet[4209]: W0214 00:39:51.048354 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.048375 kubelet[4209]: E0214 00:39:51.048365 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.048667 kubelet[4209]: E0214 00:39:51.048656 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.048693 kubelet[4209]: W0214 00:39:51.048666 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.048718 kubelet[4209]: E0214 00:39:51.048690 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.048877 kubelet[4209]: E0214 00:39:51.048869 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.048898 kubelet[4209]: W0214 00:39:51.048877 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.048922 kubelet[4209]: E0214 00:39:51.048897 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.049066 kubelet[4209]: E0214 00:39:51.049058 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.049085 kubelet[4209]: W0214 00:39:51.049066 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.049108 kubelet[4209]: E0214 00:39:51.049082 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:51.049300 kubelet[4209]: E0214 00:39:51.049292 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.049319 kubelet[4209]: W0214 00:39:51.049299 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.049319 kubelet[4209]: E0214 00:39:51.049314 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.049488 kubelet[4209]: E0214 00:39:51.049480 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.049510 kubelet[4209]: W0214 00:39:51.049488 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.049510 kubelet[4209]: E0214 00:39:51.049502 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.049672 kubelet[4209]: E0214 00:39:51.049665 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.049692 kubelet[4209]: W0214 00:39:51.049672 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.049692 kubelet[4209]: E0214 00:39:51.049684 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.049842 kubelet[4209]: E0214 00:39:51.049824 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.049842 kubelet[4209]: W0214 00:39:51.049832 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.049842 kubelet[4209]: E0214 00:39:51.049842 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.050070 kubelet[4209]: E0214 00:39:51.050061 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050097 kubelet[4209]: W0214 00:39:51.050070 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050097 kubelet[4209]: E0214 00:39:51.050082 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:51.050286 kubelet[4209]: E0214 00:39:51.050277 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050317 kubelet[4209]: W0214 00:39:51.050286 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050317 kubelet[4209]: E0214 00:39:51.050297 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.050478 kubelet[4209]: E0214 00:39:51.050466 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050478 kubelet[4209]: W0214 00:39:51.050473 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050525 kubelet[4209]: E0214 00:39:51.050490 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.050615 kubelet[4209]: E0214 00:39:51.050607 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050648 kubelet[4209]: W0214 00:39:51.050615 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050648 kubelet[4209]: E0214 00:39:51.050631 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.050802 kubelet[4209]: E0214 00:39:51.050794 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050802 kubelet[4209]: W0214 00:39:51.050801 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050847 kubelet[4209]: E0214 00:39:51.050819 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.050939 kubelet[4209]: E0214 00:39:51.050931 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.050939 kubelet[4209]: W0214 00:39:51.050938 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.050997 kubelet[4209]: E0214 00:39:51.050960 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:51.051094 kubelet[4209]: E0214 00:39:51.051085 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.051094 kubelet[4209]: W0214 00:39:51.051092 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.051138 kubelet[4209]: E0214 00:39:51.051102 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.051268 kubelet[4209]: E0214 00:39:51.051261 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.051268 kubelet[4209]: W0214 00:39:51.051268 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.051322 kubelet[4209]: E0214 00:39:51.051278 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.051506 kubelet[4209]: E0214 00:39:51.051499 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.051533 kubelet[4209]: W0214 00:39:51.051506 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.051533 kubelet[4209]: E0214 00:39:51.051516 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.051753 kubelet[4209]: E0214 00:39:51.051745 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.051753 kubelet[4209]: W0214 00:39:51.051752 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.051814 kubelet[4209]: E0214 00:39:51.051762 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.051915 kubelet[4209]: E0214 00:39:51.051904 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.051915 kubelet[4209]: W0214 00:39:51.051912 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.051966 kubelet[4209]: E0214 00:39:51.051919 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 14 00:39:51.052169 kubelet[4209]: E0214 00:39:51.052159 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.052200 kubelet[4209]: W0214 00:39:51.052169 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.052200 kubelet[4209]: E0214 00:39:51.052177 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.059155 kubelet[4209]: E0214 00:39:51.059106 4209 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 14 00:39:51.059155 kubelet[4209]: W0214 00:39:51.059117 4209 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 14 00:39:51.059155 kubelet[4209]: E0214 00:39:51.059128 4209 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 14 00:39:51.105554 containerd[2680]: time="2025-02-14T00:39:51.105514950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766dfcf4d-7rslc,Uid:b1acaad0-cf3c-4c32-887a-98a0e2b2deee,Namespace:calico-system,Attempt:0,}" Feb 14 00:39:51.109138 containerd[2680]: time="2025-02-14T00:39:51.109116189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6w99,Uid:a71aae71-241a-49b2-9ead-4e7eaed5d123,Namespace:calico-system,Attempt:0,}" Feb 14 00:39:51.118376 containerd[2680]: time="2025-02-14T00:39:51.118318743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:51.118405 containerd[2680]: time="2025-02-14T00:39:51.118375651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:51.118405 containerd[2680]: time="2025-02-14T00:39:51.118388128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:51.118478 containerd[2680]: time="2025-02-14T00:39:51.118464071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:51.120077 containerd[2680]: time="2025-02-14T00:39:51.119704635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:39:51.120104 containerd[2680]: time="2025-02-14T00:39:51.120082071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:39:51.120104 containerd[2680]: time="2025-02-14T00:39:51.120095588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:51.120191 containerd[2680]: time="2025-02-14T00:39:51.120175731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:39:51.147854 systemd[1]: Started cri-containerd-44a2ec4c561d1ffc3d8345051b2b24e2fe1feb5a529874f64cfd38ec8d78d37d.scope - libcontainer container 44a2ec4c561d1ffc3d8345051b2b24e2fe1feb5a529874f64cfd38ec8d78d37d. Feb 14 00:39:51.150027 systemd[1]: Started cri-containerd-e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf.scope - libcontainer container e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf. Feb 14 00:39:51.165053 containerd[2680]: time="2025-02-14T00:39:51.165021041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6w99,Uid:a71aae71-241a-49b2-9ead-4e7eaed5d123,Namespace:calico-system,Attempt:0,} returns sandbox id \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\"" Feb 14 00:39:51.165988 containerd[2680]: time="2025-02-14T00:39:51.165967471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 14 00:39:51.170391 containerd[2680]: time="2025-02-14T00:39:51.170365493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-766dfcf4d-7rslc,Uid:b1acaad0-cf3c-4c32-887a-98a0e2b2deee,Namespace:calico-system,Attempt:0,} returns sandbox id \"44a2ec4c561d1ffc3d8345051b2b24e2fe1feb5a529874f64cfd38ec8d78d37d\"" Feb 14 00:39:51.772134 containerd[2680]: time="2025-02-14T00:39:51.772087965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:51.772263 containerd[2680]: time="2025-02-14T00:39:51.772166628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 14 00:39:51.772861 containerd[2680]: time="2025-02-14T00:39:51.772840198Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:51.786033 containerd[2680]: time="2025-02-14T00:39:51.785993074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:51.786705 containerd[2680]: time="2025-02-14T00:39:51.786685800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 620.685617ms" Feb 14 00:39:51.786748 containerd[2680]: time="2025-02-14T00:39:51.786711274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 14 00:39:51.787410 containerd[2680]: time="2025-02-14T00:39:51.787390163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 14 00:39:51.788223 containerd[2680]: time="2025-02-14T00:39:51.788198144Z" level=info msg="CreateContainer within sandbox \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 14 00:39:51.793594 containerd[2680]: time="2025-02-14T00:39:51.793563911Z" level=info 
msg="CreateContainer within sandbox \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a\"" Feb 14 00:39:51.793963 containerd[2680]: time="2025-02-14T00:39:51.793932709Z" level=info msg="StartContainer for \"e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a\"" Feb 14 00:39:51.818894 systemd[1]: Started cri-containerd-e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a.scope - libcontainer container e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a. Feb 14 00:39:51.837287 containerd[2680]: time="2025-02-14T00:39:51.837252958Z" level=info msg="StartContainer for \"e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a\" returns successfully" Feb 14 00:39:51.849960 systemd[1]: cri-containerd-e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a.scope: Deactivated successfully. Feb 14 00:39:51.942562 containerd[2680]: time="2025-02-14T00:39:51.942505880Z" level=info msg="shim disconnected" id=e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a namespace=k8s.io Feb 14 00:39:51.942562 containerd[2680]: time="2025-02-14T00:39:51.942559468Z" level=warning msg="cleaning up after shim disconnected" id=e88a7ab840ff3170fb27e78c6ce24a1a91b66a18f7e403590128ea813e270d9a namespace=k8s.io Feb 14 00:39:51.942562 containerd[2680]: time="2025-02-14T00:39:51.942567666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:39:52.664111 containerd[2680]: time="2025-02-14T00:39:52.664065129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:52.664410 containerd[2680]: time="2025-02-14T00:39:52.664349870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516" Feb 14 00:39:52.664900 containerd[2680]: time="2025-02-14T00:39:52.664847806Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:52.666682 containerd[2680]: time="2025-02-14T00:39:52.666636514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:52.667340 containerd[2680]: time="2025-02-14T00:39:52.667315132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 879.893336ms" Feb 14 00:39:52.667378 containerd[2680]: time="2025-02-14T00:39:52.667347845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 14 00:39:52.672644 containerd[2680]: time="2025-02-14T00:39:52.672618467Z" level=info msg="CreateContainer within sandbox \"44a2ec4c561d1ffc3d8345051b2b24e2fe1feb5a529874f64cfd38ec8d78d37d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 14 00:39:52.677472 containerd[2680]: time="2025-02-14T00:39:52.677442501Z" level=info 
msg="CreateContainer within sandbox \"44a2ec4c561d1ffc3d8345051b2b24e2fe1feb5a529874f64cfd38ec8d78d37d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fabf6d625240c1e8a8bd9724227b7ae8d6a6e024489c93e1d8ee5e5f6a09c5ac\"" Feb 14 00:39:52.677825 containerd[2680]: time="2025-02-14T00:39:52.677798147Z" level=info msg="StartContainer for \"fabf6d625240c1e8a8bd9724227b7ae8d6a6e024489c93e1d8ee5e5f6a09c5ac\"" Feb 14 00:39:52.704851 systemd[1]: Started cri-containerd-fabf6d625240c1e8a8bd9724227b7ae8d6a6e024489c93e1d8ee5e5f6a09c5ac.scope - libcontainer container fabf6d625240c1e8a8bd9724227b7ae8d6a6e024489c93e1d8ee5e5f6a09c5ac. Feb 14 00:39:52.728936 containerd[2680]: time="2025-02-14T00:39:52.728905896Z" level=info msg="StartContainer for \"fabf6d625240c1e8a8bd9724227b7ae8d6a6e024489c93e1d8ee5e5f6a09c5ac\" returns successfully" Feb 14 00:39:52.861618 kubelet[4209]: E0214 00:39:52.861572 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffbcc" podUID="6ccbd114-de29-425f-ab5d-f82920c737e3" Feb 14 00:39:52.889889 containerd[2680]: time="2025-02-14T00:39:52.889865270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 14 00:39:53.890847 kubelet[4209]: I0214 00:39:53.890818 4209 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:39:54.623534 containerd[2680]: time="2025-02-14T00:39:54.623488139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:54.623917 containerd[2680]: time="2025-02-14T00:39:54.623534051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 14 00:39:54.624198 containerd[2680]: time="2025-02-14T00:39:54.624178373Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:54.626057 containerd[2680]: time="2025-02-14T00:39:54.626034873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:54.626953 containerd[2680]: time="2025-02-14T00:39:54.626915671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.73700469s" Feb 14 00:39:54.627875 containerd[2680]: time="2025-02-14T00:39:54.626964223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 14 00:39:54.630290 containerd[2680]: time="2025-02-14T00:39:54.630267098Z" level=info msg="CreateContainer within sandbox \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 14 00:39:54.635817 containerd[2680]: time="2025-02-14T00:39:54.635786727Z" level=info msg="CreateContainer within sandbox 
\"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46\"" Feb 14 00:39:54.636168 containerd[2680]: time="2025-02-14T00:39:54.636144981Z" level=info msg="StartContainer for \"212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46\"" Feb 14 00:39:54.687899 systemd[1]: Started cri-containerd-212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46.scope - libcontainer container 212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46. Feb 14 00:39:54.709241 containerd[2680]: time="2025-02-14T00:39:54.709209038Z" level=info msg="StartContainer for \"212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46\" returns successfully" Feb 14 00:39:54.862213 kubelet[4209]: E0214 00:39:54.862172 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ffbcc" podUID="6ccbd114-de29-425f-ab5d-f82920c737e3" Feb 14 00:39:54.906671 kubelet[4209]: I0214 00:39:54.906566 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-766dfcf4d-7rslc" podStartSLOduration=3.409627957 podStartE2EDuration="4.906551611s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="2025-02-14 00:39:51.17100635 +0000 UTC m=+15.382425686" lastFinishedPulling="2025-02-14 00:39:52.667930004 +0000 UTC m=+16.879349340" observedRunningTime="2025-02-14 00:39:52.912778375 +0000 UTC m=+17.124197711" watchObservedRunningTime="2025-02-14 00:39:54.906551611 +0000 UTC m=+19.117970947" Feb 14 00:39:55.044474 containerd[2680]: time="2025-02-14T00:39:55.044435495Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 14 00:39:55.046208 systemd[1]: cri-containerd-212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46.scope: Deactivated successfully. Feb 14 00:39:55.046439 kubelet[4209]: I0214 00:39:55.046420 4209 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 14 00:39:55.064896 systemd[1]: Created slice kubepods-burstable-pod9c310922_b8a9_45e4_9148_d0a62cae98ca.slice - libcontainer container kubepods-burstable-pod9c310922_b8a9_45e4_9148_d0a62cae98ca.slice. Feb 14 00:39:55.068316 systemd[1]: Created slice kubepods-burstable-pod66b493ca_acf1_42dd_852d_4dc920f52794.slice - libcontainer container kubepods-burstable-pod66b493ca_acf1_42dd_852d_4dc920f52794.slice. Feb 14 00:39:55.071940 systemd[1]: Created slice kubepods-besteffort-pod0bcb1395_23a9_4344_8326_eb06cdc2ac2f.slice - libcontainer container kubepods-besteffort-pod0bcb1395_23a9_4344_8326_eb06cdc2ac2f.slice. Feb 14 00:39:55.075587 systemd[1]: Created slice kubepods-besteffort-podf944ebf0_c99a_48f8_9584_55824d96b5a2.slice - libcontainer container kubepods-besteffort-podf944ebf0_c99a_48f8_9584_55824d96b5a2.slice. Feb 14 00:39:55.079151 systemd[1]: Created slice kubepods-besteffort-pod96a418b4_d05a_4be9_9e9f_0435c9707a64.slice - libcontainer container kubepods-besteffort-pod96a418b4_d05a_4be9_9e9f_0435c9707a64.slice. 
Feb 14 00:39:55.175905 kubelet[4209]: I0214 00:39:55.175809 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnlb\" (UniqueName: \"kubernetes.io/projected/f944ebf0-c99a-48f8-9584-55824d96b5a2-kube-api-access-dbnlb\") pod \"calico-apiserver-86f6495c65-ln5ss\" (UID: \"f944ebf0-c99a-48f8-9584-55824d96b5a2\") " pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" Feb 14 00:39:55.175905 kubelet[4209]: I0214 00:39:55.175857 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bcb1395-23a9-4344-8326-eb06cdc2ac2f-tigera-ca-bundle\") pod \"calico-kube-controllers-5c596b8684-qhvsf\" (UID: \"0bcb1395-23a9-4344-8326-eb06cdc2ac2f\") " pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" Feb 14 00:39:55.175905 kubelet[4209]: I0214 00:39:55.175874 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5m5\" (UniqueName: \"kubernetes.io/projected/96a418b4-d05a-4be9-9e9f-0435c9707a64-kube-api-access-4b5m5\") pod \"calico-apiserver-86f6495c65-qt2ld\" (UID: \"96a418b4-d05a-4be9-9e9f-0435c9707a64\") " pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" Feb 14 00:39:55.176156 kubelet[4209]: I0214 00:39:55.176011 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lgqv\" (UniqueName: \"kubernetes.io/projected/0bcb1395-23a9-4344-8326-eb06cdc2ac2f-kube-api-access-8lgqv\") pod \"calico-kube-controllers-5c596b8684-qhvsf\" (UID: \"0bcb1395-23a9-4344-8326-eb06cdc2ac2f\") " pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" Feb 14 00:39:55.176156 kubelet[4209]: I0214 00:39:55.176068 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96a418b4-d05a-4be9-9e9f-0435c9707a64-calico-apiserver-certs\") pod \"calico-apiserver-86f6495c65-qt2ld\" (UID: \"96a418b4-d05a-4be9-9e9f-0435c9707a64\") " pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" Feb 14 00:39:55.176156 kubelet[4209]: I0214 00:39:55.176099 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmbp\" (UniqueName: \"kubernetes.io/projected/66b493ca-acf1-42dd-852d-4dc920f52794-kube-api-access-zvmbp\") pod \"coredns-668d6bf9bc-zll2f\" (UID: \"66b493ca-acf1-42dd-852d-4dc920f52794\") " pod="kube-system/coredns-668d6bf9bc-zll2f" Feb 14 00:39:55.176156 kubelet[4209]: I0214 00:39:55.176132 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b493ca-acf1-42dd-852d-4dc920f52794-config-volume\") pod \"coredns-668d6bf9bc-zll2f\" (UID: \"66b493ca-acf1-42dd-852d-4dc920f52794\") " pod="kube-system/coredns-668d6bf9bc-zll2f" Feb 14 00:39:55.176286 kubelet[4209]: I0214 00:39:55.176164 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f944ebf0-c99a-48f8-9584-55824d96b5a2-calico-apiserver-certs\") pod \"calico-apiserver-86f6495c65-ln5ss\" (UID: \"f944ebf0-c99a-48f8-9584-55824d96b5a2\") " pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" Feb 14 00:39:55.176286 kubelet[4209]: I0214 00:39:55.176192 4209 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lql2\" (UniqueName: \"kubernetes.io/projected/9c310922-b8a9-45e4-9148-d0a62cae98ca-kube-api-access-2lql2\") pod \"coredns-668d6bf9bc-lnphz\" (UID: \"9c310922-b8a9-45e4-9148-d0a62cae98ca\") " pod="kube-system/coredns-668d6bf9bc-lnphz" Feb 14 00:39:55.176286 kubelet[4209]: I0214 00:39:55.176219 4209 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c310922-b8a9-45e4-9148-d0a62cae98ca-config-volume\") pod \"coredns-668d6bf9bc-lnphz\" (UID: \"9c310922-b8a9-45e4-9148-d0a62cae98ca\") " pod="kube-system/coredns-668d6bf9bc-lnphz" Feb 14 00:39:55.276579 containerd[2680]: time="2025-02-14T00:39:55.276510963Z" level=info msg="shim disconnected" id=212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46 namespace=k8s.io Feb 14 00:39:55.276579 containerd[2680]: time="2025-02-14T00:39:55.276573393Z" level=warning msg="cleaning up after shim disconnected" id=212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46 namespace=k8s.io Feb 14 00:39:55.276762 containerd[2680]: time="2025-02-14T00:39:55.276587630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 14 00:39:55.367238 containerd[2680]: time="2025-02-14T00:39:55.367203790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lnphz,Uid:9c310922-b8a9-45e4-9148-d0a62cae98ca,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:55.370807 containerd[2680]: time="2025-02-14T00:39:55.370768017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zll2f,Uid:66b493ca-acf1-42dd-852d-4dc920f52794,Namespace:kube-system,Attempt:0,}" Feb 14 00:39:55.374207 containerd[2680]: time="2025-02-14T00:39:55.374181391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c596b8684-qhvsf,Uid:0bcb1395-23a9-4344-8326-eb06cdc2ac2f,Namespace:calico-system,Attempt:0,}" Feb 14 00:39:55.401919 containerd[2680]: time="2025-02-14T00:39:55.401854559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-qt2ld,Uid:96a418b4-d05a-4be9-9e9f-0435c9707a64,Namespace:calico-apiserver,Attempt:0,}" Feb 14 00:39:55.402012 containerd[2680]: time="2025-02-14T00:39:55.401968980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-ln5ss,Uid:f944ebf0-c99a-48f8-9584-55824d96b5a2,Namespace:calico-apiserver,Attempt:0,}" Feb 14 00:39:55.425555 containerd[2680]: time="2025-02-14T00:39:55.425506258Z" level=error msg="Failed to destroy network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.425865 containerd[2680]: time="2025-02-14T00:39:55.425844080Z" level=error msg="encountered an error cleaning up failed sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.425944 containerd[2680]: time="2025-02-14T00:39:55.425896191Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-lnphz,Uid:9c310922-b8a9-45e4-9148-d0a62cae98ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426059 containerd[2680]: time="2025-02-14T00:39:55.426027768Z" level=error msg="Failed to destroy network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426130 kubelet[4209]: E0214 00:39:55.426096 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426168 containerd[2680]: time="2025-02-14T00:39:55.426139909Z" level=error msg="Failed to destroy network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426192 kubelet[4209]: E0214 00:39:55.426164 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lnphz" Feb 14 00:39:55.426192 kubelet[4209]: E0214 00:39:55.426183 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lnphz" Feb 14 00:39:55.426285 kubelet[4209]: E0214 00:39:55.426223 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lnphz_kube-system(9c310922-b8a9-45e4-9148-d0a62cae98ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lnphz_kube-system(9c310922-b8a9-45e4-9148-d0a62cae98ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lnphz" podUID="9c310922-b8a9-45e4-9148-d0a62cae98ca" Feb 14 00:39:55.426361 containerd[2680]: time="2025-02-14T00:39:55.426336675Z" level=error msg="encountered an error 
cleaning up failed sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426399 containerd[2680]: time="2025-02-14T00:39:55.426381788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c596b8684-qhvsf,Uid:0bcb1395-23a9-4344-8326-eb06cdc2ac2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426592 kubelet[4209]: E0214 00:39:55.426503 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426592 kubelet[4209]: E0214 00:39:55.426547 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" Feb 14 00:39:55.426592 kubelet[4209]: E0214 00:39:55.426564 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" Feb 14 00:39:55.426686 containerd[2680]: time="2025-02-14T00:39:55.426511885Z" level=error msg="encountered an error cleaning up failed sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426686 containerd[2680]: time="2025-02-14T00:39:55.426561157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zll2f,Uid:66b493ca-acf1-42dd-852d-4dc920f52794,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426728 kubelet[4209]: E0214 00:39:55.426596 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-5c596b8684-qhvsf_calico-system(0bcb1395-23a9-4344-8326-eb06cdc2ac2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c596b8684-qhvsf_calico-system(0bcb1395-23a9-4344-8326-eb06cdc2ac2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" podUID="0bcb1395-23a9-4344-8326-eb06cdc2ac2f" Feb 14 00:39:55.426728 kubelet[4209]: E0214 00:39:55.426675 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.426728 kubelet[4209]: E0214 00:39:55.426698 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zll2f" Feb 14 00:39:55.426838 kubelet[4209]: E0214 00:39:55.426710 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zll2f" Feb 14 00:39:55.426838 kubelet[4209]: E0214 00:39:55.426738 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zll2f_kube-system(66b493ca-acf1-42dd-852d-4dc920f52794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zll2f_kube-system(66b493ca-acf1-42dd-852d-4dc920f52794)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zll2f" podUID="66b493ca-acf1-42dd-852d-4dc920f52794" Feb 14 00:39:55.447352 containerd[2680]: time="2025-02-14T00:39:55.447305754Z" level=error msg="Failed to destroy network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.447657 containerd[2680]: time="2025-02-14T00:39:55.447630499Z" level=error msg="encountered an error cleaning up failed sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.447696 containerd[2680]: time="2025-02-14T00:39:55.447678850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-ln5ss,Uid:f944ebf0-c99a-48f8-9584-55824d96b5a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.447889 kubelet[4209]: E0214 00:39:55.447859 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.447928 kubelet[4209]: E0214 00:39:55.447909 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" Feb 14 00:39:55.447959 kubelet[4209]: E0214 00:39:55.447928 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" Feb 14 00:39:55.447982 kubelet[4209]: E0214 00:39:55.447963 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f6495c65-ln5ss_calico-apiserver(f944ebf0-c99a-48f8-9584-55824d96b5a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f6495c65-ln5ss_calico-apiserver(f944ebf0-c99a-48f8-9584-55824d96b5a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" podUID="f944ebf0-c99a-48f8-9584-55824d96b5a2" Feb 14 00:39:55.448018 containerd[2680]: time="2025-02-14T00:39:55.447945205Z" level=error msg="Failed to destroy network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.448286 containerd[2680]: time="2025-02-14T00:39:55.448262270Z" level=error msg="encountered an 
error cleaning up failed sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.448324 containerd[2680]: time="2025-02-14T00:39:55.448301344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-qt2ld,Uid:96a418b4-d05a-4be9-9e9f-0435c9707a64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.448424 kubelet[4209]: E0214 00:39:55.448401 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.448449 kubelet[4209]: E0214 00:39:55.448438 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" Feb 14 00:39:55.448472 kubelet[4209]: E0214 00:39:55.448455 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" Feb 14 00:39:55.448502 kubelet[4209]: E0214 00:39:55.448485 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f6495c65-qt2ld_calico-apiserver(96a418b4-d05a-4be9-9e9f-0435c9707a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f6495c65-qt2ld_calico-apiserver(96a418b4-d05a-4be9-9e9f-0435c9707a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" podUID="96a418b4-d05a-4be9-9e9f-0435c9707a64" Feb 14 00:39:55.643063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-212fb42e781467f4dc2a7959cd577bd3f8176e4678ab28c220ac29a0e0e16d46-rootfs.mount: Deactivated successfully. 
Feb 14 00:39:55.897012 kubelet[4209]: I0214 00:39:55.896965 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:39:55.897123 containerd[2680]: time="2025-02-14T00:39:55.897098596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 14 00:39:55.897407 containerd[2680]: time="2025-02-14T00:39:55.897387587Z" level=info msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" Feb 14 00:39:55.897560 containerd[2680]: time="2025-02-14T00:39:55.897545920Z" level=info msg="Ensure that sandbox 262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef in task-service has been cleanup successfully" Feb 14 00:39:55.897740 kubelet[4209]: I0214 00:39:55.897720 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:39:55.898083 containerd[2680]: time="2025-02-14T00:39:55.898066590Z" level=info msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" Feb 14 00:39:55.898546 containerd[2680]: time="2025-02-14T00:39:55.898212405Z" level=info msg="Ensure that sandbox 051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75 in task-service has been cleanup successfully" Feb 14 00:39:55.898614 kubelet[4209]: I0214 00:39:55.898499 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:39:55.898866 containerd[2680]: time="2025-02-14T00:39:55.898843097Z" level=info msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" Feb 14 00:39:55.898993 containerd[2680]: time="2025-02-14T00:39:55.898974994Z" level=info msg="Ensure that sandbox cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b in task-service has been cleanup successfully" Feb 14 00:39:55.899448 kubelet[4209]: I0214 00:39:55.899434 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:39:55.899899 containerd[2680]: time="2025-02-14T00:39:55.899878319Z" level=info msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" Feb 14 00:39:55.900018 containerd[2680]: time="2025-02-14T00:39:55.900005297Z" level=info msg="Ensure that sandbox 0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2 in task-service has been cleanup successfully" Feb 14 00:39:55.900264 kubelet[4209]: I0214 00:39:55.900248 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:39:55.900673 containerd[2680]: time="2025-02-14T00:39:55.900652746Z" level=info msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" Feb 14 00:39:55.900853 containerd[2680]: time="2025-02-14T00:39:55.900838434Z" level=info msg="Ensure that sandbox c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d in task-service has been cleanup successfully" Feb 14 00:39:55.919702 containerd[2680]: time="2025-02-14T00:39:55.919660242Z" level=error msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" failed" error="failed to destroy network for sandbox 
\"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.919781 containerd[2680]: time="2025-02-14T00:39:55.919662962Z" level=error msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" failed" error="failed to destroy network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.919911 kubelet[4209]: E0214 00:39:55.919878 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:39:55.920152 kubelet[4209]: E0214 00:39:55.919942 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b"} Feb 14 00:39:55.920152 kubelet[4209]: E0214 00:39:55.919878 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:39:55.920152 kubelet[4209]: E0214 00:39:55.919995 4209 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f944ebf0-c99a-48f8-9584-55824d96b5a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:55.920152 kubelet[4209]: E0214 00:39:55.920004 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef"} Feb 14 00:39:55.920152 kubelet[4209]: E0214 00:39:55.920018 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f944ebf0-c99a-48f8-9584-55824d96b5a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" podUID="f944ebf0-c99a-48f8-9584-55824d96b5a2" Feb 14 00:39:55.920307 kubelet[4209]: E0214 00:39:55.920035 4209 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96a418b4-d05a-4be9-9e9f-0435c9707a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:55.920307 kubelet[4209]: E0214 00:39:55.920055 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96a418b4-d05a-4be9-9e9f-0435c9707a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" podUID="96a418b4-d05a-4be9-9e9f-0435c9707a64" Feb 14 00:39:55.920374 containerd[2680]: time="2025-02-14T00:39:55.920165355Z" level=error msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" failed" error="failed to destroy network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.920400 kubelet[4209]: E0214 00:39:55.920321 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:39:55.920400 kubelet[4209]: E0214 00:39:55.920338 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75"} Feb 14 00:39:55.920400 kubelet[4209]: E0214 00:39:55.920355 4209 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66b493ca-acf1-42dd-852d-4dc920f52794\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:55.920400 kubelet[4209]: E0214 00:39:55.920370 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66b493ca-acf1-42dd-852d-4dc920f52794\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zll2f" 
podUID="66b493ca-acf1-42dd-852d-4dc920f52794" Feb 14 00:39:55.920906 containerd[2680]: time="2025-02-14T00:39:55.920878033Z" level=error msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" failed" error="failed to destroy network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.920995 kubelet[4209]: E0214 00:39:55.920981 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:39:55.921017 kubelet[4209]: E0214 00:39:55.920998 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2"} Feb 14 00:39:55.921045 kubelet[4209]: E0214 00:39:55.921019 4209 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bcb1395-23a9-4344-8326-eb06cdc2ac2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:55.921045 kubelet[4209]: E0214 00:39:55.921034 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bcb1395-23a9-4344-8326-eb06cdc2ac2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" podUID="0bcb1395-23a9-4344-8326-eb06cdc2ac2f" Feb 14 00:39:55.923083 containerd[2680]: time="2025-02-14T00:39:55.923054819Z" level=error msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" failed" error="failed to destroy network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:55.923170 kubelet[4209]: E0214 00:39:55.923152 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:39:55.923213 
kubelet[4209]: E0214 00:39:55.923177 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d"} Feb 14 00:39:55.923213 kubelet[4209]: E0214 00:39:55.923197 4209 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c310922-b8a9-45e4-9148-d0a62cae98ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:55.923317 kubelet[4209]: E0214 00:39:55.923212 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c310922-b8a9-45e4-9148-d0a62cae98ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lnphz" podUID="9c310922-b8a9-45e4-9148-d0a62cae98ca" Feb 14 00:39:56.866082 systemd[1]: Created slice kubepods-besteffort-pod6ccbd114_de29_425f_ab5d_f82920c737e3.slice - libcontainer container kubepods-besteffort-pod6ccbd114_de29_425f_ab5d_f82920c737e3.slice. Feb 14 00:39:56.867802 containerd[2680]: time="2025-02-14T00:39:56.867773220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffbcc,Uid:6ccbd114-de29-425f-ab5d-f82920c737e3,Namespace:calico-system,Attempt:0,}" Feb 14 00:39:56.911942 containerd[2680]: time="2025-02-14T00:39:56.911895597Z" level=error msg="Failed to destroy network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:56.912239 containerd[2680]: time="2025-02-14T00:39:56.912214745Z" level=error msg="encountered an error cleaning up failed sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:56.912284 containerd[2680]: time="2025-02-14T00:39:56.912267857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffbcc,Uid:6ccbd114-de29-425f-ab5d-f82920c737e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:56.912441 kubelet[4209]: E0214 00:39:56.912407 4209 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:56.912486 kubelet[4209]: E0214 00:39:56.912468 4209 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:56.912514 kubelet[4209]: E0214 00:39:56.912499 4209 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ffbcc" Feb 14 00:39:56.912593 kubelet[4209]: E0214 00:39:56.912564 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ffbcc_calico-system(6ccbd114-de29-425f-ab5d-f82920c737e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ffbcc_calico-system(6ccbd114-de29-425f-ab5d-f82920c737e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffbcc" podUID="6ccbd114-de29-425f-ab5d-f82920c737e3" Feb 14 00:39:56.913526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175-shm.mount: Deactivated successfully. Feb 14 00:39:57.407175 systemd[1]: Started sshd@9-147.28.162.217:22-218.92.0.218:40162.service - OpenSSH per-connection server daemon (218.92.0.218:40162). 
Feb 14 00:39:57.904207 kubelet[4209]: I0214 00:39:57.904178 4209 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:39:57.904648 containerd[2680]: time="2025-02-14T00:39:57.904619959Z" level=info msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" Feb 14 00:39:57.904819 containerd[2680]: time="2025-02-14T00:39:57.904803971Z" level=info msg="Ensure that sandbox cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175 in task-service has been cleanup successfully" Feb 14 00:39:57.927104 containerd[2680]: time="2025-02-14T00:39:57.927064012Z" level=error msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" failed" error="failed to destroy network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 14 00:39:57.927399 kubelet[4209]: E0214 00:39:57.927196 4209 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:39:57.927399 kubelet[4209]: E0214 00:39:57.927235 4209 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175"} Feb 14 00:39:57.927399 kubelet[4209]: E0214 00:39:57.927263 4209 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ccbd114-de29-425f-ab5d-f82920c737e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 14 00:39:57.927399 kubelet[4209]: E0214 00:39:57.927287 4209 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ccbd114-de29-425f-ab5d-f82920c737e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ffbcc" podUID="6ccbd114-de29-425f-ab5d-f82920c737e3" Feb 14 00:39:58.800442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353080721.mount: Deactivated successfully. 
Feb 14 00:39:58.830651 containerd[2680]: time="2025-02-14T00:39:58.830596158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:58.830738 containerd[2680]: time="2025-02-14T00:39:58.830660109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 14 00:39:58.831350 containerd[2680]: time="2025-02-14T00:39:58.831326574Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:58.832973 containerd[2680]: time="2025-02-14T00:39:58.832946265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:39:58.833562 containerd[2680]: time="2025-02-14T00:39:58.833541741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 2.93640615s" Feb 14 00:39:58.833587 containerd[2680]: time="2025-02-14T00:39:58.833566577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 14 00:39:58.838938 containerd[2680]: time="2025-02-14T00:39:58.838909741Z" level=info msg="CreateContainer within sandbox \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 14 00:39:58.847300 containerd[2680]: time="2025-02-14T00:39:58.847262959Z" level=info msg="CreateContainer within sandbox \"e01cfeab1d87be9cbe60e1b9227be00c173c7ec2788f1ac65736cabb5b20b0bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"73028f13831b535ddbf20e72e763658f113862baf47a6323371cae79178ae5c7\"" Feb 14 00:39:58.847660 containerd[2680]: time="2025-02-14T00:39:58.847634707Z" level=info msg="StartContainer for \"73028f13831b535ddbf20e72e763658f113862baf47a6323371cae79178ae5c7\"" Feb 14 00:39:58.883860 systemd[1]: Started cri-containerd-73028f13831b535ddbf20e72e763658f113862baf47a6323371cae79178ae5c7.scope - libcontainer container 73028f13831b535ddbf20e72e763658f113862baf47a6323371cae79178ae5c7. 
Feb 14 00:39:58.903689 containerd[2680]: time="2025-02-14T00:39:58.903654341Z" level=info msg="StartContainer for \"73028f13831b535ddbf20e72e763658f113862baf47a6323371cae79178ae5c7\" returns successfully" Feb 14 00:39:58.917547 kubelet[4209]: I0214 00:39:58.917499 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r6w99" podStartSLOduration=1.249088326 podStartE2EDuration="8.917483544s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="2025-02-14 00:39:51.165761676 +0000 UTC m=+15.377180972" lastFinishedPulling="2025-02-14 00:39:58.834156854 +0000 UTC m=+23.045576190" observedRunningTime="2025-02-14 00:39:58.917323527 +0000 UTC m=+23.128742863" watchObservedRunningTime="2025-02-14 00:39:58.917483544 +0000 UTC m=+23.128902880" Feb 14 00:39:58.941838 sshd[5814]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:39:59.012009 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 14 00:39:59.012062 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 14 00:40:01.094637 sshd[5776]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:01.506825 sshd[6084]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:03.403803 sshd[5776]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:03.815915 sshd[6162]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:05.652838 sshd[5776]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:05.858890 sshd[5776]: Received disconnect from 218.92.0.218 port 40162:11: [preauth] Feb 14 00:40:05.858890 sshd[5776]: Disconnected from authenticating user root 218.92.0.218 port 40162 [preauth] Feb 14 00:40:05.861014 systemd[1]: sshd@9-147.28.162.217:22-218.92.0.218:40162.service: Deactivated successfully. Feb 14 00:40:05.916198 kubelet[4209]: I0214 00:40:05.916135 4209 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:40:06.115226 systemd[1]: Started sshd@10-147.28.162.217:22-218.92.0.218:27492.service - OpenSSH per-connection server daemon (218.92.0.218:27492). Feb 14 00:40:06.862539 containerd[2680]: time="2025-02-14T00:40:06.862479840Z" level=info msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" Feb 14 00:40:06.863100 containerd[2680]: time="2025-02-14T00:40:06.862607509Z" level=info msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" iface="eth0" netns="/var/run/netns/cni-8fd8e74a-c80e-3700-0b56-b4a088d4ec04" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" iface="eth0" netns="/var/run/netns/cni-8fd8e74a-c80e-3700-0b56-b4a088d4ec04" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" iface="eth0" netns="/var/run/netns/cni-8fd8e74a-c80e-3700-0b56-b4a088d4ec04" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.936 [INFO][6423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.936 [INFO][6423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.936 [INFO][6423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.944 [WARNING][6423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.944 [INFO][6423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.945 [INFO][6423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:06.947960 containerd[2680]: 2025-02-14 00:40:06.946 [INFO][6375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:06.948278 containerd[2680]: time="2025-02-14T00:40:06.948092972Z" level=info msg="TearDown network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" successfully" Feb 14 00:40:06.948278 containerd[2680]: time="2025-02-14T00:40:06.948117130Z" level=info msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" returns successfully" Feb 14 00:40:06.948653 containerd[2680]: time="2025-02-14T00:40:06.948630847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-qt2ld,Uid:96a418b4-d05a-4be9-9e9f-0435c9707a64,Namespace:calico-apiserver,Attempt:1,}" Feb 14 00:40:06.949982 systemd[1]: run-netns-cni\x2d8fd8e74a\x2dc80e\x2d3700\x2d0b56\x2db4a088d4ec04.mount: Deactivated successfully. 
Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.906 [INFO][6374] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" iface="eth0" netns="/var/run/netns/cni-d6af357c-088f-79ab-7d38-474ca1f5e5b7" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" iface="eth0" netns="/var/run/netns/cni-d6af357c-088f-79ab-7d38-474ca1f5e5b7" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" iface="eth0" netns="/var/run/netns/cni-d6af357c-088f-79ab-7d38-474ca1f5e5b7" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6374] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.907 [INFO][6374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.936 [INFO][6422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.936 [INFO][6422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.945 [INFO][6422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.952 [WARNING][6422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.952 [INFO][6422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.955 [INFO][6422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:06.957941 containerd[2680]: 2025-02-14 00:40:06.956 [INFO][6374] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:06.958194 containerd[2680]: time="2025-02-14T00:40:06.958086728Z" level=info msg="TearDown network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" successfully" Feb 14 00:40:06.958194 containerd[2680]: time="2025-02-14T00:40:06.958109566Z" level=info msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" returns successfully" Feb 14 00:40:06.958557 containerd[2680]: time="2025-02-14T00:40:06.958533771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zll2f,Uid:66b493ca-acf1-42dd-852d-4dc920f52794,Namespace:kube-system,Attempt:1,}" Feb 14 00:40:06.959649 systemd[1]: run-netns-cni\x2dd6af357c\x2d088f\x2d79ab\x2d7d38\x2d474ca1f5e5b7.mount: Deactivated successfully. Feb 14 00:40:07.038804 systemd-networkd[2584]: calif32f237a442: Link UP Feb 14 00:40:07.039049 systemd-networkd[2584]: calif32f237a442: Gained carrier Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:06.968 [INFO][6457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:06.980 [INFO][6457] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0 calico-apiserver-86f6495c65- calico-apiserver 96a418b4-d05a-4be9-9e9f-0435c9707a64 713 0 2025-02-14 00:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f6495c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea calico-apiserver-86f6495c65-qt2ld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif32f237a442 [] []}} ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:06.980 [INFO][6457] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.002 [INFO][6514] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" HandleID="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.016 [INFO][6514] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" HandleID="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cdf20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-a04cd882ea", 
"pod":"calico-apiserver-86f6495c65-qt2ld", "timestamp":"2025-02-14 00:40:07.002823723 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.016 [INFO][6514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.016 [INFO][6514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.016 [INFO][6514] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.017 [INFO][6514] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.020 [INFO][6514] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.023 [INFO][6514] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.024 [INFO][6514] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.026 [INFO][6514] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.026 [INFO][6514] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.027 [INFO][6514] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.029 [INFO][6514] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6514] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.1/26] block=192.168.5.0/26 handle="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6514] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.1/26] handle="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:40:07.045604 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6514] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.1/26] IPv6=[] ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" HandleID="k8s-pod-network.e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.033 [INFO][6457] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a418b4-d05a-4be9-9e9f-0435c9707a64", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"calico-apiserver-86f6495c65-qt2ld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32f237a442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.034 [INFO][6457] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.1/32] ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.034 [INFO][6457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif32f237a442 ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.039 [INFO][6457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.039 [INFO][6457] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a418b4-d05a-4be9-9e9f-0435c9707a64", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc", Pod:"calico-apiserver-86f6495c65-qt2ld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32f237a442", MAC:"02:00:e9:11:53:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:07.046112 containerd[2680]: 2025-02-14 00:40:07.044 [INFO][6457] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-qt2ld" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:07.060390 containerd[2680]: time="2025-02-14T00:40:07.060329651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:07.060390 containerd[2680]: time="2025-02-14T00:40:07.060378087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:07.060443 containerd[2680]: time="2025-02-14T00:40:07.060388846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:07.060484 containerd[2680]: time="2025-02-14T00:40:07.060465000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:07.081914 systemd[1]: Started cri-containerd-e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc.scope - libcontainer container e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc. 
Feb 14 00:40:07.105233 containerd[2680]: time="2025-02-14T00:40:07.105179221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-qt2ld,Uid:96a418b4-d05a-4be9-9e9f-0435c9707a64,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc\"" Feb 14 00:40:07.107377 containerd[2680]: time="2025-02-14T00:40:07.107354889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 14 00:40:07.137830 systemd-networkd[2584]: cali79f4600a391: Link UP Feb 14 00:40:07.138009 systemd-networkd[2584]: cali79f4600a391: Gained carrier Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:06.978 [INFO][6475] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:06.988 [INFO][6475] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0 coredns-668d6bf9bc- kube-system 66b493ca-acf1-42dd-852d-4dc920f52794 712 0 2025-02-14 00:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea coredns-668d6bf9bc-zll2f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79f4600a391 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:06.988 [INFO][6475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.009 [INFO][6524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" HandleID="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.017 [INFO][6524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" HandleID="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032f560), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-a04cd882ea", "pod":"coredns-668d6bf9bc-zll2f", "timestamp":"2025-02-14 00:40:07.009346286 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.017 [INFO][6524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.032 [INFO][6524] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.119 [INFO][6524] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.122 [INFO][6524] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.125 [INFO][6524] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.126 [INFO][6524] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.128 [INFO][6524] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.128 [INFO][6524] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.129 [INFO][6524] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.131 [INFO][6524] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.134 [INFO][6524] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.2/26] block=192.168.5.0/26 handle="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.134 [INFO][6524] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.2/26] handle="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.135 [INFO][6524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
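The IPAM sequence logged here (acquire the host-wide lock, confirm this host's affinity for the 192.168.5.0/26 block, claim the next free address from that block, release the lock) is why consecutive pods on the node come out as 192.168.5.1, .2, .3, .4. A rough stdlib-only illustration of that sequential hand-out, not Calico's actual allocator:

// Sketch of sequential assignment from an affine /26 block, assuming a simple
// "next free address" policy; Calico's real allocator also handles handles,
// block reservations and concurrent hosts.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.5.0/26")
	allocated := map[netip.Addr]bool{}

	next := func() netip.Addr {
		// Skip the network address, then take the first unclaimed address in the block.
		for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
			if !allocated[a] {
				allocated[a] = true
				return a
			}
		}
		return netip.Addr{} // block exhausted
	}

	for _, pod := range []string{"calico-apiserver-86f6495c65-qt2ld", "coredns-668d6bf9bc-zll2f"} {
		fmt.Printf("%s -> %s/32\n", pod, next()) // .1 then .2, as in the log
	}
}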
Feb 14 00:40:07.145295 containerd[2680]: 2025-02-14 00:40:07.135 [INFO][6524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.2/26] IPv6=[] ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" HandleID="k8s-pod-network.0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.136 [INFO][6475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"66b493ca-acf1-42dd-852d-4dc920f52794", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"coredns-668d6bf9bc-zll2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f4600a391", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.136 [INFO][6475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.2/32] ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.136 [INFO][6475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79f4600a391 ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.138 [INFO][6475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" 
WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.138 [INFO][6475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"66b493ca-acf1-42dd-852d-4dc920f52794", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb", Pod:"coredns-668d6bf9bc-zll2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f4600a391", MAC:"72:95:3c:30:d8:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:07.145720 containerd[2680]: 2025-02-14 00:40:07.143 [INFO][6475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb" Namespace="kube-system" Pod="coredns-668d6bf9bc-zll2f" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:07.159126 containerd[2680]: time="2025-02-14T00:40:07.159057717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:07.159126 containerd[2680]: time="2025-02-14T00:40:07.159108073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:07.159126 containerd[2680]: time="2025-02-14T00:40:07.159119752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:07.159239 containerd[2680]: time="2025-02-14T00:40:07.159200506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:07.181862 systemd[1]: Started cri-containerd-0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb.scope - libcontainer container 0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb. Feb 14 00:40:07.204389 containerd[2680]: time="2025-02-14T00:40:07.204328294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zll2f,Uid:66b493ca-acf1-42dd-852d-4dc920f52794,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb\"" Feb 14 00:40:07.206176 containerd[2680]: time="2025-02-14T00:40:07.206152630Z" level=info msg="CreateContainer within sandbox \"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:40:07.211322 containerd[2680]: time="2025-02-14T00:40:07.211296103Z" level=info msg="CreateContainer within sandbox \"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"978e0aa0dff59c7501045a113d80f43a2f8c1f849dadd5404a12e0dff9a901ca\"" Feb 14 00:40:07.211702 containerd[2680]: time="2025-02-14T00:40:07.211636516Z" level=info msg="StartContainer for \"978e0aa0dff59c7501045a113d80f43a2f8c1f849dadd5404a12e0dff9a901ca\"" Feb 14 00:40:07.243906 systemd[1]: Started cri-containerd-978e0aa0dff59c7501045a113d80f43a2f8c1f849dadd5404a12e0dff9a901ca.scope - libcontainer container 978e0aa0dff59c7501045a113d80f43a2f8c1f849dadd5404a12e0dff9a901ca. Feb 14 00:40:07.260442 containerd[2680]: time="2025-02-14T00:40:07.260409055Z" level=info msg="StartContainer for \"978e0aa0dff59c7501045a113d80f43a2f8c1f849dadd5404a12e0dff9a901ca\" returns successfully" Feb 14 00:40:07.800255 sshd[6750]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:07.861961 containerd[2680]: time="2025-02-14T00:40:07.861919168Z" level=info msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" iface="eth0" netns="/var/run/netns/cni-140c5b8f-4fb1-0b69-662a-dbd0e4050cdd" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" iface="eth0" netns="/var/run/netns/cni-140c5b8f-4fb1-0b69-662a-dbd0e4050cdd" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" iface="eth0" netns="/var/run/netns/cni-140c5b8f-4fb1-0b69-662a-dbd0e4050cdd" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.899 [INFO][6768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.917 [INFO][6790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.917 [INFO][6790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.917 [INFO][6790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.924 [WARNING][6790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.924 [INFO][6790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.925 [INFO][6790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:07.927446 containerd[2680]: 2025-02-14 00:40:07.926 [INFO][6768] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:07.928044 containerd[2680]: time="2025-02-14T00:40:07.927610328Z" level=info msg="TearDown network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" successfully" Feb 14 00:40:07.928044 containerd[2680]: time="2025-02-14T00:40:07.927645606Z" level=info msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" returns successfully" Feb 14 00:40:07.928141 containerd[2680]: time="2025-02-14T00:40:07.928118128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c596b8684-qhvsf,Uid:0bcb1395-23a9-4344-8326-eb06cdc2ac2f,Namespace:calico-system,Attempt:1,}" Feb 14 00:40:07.946738 kubelet[4209]: I0214 00:40:07.946674 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zll2f" podStartSLOduration=25.946660021 podStartE2EDuration="25.946660021s" podCreationTimestamp="2025-02-14 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:40:07.946366004 +0000 UTC m=+32.157785340" watchObservedRunningTime="2025-02-14 00:40:07.946660021 +0000 UTC m=+32.158079317" Feb 14 00:40:07.953319 systemd[1]: run-netns-cni\x2d140c5b8f\x2d4fb1\x2d0b69\x2d662a\x2ddbd0e4050cdd.mount: Deactivated successfully. Feb 14 00:40:08.017051 systemd-networkd[2584]: calie73715d0228: Link UP Feb 14 00:40:08.017702 systemd-networkd[2584]: calie73715d0228: Gained carrier Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.948 [INFO][6809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.959 [INFO][6809] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0 calico-kube-controllers-5c596b8684- calico-system 0bcb1395-23a9-4344-8326-eb06cdc2ac2f 728 0 2025-02-14 00:39:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c596b8684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea calico-kube-controllers-5c596b8684-qhvsf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie73715d0228 [] []}} ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.959 [INFO][6809] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.981 [INFO][6841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" HandleID="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" 
Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.990 [INFO][6841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" HandleID="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006b86f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-a04cd882ea", "pod":"calico-kube-controllers-5c596b8684-qhvsf", "timestamp":"2025-02-14 00:40:07.981636852 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.990 [INFO][6841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.990 [INFO][6841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.990 [INFO][6841] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.992 [INFO][6841] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.994 [INFO][6841] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.997 [INFO][6841] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:07.998 [INFO][6841] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.000 [INFO][6841] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.000 [INFO][6841] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.001 [INFO][6841] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4 Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.003 [INFO][6841] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.013 [INFO][6841] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.3/26] block=192.168.5.0/26 handle="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.013 [INFO][6841] ipam/ipam.go 847: Auto-assigned 1 out of 1 
IPv4s: [192.168.5.3/26] handle="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.013 [INFO][6841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:08.024834 containerd[2680]: 2025-02-14 00:40:08.013 [INFO][6841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.3/26] IPv6=[] ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" HandleID="k8s-pod-network.3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.015 [INFO][6809] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0", GenerateName:"calico-kube-controllers-5c596b8684-", Namespace:"calico-system", SelfLink:"", UID:"0bcb1395-23a9-4344-8326-eb06cdc2ac2f", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c596b8684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"calico-kube-controllers-5c596b8684-qhvsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie73715d0228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.015 [INFO][6809] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.3/32] ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.015 [INFO][6809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie73715d0228 ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.017 [INFO][6809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.017 [INFO][6809] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0", GenerateName:"calico-kube-controllers-5c596b8684-", Namespace:"calico-system", SelfLink:"", UID:"0bcb1395-23a9-4344-8326-eb06cdc2ac2f", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c596b8684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4", Pod:"calico-kube-controllers-5c596b8684-qhvsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie73715d0228", MAC:"da:05:f3:2a:ed:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:08.025305 containerd[2680]: 2025-02-14 00:40:08.023 [INFO][6809] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4" Namespace="calico-system" Pod="calico-kube-controllers-5c596b8684-qhvsf" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:08.038745 containerd[2680]: time="2025-02-14T00:40:08.038675206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:08.038745 containerd[2680]: time="2025-02-14T00:40:08.038728562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:08.038745 containerd[2680]: time="2025-02-14T00:40:08.038746121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:08.038889 containerd[2680]: time="2025-02-14T00:40:08.038823435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:08.072936 systemd[1]: Started cri-containerd-3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4.scope - libcontainer container 3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4. Feb 14 00:40:08.095860 containerd[2680]: time="2025-02-14T00:40:08.095825325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c596b8684-qhvsf,Uid:0bcb1395-23a9-4344-8326-eb06cdc2ac2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4\"" Feb 14 00:40:08.167910 containerd[2680]: time="2025-02-14T00:40:08.167874539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:08.167969 containerd[2680]: time="2025-02-14T00:40:08.167935575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 14 00:40:08.168612 containerd[2680]: time="2025-02-14T00:40:08.168589766Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:08.170400 containerd[2680]: time="2025-02-14T00:40:08.170374874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:08.171106 containerd[2680]: time="2025-02-14T00:40:08.171088101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.063703773s" Feb 14 00:40:08.171129 containerd[2680]: time="2025-02-14T00:40:08.171110419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 14 00:40:08.171959 containerd[2680]: time="2025-02-14T00:40:08.171936598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 14 00:40:08.172758 containerd[2680]: time="2025-02-14T00:40:08.172736418Z" level=info msg="CreateContainer within sandbox \"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 00:40:08.177235 containerd[2680]: time="2025-02-14T00:40:08.177207567Z" level=info msg="CreateContainer within sandbox \"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"329c399a83f9ac4074171fd83c3b6a18d2a3ecb53099f3f3f25c78bdb860f479\"" Feb 14 00:40:08.177554 containerd[2680]: time="2025-02-14T00:40:08.177533622Z" level=info msg="StartContainer for \"329c399a83f9ac4074171fd83c3b6a18d2a3ecb53099f3f3f25c78bdb860f479\"" Feb 14 00:40:08.204905 systemd[1]: Started cri-containerd-329c399a83f9ac4074171fd83c3b6a18d2a3ecb53099f3f3f25c78bdb860f479.scope - libcontainer container 329c399a83f9ac4074171fd83c3b6a18d2a3ecb53099f3f3f25c78bdb860f479. 
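The "1.063703773s" containerd reports for the apiserver image pull is simply the gap between the PullImage request logged at 00:40:07.107 and the Pulled event at 00:40:08.171; a quick check of that arithmetic with the two timestamps, which lands within a few tens of microseconds of the reported figure (the remainder is time spent outside the measured pull):

// Worked check of the pull duration using the log timestamps above.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-02-14T00:40:07.107354889Z")  // PullImage logged
	finished, _ := time.Parse(time.RFC3339Nano, "2025-02-14T00:40:08.171088101Z") // Pulled image logged
	fmt.Println(finished.Sub(started)) // ~1.063733212s, close to containerd's 1.063703773s
}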
Feb 14 00:40:08.228664 containerd[2680]: time="2025-02-14T00:40:08.228635511Z" level=info msg="StartContainer for \"329c399a83f9ac4074171fd83c3b6a18d2a3ecb53099f3f3f25c78bdb860f479\" returns successfully" Feb 14 00:40:08.372839 systemd-networkd[2584]: cali79f4600a391: Gained IPv6LL Feb 14 00:40:08.692835 systemd-networkd[2584]: calif32f237a442: Gained IPv6LL Feb 14 00:40:08.934286 kubelet[4209]: I0214 00:40:08.934237 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f6495c65-qt2ld" podStartSLOduration=17.869524457 podStartE2EDuration="18.934219196s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="2025-02-14 00:40:07.107138586 +0000 UTC m=+31.318557922" lastFinishedPulling="2025-02-14 00:40:08.171833365 +0000 UTC m=+32.383252661" observedRunningTime="2025-02-14 00:40:08.934008132 +0000 UTC m=+33.145427468" watchObservedRunningTime="2025-02-14 00:40:08.934219196 +0000 UTC m=+33.145638532" Feb 14 00:40:09.100016 containerd[2680]: time="2025-02-14T00:40:09.099932441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:09.100016 containerd[2680]: time="2025-02-14T00:40:09.099998556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 14 00:40:09.100707 containerd[2680]: time="2025-02-14T00:40:09.100685428Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:09.102414 containerd[2680]: time="2025-02-14T00:40:09.102392270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:09.103077 containerd[2680]: time="2025-02-14T00:40:09.103052424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 931.086148ms" Feb 14 00:40:09.103100 containerd[2680]: time="2025-02-14T00:40:09.103085181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 14 00:40:09.108248 containerd[2680]: time="2025-02-14T00:40:09.108224264Z" level=info msg="CreateContainer within sandbox \"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 14 00:40:09.112919 containerd[2680]: time="2025-02-14T00:40:09.112889779Z" level=info msg="CreateContainer within sandbox \"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"35e0da51691d9de5b7220a600889034700b79d839d6054e4010b236a5982d008\"" Feb 14 00:40:09.113231 containerd[2680]: time="2025-02-14T00:40:09.113206877Z" level=info msg="StartContainer for \"35e0da51691d9de5b7220a600889034700b79d839d6054e4010b236a5982d008\"" Feb 14 00:40:09.143918 systemd[1]: Started 
cri-containerd-35e0da51691d9de5b7220a600889034700b79d839d6054e4010b236a5982d008.scope - libcontainer container 35e0da51691d9de5b7220a600889034700b79d839d6054e4010b236a5982d008. Feb 14 00:40:09.167925 containerd[2680]: time="2025-02-14T00:40:09.167888833Z" level=info msg="StartContainer for \"35e0da51691d9de5b7220a600889034700b79d839d6054e4010b236a5982d008\" returns successfully" Feb 14 00:40:09.185663 sshd[6300]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:09.640590 sshd[7104]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:09.929253 kubelet[4209]: I0214 00:40:09.929162 4209 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:40:09.938431 kubelet[4209]: I0214 00:40:09.938385 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c596b8684-qhvsf" podStartSLOduration=18.93149199 podStartE2EDuration="19.938369637s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="2025-02-14 00:40:08.096727738 +0000 UTC m=+32.308147074" lastFinishedPulling="2025-02-14 00:40:09.103605385 +0000 UTC m=+33.315024721" observedRunningTime="2025-02-14 00:40:09.938084977 +0000 UTC m=+34.149504313" watchObservedRunningTime="2025-02-14 00:40:09.938369637 +0000 UTC m=+34.149788973" Feb 14 00:40:10.036845 systemd-networkd[2584]: calie73715d0228: Gained IPv6LL Feb 14 00:40:10.862243 containerd[2680]: time="2025-02-14T00:40:10.862204835Z" level=info msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" Feb 14 00:40:10.862521 containerd[2680]: time="2025-02-14T00:40:10.862255951Z" level=info msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" iface="eth0" netns="/var/run/netns/cni-a6fc1edd-e72f-83ee-378f-a3f0202adf68" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" iface="eth0" netns="/var/run/netns/cni-a6fc1edd-e72f-83ee-378f-a3f0202adf68" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" iface="eth0" netns="/var/run/netns/cni-a6fc1edd-e72f-83ee-378f-a3f0202adf68" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.918 [INFO][7283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.918 [INFO][7283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.918 [INFO][7283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.925 [WARNING][7283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.925 [INFO][7283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.926 [INFO][7283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:10.928628 containerd[2680]: 2025-02-14 00:40:10.927 [INFO][7241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:10.928974 containerd[2680]: time="2025-02-14T00:40:10.928780533Z" level=info msg="TearDown network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" successfully" Feb 14 00:40:10.928974 containerd[2680]: time="2025-02-14T00:40:10.928810971Z" level=info msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" returns successfully" Feb 14 00:40:10.929319 containerd[2680]: time="2025-02-14T00:40:10.929295299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-ln5ss,Uid:f944ebf0-c99a-48f8-9584-55824d96b5a2,Namespace:calico-apiserver,Attempt:1,}" Feb 14 00:40:10.930855 systemd[1]: run-netns-cni\x2da6fc1edd\x2de72f\x2d83ee\x2d378f\x2da3f0202adf68.mount: Deactivated successfully. Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.900 [INFO][7240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7240] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" iface="eth0" netns="/var/run/netns/cni-eb5957cf-c50b-597d-ff10-d22baf762195" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7240] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" iface="eth0" netns="/var/run/netns/cni-eb5957cf-c50b-597d-ff10-d22baf762195" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7240] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" iface="eth0" netns="/var/run/netns/cni-eb5957cf-c50b-597d-ff10-d22baf762195" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.901 [INFO][7240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.918 [INFO][7282] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.918 [INFO][7282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.926 [INFO][7282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.936 [WARNING][7282] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.936 [INFO][7282] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.937 [INFO][7282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:10.940573 containerd[2680]: 2025-02-14 00:40:10.939 [INFO][7240] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:10.940848 containerd[2680]: time="2025-02-14T00:40:10.940757632Z" level=info msg="TearDown network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" successfully" Feb 14 00:40:10.940848 containerd[2680]: time="2025-02-14T00:40:10.940784790Z" level=info msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" returns successfully" Feb 14 00:40:10.941177 containerd[2680]: time="2025-02-14T00:40:10.941153246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lnphz,Uid:9c310922-b8a9-45e4-9148-d0a62cae98ca,Namespace:kube-system,Attempt:1,}" Feb 14 00:40:10.942558 systemd[1]: run-netns-cni\x2deb5957cf\x2dc50b\x2d597d\x2dff10\x2dd22baf762195.mount: Deactivated successfully. Feb 14 00:40:11.017146 systemd-networkd[2584]: calicf68fec70fe: Link UP Feb 14 00:40:11.017340 systemd-networkd[2584]: calicf68fec70fe: Gained carrier Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.949 [INFO][7322] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.960 [INFO][7322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0 calico-apiserver-86f6495c65- calico-apiserver f944ebf0-c99a-48f8-9584-55824d96b5a2 762 0 2025-02-14 00:39:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f6495c65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea calico-apiserver-86f6495c65-ln5ss eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf68fec70fe [] []}} ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.960 [INFO][7322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.986 [INFO][7377] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" HandleID="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.996 [INFO][7377] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" HandleID="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e7a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.1-a-a04cd882ea", 
"pod":"calico-apiserver-86f6495c65-ln5ss", "timestamp":"2025-02-14 00:40:10.986050398 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.996 [INFO][7377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.996 [INFO][7377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.996 [INFO][7377] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:10.997 [INFO][7377] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.000 [INFO][7377] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.004 [INFO][7377] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.005 [INFO][7377] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.007 [INFO][7377] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.007 [INFO][7377] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.008 [INFO][7377] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90 Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.010 [INFO][7377] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7377] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.4/26] block=192.168.5.0/26 handle="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7377] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.4/26] handle="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:40:11.024203 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7377] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.4/26] IPv6=[] ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" HandleID="k8s-pod-network.5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.015 [INFO][7322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f944ebf0-c99a-48f8-9584-55824d96b5a2", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"calico-apiserver-86f6495c65-ln5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf68fec70fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.016 [INFO][7322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.4/32] ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.016 [INFO][7322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf68fec70fe ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.017 [INFO][7322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.017 [INFO][7322] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f944ebf0-c99a-48f8-9584-55824d96b5a2", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90", Pod:"calico-apiserver-86f6495c65-ln5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf68fec70fe", MAC:"f2:70:34:be:50:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:11.024614 containerd[2680]: 2025-02-14 00:40:11.023 [INFO][7322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90" Namespace="calico-apiserver" Pod="calico-apiserver-86f6495c65-ln5ss" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:11.038423 containerd[2680]: time="2025-02-14T00:40:11.038361416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:11.038423 containerd[2680]: time="2025-02-14T00:40:11.038410973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:11.038482 containerd[2680]: time="2025-02-14T00:40:11.038422892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:11.038512 containerd[2680]: time="2025-02-14T00:40:11.038496368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:11.068848 systemd[1]: Started cri-containerd-5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90.scope - libcontainer container 5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90. 
Feb 14 00:40:11.091586 containerd[2680]: time="2025-02-14T00:40:11.091560084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f6495c65-ln5ss,Uid:f944ebf0-c99a-48f8-9584-55824d96b5a2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90\"" Feb 14 00:40:11.093390 containerd[2680]: time="2025-02-14T00:40:11.093364133Z" level=info msg="CreateContainer within sandbox \"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 14 00:40:11.097973 containerd[2680]: time="2025-02-14T00:40:11.097949573Z" level=info msg="CreateContainer within sandbox \"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1ae4e76259655e87d91c64b3c709c79f3b101cadc5bbcbcc1fb271de321307c5\"" Feb 14 00:40:11.098324 containerd[2680]: time="2025-02-14T00:40:11.098300911Z" level=info msg="StartContainer for \"1ae4e76259655e87d91c64b3c709c79f3b101cadc5bbcbcc1fb271de321307c5\"" Feb 14 00:40:11.117919 systemd-networkd[2584]: cali8d400c58498: Link UP Feb 14 00:40:11.118089 systemd-networkd[2584]: cali8d400c58498: Gained carrier Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:10.960 [INFO][7344] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:10.972 [INFO][7344] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0 coredns-668d6bf9bc- kube-system 9c310922-b8a9-45e4-9148-d0a62cae98ca 761 0 2025-02-14 00:39:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea coredns-668d6bf9bc-lnphz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d400c58498 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:10.972 [INFO][7344] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:10.993 [INFO][7396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" HandleID="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.003 [INFO][7396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" HandleID="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400039ee20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.1-a-a04cd882ea", "pod":"coredns-668d6bf9bc-lnphz", "timestamp":"2025-02-14 00:40:10.993662662 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.003 [INFO][7396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.014 [INFO][7396] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.098 [INFO][7396] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.101 [INFO][7396] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.104 [INFO][7396] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.106 [INFO][7396] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.107 [INFO][7396] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.107 [INFO][7396] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.108 [INFO][7396] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912 Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.111 [INFO][7396] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.115 [INFO][7396] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.5/26] block=192.168.5.0/26 handle="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.115 [INFO][7396] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.5/26] handle="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.115 [INFO][7396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:40:11.127925 containerd[2680]: 2025-02-14 00:40:11.115 [INFO][7396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.5/26] IPv6=[] ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" HandleID="k8s-pod-network.a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.127871 systemd[1]: Started cri-containerd-1ae4e76259655e87d91c64b3c709c79f3b101cadc5bbcbcc1fb271de321307c5.scope - libcontainer container 1ae4e76259655e87d91c64b3c709c79f3b101cadc5bbcbcc1fb271de321307c5. Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.116 [INFO][7344] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c310922-b8a9-45e4-9148-d0a62cae98ca", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"coredns-668d6bf9bc-lnphz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d400c58498", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.116 [INFO][7344] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.5/32] ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.116 [INFO][7344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d400c58498 ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.118 [INFO][7344] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.118 [INFO][7344] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c310922-b8a9-45e4-9148-d0a62cae98ca", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912", Pod:"coredns-668d6bf9bc-lnphz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d400c58498", MAC:"92:8c:8c:ed:e3:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:11.128358 containerd[2680]: 2025-02-14 00:40:11.124 [INFO][7344] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912" Namespace="kube-system" Pod="coredns-668d6bf9bc-lnphz" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:11.139040 containerd[2680]: time="2025-02-14T00:40:11.138943147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:11.139040 containerd[2680]: time="2025-02-14T00:40:11.138996223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:11.139040 containerd[2680]: time="2025-02-14T00:40:11.139006983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:11.139173 containerd[2680]: time="2025-02-14T00:40:11.139076818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:11.157871 systemd[1]: Started cri-containerd-a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912.scope - libcontainer container a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912. Feb 14 00:40:11.169021 containerd[2680]: time="2025-02-14T00:40:11.168988950Z" level=info msg="StartContainer for \"1ae4e76259655e87d91c64b3c709c79f3b101cadc5bbcbcc1fb271de321307c5\" returns successfully" Feb 14 00:40:11.181042 containerd[2680]: time="2025-02-14T00:40:11.181013295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lnphz,Uid:9c310922-b8a9-45e4-9148-d0a62cae98ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912\"" Feb 14 00:40:11.182721 containerd[2680]: time="2025-02-14T00:40:11.182701911Z" level=info msg="CreateContainer within sandbox \"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 14 00:40:11.187842 containerd[2680]: time="2025-02-14T00:40:11.187777001Z" level=info msg="CreateContainer within sandbox \"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d96af9d5dd370b64d78aa7cdb28ae4196ff1cda3d6c2096a38a5c9972e22f94\"" Feb 14 00:40:11.188446 containerd[2680]: time="2025-02-14T00:40:11.188101861Z" level=info msg="StartContainer for \"6d96af9d5dd370b64d78aa7cdb28ae4196ff1cda3d6c2096a38a5c9972e22f94\"" Feb 14 00:40:11.218919 systemd[1]: Started cri-containerd-6d96af9d5dd370b64d78aa7cdb28ae4196ff1cda3d6c2096a38a5c9972e22f94.scope - libcontainer container 6d96af9d5dd370b64d78aa7cdb28ae4196ff1cda3d6c2096a38a5c9972e22f94. 
Feb 14 00:40:11.235590 containerd[2680]: time="2025-02-14T00:40:11.235558320Z" level=info msg="StartContainer for \"6d96af9d5dd370b64d78aa7cdb28ae4196ff1cda3d6c2096a38a5c9972e22f94\" returns successfully" Feb 14 00:40:11.939589 kubelet[4209]: I0214 00:40:11.939535 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f6495c65-ln5ss" podStartSLOduration=21.939518921 podStartE2EDuration="21.939518921s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:40:11.93921058 +0000 UTC m=+36.150629916" watchObservedRunningTime="2025-02-14 00:40:11.939518921 +0000 UTC m=+36.150938257" Feb 14 00:40:11.946462 kubelet[4209]: I0214 00:40:11.946424 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lnphz" podStartSLOduration=29.94640926 podStartE2EDuration="29.94640926s" podCreationTimestamp="2025-02-14 00:39:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-14 00:40:11.945991005 +0000 UTC m=+36.157410341" watchObservedRunningTime="2025-02-14 00:40:11.94640926 +0000 UTC m=+36.157828596" Feb 14 00:40:11.969161 sshd[6300]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:12.021534 kubelet[4209]: I0214 00:40:12.021507 4209 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:40:12.423631 sshd[7714]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:12.497745 kernel: bpftool[7743]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 14 00:40:12.650533 systemd-networkd[2584]: vxlan.calico: Link UP Feb 14 00:40:12.650537 systemd-networkd[2584]: vxlan.calico: Gained carrier Feb 14 00:40:12.861973 containerd[2680]: time="2025-02-14T00:40:12.861872857Z" level=info msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" Feb 14 00:40:12.916932 systemd-networkd[2584]: calicf68fec70fe: Gained IPv6LL Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" iface="eth0" netns="/var/run/netns/cni-c4e74e46-3dd2-b663-8da4-f308d053b2a2" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" iface="eth0" netns="/var/run/netns/cni-c4e74e46-3dd2-b663-8da4-f308d053b2a2" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" iface="eth0" netns="/var/run/netns/cni-c4e74e46-3dd2-b663-8da4-f308d053b2a2" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.898 [INFO][8066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.915 [INFO][8085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.915 [INFO][8085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.915 [INFO][8085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.923 [WARNING][8085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.923 [INFO][8085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.924 [INFO][8085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:12.926310 containerd[2680]: 2025-02-14 00:40:12.925 [INFO][8066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:12.926690 containerd[2680]: time="2025-02-14T00:40:12.926538991Z" level=info msg="TearDown network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" successfully" Feb 14 00:40:12.926690 containerd[2680]: time="2025-02-14T00:40:12.926568789Z" level=info msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" returns successfully" Feb 14 00:40:12.927132 containerd[2680]: time="2025-02-14T00:40:12.927102638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffbcc,Uid:6ccbd114-de29-425f-ab5d-f82920c737e3,Namespace:calico-system,Attempt:1,}" Feb 14 00:40:12.928672 systemd[1]: run-netns-cni\x2dc4e74e46\x2d3dd2\x2db663\x2d8da4\x2df308d053b2a2.mount: Deactivated successfully. 
Feb 14 00:40:13.011507 systemd-networkd[2584]: cali549fbb199e1: Link UP Feb 14 00:40:13.011677 systemd-networkd[2584]: cali549fbb199e1: Gained carrier Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.957 [INFO][8105] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0 csi-node-driver- calico-system 6ccbd114-de29-425f-ab5d-f82920c737e3 798 0 2025-02-14 00:39:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.1-a-a04cd882ea csi-node-driver-ffbcc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali549fbb199e1 [] []}} ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.957 [INFO][8105] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.979 [INFO][8132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" HandleID="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.989 [INFO][8132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" HandleID="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b3ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.1-a-a04cd882ea", "pod":"csi-node-driver-ffbcc", "timestamp":"2025-02-14 00:40:12.979823177 +0000 UTC"}, Hostname:"ci-4081.3.1-a-a04cd882ea", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.989 [INFO][8132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.989 [INFO][8132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.989 [INFO][8132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.1-a-a04cd882ea' Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.991 [INFO][8132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.993 [INFO][8132] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.998 [INFO][8132] ipam/ipam.go 489: Trying affinity for 192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:12.999 [INFO][8132] ipam/ipam.go 155: Attempting to load block cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.001 [INFO][8132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.5.0/26 host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.001 [INFO][8132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.5.0/26 handle="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.002 [INFO][8132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557 Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.005 [INFO][8132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.5.0/26 handle="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.009 [INFO][8132] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.5.6/26] block=192.168.5.0/26 handle="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.009 [INFO][8132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.5.6/26] handle="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" host="ci-4081.3.1-a-a04cd882ea" Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.009 [INFO][8132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 14 00:40:13.019769 containerd[2680]: 2025-02-14 00:40:13.009 [INFO][8132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.5.6/26] IPv6=[] ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" HandleID="k8s-pod-network.5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.010 [INFO][8105] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ccbd114-de29-425f-ab5d-f82920c737e3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"", Pod:"csi-node-driver-ffbcc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549fbb199e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.010 [INFO][8105] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.5.6/32] ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.010 [INFO][8105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali549fbb199e1 ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.011 [INFO][8105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.011 [INFO][8105] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" 
Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ccbd114-de29-425f-ab5d-f82920c737e3", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557", Pod:"csi-node-driver-ffbcc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549fbb199e1", MAC:"46:ea:7a:3d:e8:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:13.020283 containerd[2680]: 2025-02-14 00:40:13.018 [INFO][8105] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557" Namespace="calico-system" Pod="csi-node-driver-ffbcc" WorkloadEndpoint="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:13.033272 containerd[2680]: time="2025-02-14T00:40:13.033166672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 14 00:40:13.033295 containerd[2680]: time="2025-02-14T00:40:13.033265426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 14 00:40:13.033295 containerd[2680]: time="2025-02-14T00:40:13.033277266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:13.033372 containerd[2680]: time="2025-02-14T00:40:13.033354382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 14 00:40:13.044801 systemd-networkd[2584]: cali8d400c58498: Gained IPv6LL Feb 14 00:40:13.058873 systemd[1]: Started cri-containerd-5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557.scope - libcontainer container 5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557. 
Feb 14 00:40:13.074569 containerd[2680]: time="2025-02-14T00:40:13.074540608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ffbcc,Uid:6ccbd114-de29-425f-ab5d-f82920c737e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557\"" Feb 14 00:40:13.075583 containerd[2680]: time="2025-02-14T00:40:13.075562874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 14 00:40:13.631047 containerd[2680]: time="2025-02-14T00:40:13.631011067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:13.631047 containerd[2680]: time="2025-02-14T00:40:13.631043865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 14 00:40:13.631779 containerd[2680]: time="2025-02-14T00:40:13.631758467Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:13.633500 containerd[2680]: time="2025-02-14T00:40:13.633478534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:13.634191 containerd[2680]: time="2025-02-14T00:40:13.634161778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 558.570625ms" Feb 14 00:40:13.634218 containerd[2680]: time="2025-02-14T00:40:13.634194736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 14 00:40:13.635822 containerd[2680]: time="2025-02-14T00:40:13.635795890Z" level=info msg="CreateContainer within sandbox \"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 14 00:40:13.642083 containerd[2680]: time="2025-02-14T00:40:13.642054954Z" level=info msg="CreateContainer within sandbox \"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"978b8b0fd51c5f0f287b9bcad3bef4b1ed537f79e5a584bcbb18ee957b940987\"" Feb 14 00:40:13.642428 containerd[2680]: time="2025-02-14T00:40:13.642401095Z" level=info msg="StartContainer for \"978b8b0fd51c5f0f287b9bcad3bef4b1ed537f79e5a584bcbb18ee957b940987\"" Feb 14 00:40:13.670857 systemd[1]: Started cri-containerd-978b8b0fd51c5f0f287b9bcad3bef4b1ed537f79e5a584bcbb18ee957b940987.scope - libcontainer container 978b8b0fd51c5f0f287b9bcad3bef4b1ed537f79e5a584bcbb18ee957b940987. 
Feb 14 00:40:13.696437 containerd[2680]: time="2025-02-14T00:40:13.696403993Z" level=info msg="StartContainer for \"978b8b0fd51c5f0f287b9bcad3bef4b1ed537f79e5a584bcbb18ee957b940987\" returns successfully" Feb 14 00:40:13.697258 containerd[2680]: time="2025-02-14T00:40:13.697237828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 14 00:40:13.829165 sshd[6300]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:14.056545 sshd[6300]: Received disconnect from 218.92.0.218 port 27492:11: [preauth] Feb 14 00:40:14.056545 sshd[6300]: Disconnected from authenticating user root 218.92.0.218 port 27492 [preauth] Feb 14 00:40:14.058505 systemd[1]: sshd@10-147.28.162.217:22-218.92.0.218:27492.service: Deactivated successfully. Feb 14 00:40:14.198222 containerd[2680]: time="2025-02-14T00:40:14.198177211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:14.198551 containerd[2680]: time="2025-02-14T00:40:14.198240167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 14 00:40:14.198974 containerd[2680]: time="2025-02-14T00:40:14.198953491Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:14.200798 containerd[2680]: time="2025-02-14T00:40:14.200779239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 14 00:40:14.201499 containerd[2680]: time="2025-02-14T00:40:14.201470205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 504.200618ms" Feb 14 00:40:14.201523 containerd[2680]: time="2025-02-14T00:40:14.201506803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 14 00:40:14.203128 containerd[2680]: time="2025-02-14T00:40:14.203103922Z" level=info msg="CreateContainer within sandbox \"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 14 00:40:14.208790 containerd[2680]: time="2025-02-14T00:40:14.208758557Z" level=info msg="CreateContainer within sandbox \"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fc440d82606a6a318da641eec6e31598b88fd0b53a12c4fd1502c98e075a5ed4\"" Feb 14 00:40:14.209133 containerd[2680]: time="2025-02-14T00:40:14.209106980Z" level=info msg="StartContainer for \"fc440d82606a6a318da641eec6e31598b88fd0b53a12c4fd1502c98e075a5ed4\"" Feb 14 00:40:14.241852 systemd[1]: Started cri-containerd-fc440d82606a6a318da641eec6e31598b88fd0b53a12c4fd1502c98e075a5ed4.scope - libcontainer container 
fc440d82606a6a318da641eec6e31598b88fd0b53a12c4fd1502c98e075a5ed4. Feb 14 00:40:14.261153 containerd[2680]: time="2025-02-14T00:40:14.261121280Z" level=info msg="StartContainer for \"fc440d82606a6a318da641eec6e31598b88fd0b53a12c4fd1502c98e075a5ed4\" returns successfully" Feb 14 00:40:14.301249 systemd[1]: Started sshd@11-147.28.162.217:22-218.92.0.218:27500.service - OpenSSH per-connection server daemon (218.92.0.218:27500). Feb 14 00:40:14.324841 systemd-networkd[2584]: vxlan.calico: Gained IPv6LL Feb 14 00:40:14.900864 systemd-networkd[2584]: cali549fbb199e1: Gained IPv6LL Feb 14 00:40:14.908236 kubelet[4209]: I0214 00:40:14.908212 4209 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 14 00:40:14.908477 kubelet[4209]: I0214 00:40:14.908242 4209 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 14 00:40:14.948373 kubelet[4209]: I0214 00:40:14.948324 4209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ffbcc" podStartSLOduration=23.821621771 podStartE2EDuration="24.948309022s" podCreationTimestamp="2025-02-14 00:39:50 +0000 UTC" firstStartedPulling="2025-02-14 00:40:13.075371884 +0000 UTC m=+37.286791220" lastFinishedPulling="2025-02-14 00:40:14.202059135 +0000 UTC m=+38.413478471" observedRunningTime="2025-02-14 00:40:14.947924961 +0000 UTC m=+39.159344297" watchObservedRunningTime="2025-02-14 00:40:14.948309022 +0000 UTC m=+39.159728358" Feb 14 00:40:15.992061 sshd[8312]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:17.809116 sshd[8305]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:18.266302 sshd[8324]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:19.827435 sshd[8305]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:20.284968 sshd[8327]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.218 user=root Feb 14 00:40:22.457816 sshd[8305]: PAM: Permission denied for root from 218.92.0.218 Feb 14 00:40:22.686586 sshd[8305]: Received disconnect from 218.92.0.218 port 27500:11: [preauth] Feb 14 00:40:22.686586 sshd[8305]: Disconnected from authenticating user root 218.92.0.218 port 27500 [preauth] Feb 14 00:40:22.688827 systemd[1]: sshd@11-147.28.162.217:22-218.92.0.218:27500.service: Deactivated successfully. Feb 14 00:40:35.854360 containerd[2680]: time="2025-02-14T00:40:35.854275474Z" level=info msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.885 [WARNING][8361] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c310922-b8a9-45e4-9148-d0a62cae98ca", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912", Pod:"coredns-668d6bf9bc-lnphz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d400c58498", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.885 [INFO][8361] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.885 [INFO][8361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" iface="eth0" netns="" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.885 [INFO][8361] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.885 [INFO][8361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.902 [INFO][8383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.902 [INFO][8383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.902 [INFO][8383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.910 [WARNING][8383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.910 [INFO][8383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.912 [INFO][8383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:35.914575 containerd[2680]: 2025-02-14 00:40:35.913 [INFO][8361] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.914927 containerd[2680]: time="2025-02-14T00:40:35.914629330Z" level=info msg="TearDown network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" successfully" Feb 14 00:40:35.914927 containerd[2680]: time="2025-02-14T00:40:35.914662729Z" level=info msg="StopPodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" returns successfully" Feb 14 00:40:35.915089 containerd[2680]: time="2025-02-14T00:40:35.915063044Z" level=info msg="RemovePodSandbox for \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" Feb 14 00:40:35.915115 containerd[2680]: time="2025-02-14T00:40:35.915101203Z" level=info msg="Forcibly stopping sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\"" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.946 [WARNING][8417] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c310922-b8a9-45e4-9148-d0a62cae98ca", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"a630dd220e7b837ec984115b1d1d7a6abc7250b29eba3dd944bc0ffa2eddf912", Pod:"coredns-668d6bf9bc-lnphz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d400c58498", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.946 [INFO][8417] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.946 [INFO][8417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" iface="eth0" netns="" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.946 [INFO][8417] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.946 [INFO][8417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.963 [INFO][8438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.963 [INFO][8438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.963 [INFO][8438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.970 [WARNING][8438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.971 [INFO][8438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" HandleID="k8s-pod-network.c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--lnphz-eth0" Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.972 [INFO][8438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:35.974290 containerd[2680]: 2025-02-14 00:40:35.973 [INFO][8417] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d" Feb 14 00:40:35.974683 containerd[2680]: time="2025-02-14T00:40:35.974324954Z" level=info msg="TearDown network for sandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" successfully" Feb 14 00:40:35.975763 containerd[2680]: time="2025-02-14T00:40:35.975728176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:35.975807 containerd[2680]: time="2025-02-14T00:40:35.975793255Z" level=info msg="RemovePodSandbox \"c3216f0906f4e75c01e5edb7a6e97663b7dc6688414acc24bb81189dc086b51d\" returns successfully" Feb 14 00:40:35.976182 containerd[2680]: time="2025-02-14T00:40:35.976160210Z" level=info msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.007 [WARNING][8478] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ccbd114-de29-425f-ab5d-f82920c737e3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557", Pod:"csi-node-driver-ffbcc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549fbb199e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.007 [INFO][8478] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.007 [INFO][8478] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" iface="eth0" netns="" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.007 [INFO][8478] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.007 [INFO][8478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.024 [INFO][8513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.024 [INFO][8513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.024 [INFO][8513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.032 [WARNING][8513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.032 [INFO][8513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.033 [INFO][8513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.035557 containerd[2680]: 2025-02-14 00:40:36.034 [INFO][8478] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.036010 containerd[2680]: time="2025-02-14T00:40:36.035601466Z" level=info msg="TearDown network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" successfully" Feb 14 00:40:36.036010 containerd[2680]: time="2025-02-14T00:40:36.035631626Z" level=info msg="StopPodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" returns successfully" Feb 14 00:40:36.036010 containerd[2680]: time="2025-02-14T00:40:36.035905983Z" level=info msg="RemovePodSandbox for \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" Feb 14 00:40:36.036010 containerd[2680]: time="2025-02-14T00:40:36.035933022Z" level=info msg="Forcibly stopping sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\"" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.067 [WARNING][8554] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ccbd114-de29-425f-ab5d-f82920c737e3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5465e53c151540839c4a3f3e36a0766b61c73a65076eaaea77691beff91f5557", Pod:"csi-node-driver-ffbcc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.5.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali549fbb199e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.067 [INFO][8554] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.067 [INFO][8554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" iface="eth0" netns="" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.067 [INFO][8554] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.067 [INFO][8554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.084 [INFO][8575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.084 [INFO][8575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.084 [INFO][8575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.092 [WARNING][8575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.092 [INFO][8575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" HandleID="k8s-pod-network.cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Workload="ci--4081.3.1--a--a04cd882ea-k8s-csi--node--driver--ffbcc-eth0" Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.093 [INFO][8575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.095547 containerd[2680]: 2025-02-14 00:40:36.094 [INFO][8554] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175" Feb 14 00:40:36.095818 containerd[2680]: time="2025-02-14T00:40:36.095583896Z" level=info msg="TearDown network for sandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" successfully" Feb 14 00:40:36.097270 containerd[2680]: time="2025-02-14T00:40:36.097241356Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:36.097309 containerd[2680]: time="2025-02-14T00:40:36.097296475Z" level=info msg="RemovePodSandbox \"cb2a65060561c7cac13ac98629a01fd3432fa425e182b5c2c018291d08f3a175\" returns successfully" Feb 14 00:40:36.097682 containerd[2680]: time="2025-02-14T00:40:36.097655111Z" level=info msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.131 [WARNING][8616] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a418b4-d05a-4be9-9e9f-0435c9707a64", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc", Pod:"calico-apiserver-86f6495c65-qt2ld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32f237a442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.131 [INFO][8616] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.131 [INFO][8616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" iface="eth0" netns="" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.131 [INFO][8616] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.131 [INFO][8616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.149 [INFO][8637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.149 [INFO][8637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.149 [INFO][8637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.156 [WARNING][8637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.156 [INFO][8637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.157 [INFO][8637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.160355 containerd[2680]: 2025-02-14 00:40:36.159 [INFO][8616] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.160623 containerd[2680]: time="2025-02-14T00:40:36.160393187Z" level=info msg="TearDown network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" successfully" Feb 14 00:40:36.160623 containerd[2680]: time="2025-02-14T00:40:36.160419026Z" level=info msg="StopPodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" returns successfully" Feb 14 00:40:36.160834 containerd[2680]: time="2025-02-14T00:40:36.160809462Z" level=info msg="RemovePodSandbox for \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" Feb 14 00:40:36.160863 containerd[2680]: time="2025-02-14T00:40:36.160841221Z" level=info msg="Forcibly stopping sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\"" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.192 [WARNING][8671] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a418b4-d05a-4be9-9e9f-0435c9707a64", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"e3a3d1df5bab0373d32c0ac7691a8aa39c0fc311e325f2ff83b7e45aa647affc", Pod:"calico-apiserver-86f6495c65-qt2ld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32f237a442", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.192 [INFO][8671] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.192 [INFO][8671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" iface="eth0" netns="" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.192 [INFO][8671] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.192 [INFO][8671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.209 [INFO][8688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.209 [INFO][8688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.209 [INFO][8688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.217 [WARNING][8688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.217 [INFO][8688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" HandleID="k8s-pod-network.262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--qt2ld-eth0" Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.218 [INFO][8688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.220627 containerd[2680]: 2025-02-14 00:40:36.219 [INFO][8671] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef" Feb 14 00:40:36.220894 containerd[2680]: time="2025-02-14T00:40:36.220671293Z" level=info msg="TearDown network for sandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" successfully" Feb 14 00:40:36.222084 containerd[2680]: time="2025-02-14T00:40:36.222055676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:36.222123 containerd[2680]: time="2025-02-14T00:40:36.222110155Z" level=info msg="RemovePodSandbox \"262cc4b122c3cda2290dc499ad41aabbc212c83a1dc3fc9b64a77c90e4391bef\" returns successfully" Feb 14 00:40:36.222454 containerd[2680]: time="2025-02-14T00:40:36.222431871Z" level=info msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.253 [WARNING][8723] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f944ebf0-c99a-48f8-9584-55824d96b5a2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90", Pod:"calico-apiserver-86f6495c65-ln5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf68fec70fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.253 [INFO][8723] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.253 [INFO][8723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" iface="eth0" netns="" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.254 [INFO][8723] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.254 [INFO][8723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.272 [INFO][8746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.272 [INFO][8746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.272 [INFO][8746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.279 [WARNING][8746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.279 [INFO][8746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.280 [INFO][8746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.283212 containerd[2680]: 2025-02-14 00:40:36.282 [INFO][8723] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.283625 containerd[2680]: time="2025-02-14T00:40:36.283244691Z" level=info msg="TearDown network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" successfully" Feb 14 00:40:36.283625 containerd[2680]: time="2025-02-14T00:40:36.283266090Z" level=info msg="StopPodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" returns successfully" Feb 14 00:40:36.283625 containerd[2680]: time="2025-02-14T00:40:36.283586726Z" level=info msg="RemovePodSandbox for \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" Feb 14 00:40:36.283625 containerd[2680]: time="2025-02-14T00:40:36.283618246Z" level=info msg="Forcibly stopping sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\"" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.315 [WARNING][8780] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0", GenerateName:"calico-apiserver-86f6495c65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f944ebf0-c99a-48f8-9584-55824d96b5a2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f6495c65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"5559ca28f2f250d77354ded6d52ba72a0b2bf772dee952bcabbe45204c3a6a90", Pod:"calico-apiserver-86f6495c65-ln5ss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.5.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf68fec70fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.315 [INFO][8780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.315 [INFO][8780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" iface="eth0" netns="" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.315 [INFO][8780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.315 [INFO][8780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.332 [INFO][8798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.332 [INFO][8798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.332 [INFO][8798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.343 [WARNING][8798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.343 [INFO][8798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" HandleID="k8s-pod-network.cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--apiserver--86f6495c65--ln5ss-eth0" Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.344 [INFO][8798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.346631 containerd[2680]: 2025-02-14 00:40:36.345 [INFO][8780] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b" Feb 14 00:40:36.347089 containerd[2680]: time="2025-02-14T00:40:36.346657798Z" level=info msg="TearDown network for sandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" successfully" Feb 14 00:40:36.348070 containerd[2680]: time="2025-02-14T00:40:36.348040542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:36.348110 containerd[2680]: time="2025-02-14T00:40:36.348097501Z" level=info msg="RemovePodSandbox \"cac2e596a2b11b1f850fd7ebf72ee6cd2fc1ccde000886d49ff11625a296323b\" returns successfully" Feb 14 00:40:36.348478 containerd[2680]: time="2025-02-14T00:40:36.348455137Z" level=info msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.381 [WARNING][8834] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"66b493ca-acf1-42dd-852d-4dc920f52794", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb", Pod:"coredns-668d6bf9bc-zll2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f4600a391", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.381 [INFO][8834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.381 [INFO][8834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" iface="eth0" netns="" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.381 [INFO][8834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.381 [INFO][8834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.398 [INFO][8857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.398 [INFO][8857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.398 [INFO][8857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.407 [WARNING][8857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.407 [INFO][8857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.409 [INFO][8857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.411519 containerd[2680]: 2025-02-14 00:40:36.410 [INFO][8834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.411519 containerd[2680]: time="2025-02-14T00:40:36.411508809Z" level=info msg="TearDown network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" successfully" Feb 14 00:40:36.411909 containerd[2680]: time="2025-02-14T00:40:36.411529968Z" level=info msg="StopPodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" returns successfully" Feb 14 00:40:36.411909 containerd[2680]: time="2025-02-14T00:40:36.411835645Z" level=info msg="RemovePodSandbox for \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" Feb 14 00:40:36.411909 containerd[2680]: time="2025-02-14T00:40:36.411863204Z" level=info msg="Forcibly stopping sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\"" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.445 [WARNING][8894] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"66b493ca-acf1-42dd-852d-4dc920f52794", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"0a2930943a3a29bf255eb1b0f6625b73c81b67fd8cb69dd75d1efc1a5d5d18fb", Pod:"coredns-668d6bf9bc-zll2f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.5.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79f4600a391", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.445 [INFO][8894] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.445 [INFO][8894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" iface="eth0" netns="" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.445 [INFO][8894] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.445 [INFO][8894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.462 [INFO][8918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.462 [INFO][8918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.462 [INFO][8918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.470 [WARNING][8918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.470 [INFO][8918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" HandleID="k8s-pod-network.051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Workload="ci--4081.3.1--a--a04cd882ea-k8s-coredns--668d6bf9bc--zll2f-eth0" Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.471 [INFO][8918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.473165 containerd[2680]: 2025-02-14 00:40:36.472 [INFO][8894] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75" Feb 14 00:40:36.473414 containerd[2680]: time="2025-02-14T00:40:36.473194177Z" level=info msg="TearDown network for sandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" successfully" Feb 14 00:40:36.481945 containerd[2680]: time="2025-02-14T00:40:36.481904191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:36.481997 containerd[2680]: time="2025-02-14T00:40:36.481970071Z" level=info msg="RemovePodSandbox \"051627f6f790bc59f9fff35835ecdddfac5e17c11b5cf8cd2d13271463beac75\" returns successfully" Feb 14 00:40:36.482309 containerd[2680]: time="2025-02-14T00:40:36.482286867Z" level=info msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.517 [WARNING][8953] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0", GenerateName:"calico-kube-controllers-5c596b8684-", Namespace:"calico-system", SelfLink:"", UID:"0bcb1395-23a9-4344-8326-eb06cdc2ac2f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c596b8684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4", Pod:"calico-kube-controllers-5c596b8684-qhvsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie73715d0228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.518 [INFO][8953] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.518 [INFO][8953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" iface="eth0" netns="" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.518 [INFO][8953] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.518 [INFO][8953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.535 [INFO][8971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.535 [INFO][8971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.535 [INFO][8971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.542 [WARNING][8971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.542 [INFO][8971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.543 [INFO][8971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.545901 containerd[2680]: 2025-02-14 00:40:36.544 [INFO][8953] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.546262 containerd[2680]: time="2025-02-14T00:40:36.545941812Z" level=info msg="TearDown network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" successfully" Feb 14 00:40:36.546262 containerd[2680]: time="2025-02-14T00:40:36.545965571Z" level=info msg="StopPodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" returns successfully" Feb 14 00:40:36.546303 containerd[2680]: time="2025-02-14T00:40:36.546277767Z" level=info msg="RemovePodSandbox for \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" Feb 14 00:40:36.546326 containerd[2680]: time="2025-02-14T00:40:36.546304767Z" level=info msg="Forcibly stopping sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\"" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.577 [WARNING][9006] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0", GenerateName:"calico-kube-controllers-5c596b8684-", Namespace:"calico-system", SelfLink:"", UID:"0bcb1395-23a9-4344-8326-eb06cdc2ac2f", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.February, 14, 0, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c596b8684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.1-a-a04cd882ea", ContainerID:"3db8a899e057d5981af0f9fd1a3e58a94c740cc428f9c54cad8d73fd3c56cdc4", Pod:"calico-kube-controllers-5c596b8684-qhvsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.5.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie73715d0228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.578 [INFO][9006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.578 [INFO][9006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" iface="eth0" netns="" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.578 [INFO][9006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.578 [INFO][9006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.595 [INFO][9027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.595 [INFO][9027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.595 [INFO][9027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.603 [WARNING][9027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.603 [INFO][9027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" HandleID="k8s-pod-network.0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Workload="ci--4081.3.1--a--a04cd882ea-k8s-calico--kube--controllers--5c596b8684--qhvsf-eth0" Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.604 [INFO][9027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 14 00:40:36.606205 containerd[2680]: 2025-02-14 00:40:36.605 [INFO][9006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2" Feb 14 00:40:36.606453 containerd[2680]: time="2025-02-14T00:40:36.606240477Z" level=info msg="TearDown network for sandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" successfully" Feb 14 00:40:36.607672 containerd[2680]: time="2025-02-14T00:40:36.607644940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 14 00:40:36.607715 containerd[2680]: time="2025-02-14T00:40:36.607702219Z" level=info msg="RemovePodSandbox \"0a7514b59783d6500e5d24c7e7242095928047b3c8285cfeafa28643a8ec02f2\" returns successfully" Feb 14 00:40:49.679146 kubelet[4209]: I0214 00:40:49.679105 4209 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 00:42:00.347146 systemd[1]: Started sshd@12-147.28.162.217:22-218.92.0.235:33154.service - OpenSSH per-connection server daemon (218.92.0.235:33154). Feb 14 00:42:02.039654 sshd[9266]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:04.077109 sshd[9264]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:04.533077 sshd[9267]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:06.982260 sshd[9264]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:07.438808 sshd[9294]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:08.964667 sshd[9264]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:09.192798 sshd[9264]: Received disconnect from 218.92.0.235 port 33154:11: [preauth] Feb 14 00:42:09.192798 sshd[9264]: Disconnected from authenticating user root 218.92.0.235 port 33154 [preauth] Feb 14 00:42:09.194826 systemd[1]: sshd@12-147.28.162.217:22-218.92.0.235:33154.service: Deactivated successfully. Feb 14 00:42:09.391095 systemd[1]: Started sshd@13-147.28.162.217:22-218.92.0.235:29602.service - OpenSSH per-connection server daemon (218.92.0.235:29602). 
Feb 14 00:42:10.933092 sshd[9342]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:13.206450 sshd[9321]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:13.620548 sshd[9358]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:15.637982 sshd[9321]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:16.053869 sshd[9364]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:18.011520 sshd[9321]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:18.218388 sshd[9321]: Received disconnect from 218.92.0.235 port 29602:11: [preauth] Feb 14 00:42:18.218388 sshd[9321]: Disconnected from authenticating user root 218.92.0.235 port 29602 [preauth] Feb 14 00:42:18.220669 systemd[1]: sshd@13-147.28.162.217:22-218.92.0.235:29602.service: Deactivated successfully. Feb 14 00:42:23.439198 systemd[1]: Started sshd@14-147.28.162.217:22-218.92.0.235:49690.service - OpenSSH per-connection server daemon (218.92.0.235:49690). Feb 14 00:42:27.977625 sshd[9374]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:30.250897 sshd[9372]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:30.668364 sshd[9375]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:31.879149 systemd[1]: Started sshd@15-147.28.162.217:22-218.92.0.215:18542.service - OpenSSH per-connection server daemon (218.92.0.215:18542). Feb 14 00:42:32.111183 sshd[9377]: Unable to negotiate with 218.92.0.215 port 18542: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 14 00:42:32.113162 systemd[1]: sshd@15-147.28.162.217:22-218.92.0.215:18542.service: Deactivated successfully. Feb 14 00:42:32.686079 sshd[9372]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:33.100497 sshd[9381]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.235 user=root Feb 14 00:42:35.393798 sshd[9372]: PAM: Permission denied for root from 218.92.0.235 Feb 14 00:42:35.602765 sshd[9372]: Received disconnect from 218.92.0.235 port 49690:11: [preauth] Feb 14 00:42:35.602765 sshd[9372]: Disconnected from authenticating user root 218.92.0.235 port 49690 [preauth] Feb 14 00:42:35.604140 systemd[1]: sshd@14-147.28.162.217:22-218.92.0.235:49690.service: Deactivated successfully. Feb 14 00:42:49.714718 update_engine[2673]: I20250214 00:42:49.714095 2673 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 14 00:42:49.714718 update_engine[2673]: I20250214 00:42:49.714155 2673 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 14 00:42:49.714718 update_engine[2673]: I20250214 00:42:49.714378 2673 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 14 00:42:49.714718 update_engine[2673]: I20250214 00:42:49.714687 2673 omaha_request_params.cc:62] Current group set to lts Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714787 2673 update_attempter.cc:499] Already updated boot flags. Skipping. 
Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714797 2673 update_attempter.cc:643] Scheduling an action processor start. Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714812 2673 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714839 2673 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714886 2673 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714895 2673 omaha_request_action.cc:272] Request: Feb 14 00:42:49.715122 update_engine[2673]: I20250214 00:42:49.714900 2673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 14 00:42:49.715397 locksmithd[2700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 14 00:42:49.715833 update_engine[2673]: I20250214 00:42:49.715816 2673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 14 00:42:49.716082 update_engine[2673]: I20250214 00:42:49.716063 2673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 14 00:42:49.716793 update_engine[2673]: E20250214 00:42:49.716775 2673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 14 00:42:49.716836 update_engine[2673]: I20250214 00:42:49.716825 2673 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 14 00:42:59.623789 update_engine[2673]: I20250214 00:42:59.623701 2673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 14 00:42:59.624210 update_engine[2673]: I20250214 00:42:59.623942 2673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 14 00:42:59.624210 update_engine[2673]: I20250214 00:42:59.624119 2673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 14 00:42:59.624904 update_engine[2673]: E20250214 00:42:59.624886 2673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 14 00:42:59.624938 update_engine[2673]: I20250214 00:42:59.624924 2673 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 14 00:43:09.624897 update_engine[2673]: I20250214 00:43:09.624814 2673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 14 00:43:09.625453 update_engine[2673]: I20250214 00:43:09.625084 2673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 14 00:43:09.625453 update_engine[2673]: I20250214 00:43:09.625324 2673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 14 00:43:09.625969 update_engine[2673]: E20250214 00:43:09.625942 2673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 14 00:43:09.626012 update_engine[2673]: I20250214 00:43:09.625995 2673 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 14 00:43:19.624060 update_engine[2673]: I20250214 00:43:19.623945 2673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 14 00:43:19.624783 update_engine[2673]: I20250214 00:43:19.624619 2673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 14 00:43:19.624861 update_engine[2673]: I20250214 00:43:19.624831 2673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 14 00:43:19.625416 update_engine[2673]: E20250214 00:43:19.625393 2673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 14 00:43:19.625441 update_engine[2673]: I20250214 00:43:19.625432 2673 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 14 00:43:19.625461 update_engine[2673]: I20250214 00:43:19.625440 2673 omaha_request_action.cc:617] Omaha request response: Feb 14 00:43:19.625524 update_engine[2673]: E20250214 00:43:19.625510 2673 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 14 00:43:19.625549 update_engine[2673]: I20250214 00:43:19.625527 2673 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 14 00:43:19.625549 update_engine[2673]: I20250214 00:43:19.625533 2673 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 14 00:43:19.625549 update_engine[2673]: I20250214 00:43:19.625537 2673 update_attempter.cc:306] Processing Done. Feb 14 00:43:19.625606 update_engine[2673]: E20250214 00:43:19.625550 2673 update_attempter.cc:619] Update failed. Feb 14 00:43:19.625606 update_engine[2673]: I20250214 00:43:19.625555 2673 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 14 00:43:19.625606 update_engine[2673]: I20250214 00:43:19.625560 2673 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 14 00:43:19.625606 update_engine[2673]: I20250214 00:43:19.625566 2673 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 14 00:43:19.625683 update_engine[2673]: I20250214 00:43:19.625627 2673 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 14 00:43:19.625683 update_engine[2673]: I20250214 00:43:19.625646 2673 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 14 00:43:19.625683 update_engine[2673]: I20250214 00:43:19.625651 2673 omaha_request_action.cc:272] Request: Feb 14 00:43:19.625683 update_engine[2673]: I20250214 00:43:19.625658 2673 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 14 00:43:19.625875 update_engine[2673]: I20250214 00:43:19.625778 2673 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 14 00:43:19.626024 update_engine[2673]: I20250214 00:43:19.625929 2673 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 14 00:43:19.626162 locksmithd[2700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 14 00:43:19.626626 update_engine[2673]: E20250214 00:43:19.626607 2673 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 14 00:43:19.626654 update_engine[2673]: I20250214 00:43:19.626642 2673 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 14 00:43:19.626675 update_engine[2673]: I20250214 00:43:19.626650 2673 omaha_request_action.cc:617] Omaha request response: Feb 14 00:43:19.626675 update_engine[2673]: I20250214 00:43:19.626657 2673 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 14 00:43:19.626675 update_engine[2673]: I20250214 00:43:19.626662 2673 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 14 00:43:19.626675 update_engine[2673]: I20250214 00:43:19.626672 2673 update_attempter.cc:306] Processing Done. Feb 14 00:43:19.626811 update_engine[2673]: I20250214 00:43:19.626677 2673 update_attempter.cc:310] Error event sent. Feb 14 00:43:19.626811 update_engine[2673]: I20250214 00:43:19.626685 2673 update_check_scheduler.cc:74] Next update check in 45m49s Feb 14 00:43:19.626878 locksmithd[2700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 14 00:44:54.298092 systemd[1]: Started sshd@16-147.28.162.217:22-218.92.0.212:59090.service - OpenSSH per-connection server daemon (218.92.0.212:59090). Feb 14 00:44:54.527216 sshd[9735]: Unable to negotiate with 218.92.0.212 port 59090: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1,diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha1 [preauth] Feb 14 00:44:54.528623 systemd[1]: sshd@16-147.28.162.217:22-218.92.0.212:59090.service: Deactivated successfully. Feb 14 00:45:34.474094 systemd[1]: Started sshd@17-147.28.162.217:22-194.0.234.38:46872.service - OpenSSH per-connection server daemon (194.0.234.38:46872). Feb 14 00:45:36.126200 sshd[9851]: Invalid user admin from 194.0.234.38 port 46872 Feb 14 00:45:36.320748 sshd[9851]: Connection closed by invalid user admin 194.0.234.38 port 46872 [preauth] Feb 14 00:45:36.322737 systemd[1]: sshd@17-147.28.162.217:22-194.0.234.38:46872.service: Deactivated successfully. Feb 14 00:48:38.798184 systemd[1]: Started sshd@18-147.28.162.217:22-139.178.68.195:58206.service - OpenSSH per-connection server daemon (139.178.68.195:58206). Feb 14 00:48:39.210506 sshd[10334]: Accepted publickey for core from 139.178.68.195 port 58206 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:39.211668 sshd[10334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:39.215050 systemd-logind[2664]: New session 12 of user core. Feb 14 00:48:39.227833 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 14 00:48:39.573088 sshd[10334]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:39.575991 systemd[1]: sshd@18-147.28.162.217:22-139.178.68.195:58206.service: Deactivated successfully. Feb 14 00:48:39.577715 systemd[1]: session-12.scope: Deactivated successfully. Feb 14 00:48:39.578246 systemd-logind[2664]: Session 12 logged out. Waiting for processes to exit. Feb 14 00:48:39.578840 systemd-logind[2664]: Removed session 12. 
Feb 14 00:48:44.646206 systemd[1]: Started sshd@19-147.28.162.217:22-139.178.68.195:58218.service - OpenSSH per-connection server daemon (139.178.68.195:58218). Feb 14 00:48:45.048321 sshd[10395]: Accepted publickey for core from 139.178.68.195 port 58218 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:45.049307 sshd[10395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:45.052239 systemd-logind[2664]: New session 13 of user core. Feb 14 00:48:45.065837 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 14 00:48:45.396850 sshd[10395]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:45.399786 systemd[1]: sshd@19-147.28.162.217:22-139.178.68.195:58218.service: Deactivated successfully. Feb 14 00:48:45.401470 systemd[1]: session-13.scope: Deactivated successfully. Feb 14 00:48:45.401998 systemd-logind[2664]: Session 13 logged out. Waiting for processes to exit. Feb 14 00:48:45.402589 systemd-logind[2664]: Removed session 13. Feb 14 00:48:45.474103 systemd[1]: Started sshd@20-147.28.162.217:22-139.178.68.195:58226.service - OpenSSH per-connection server daemon (139.178.68.195:58226). Feb 14 00:48:45.887910 sshd[10434]: Accepted publickey for core from 139.178.68.195 port 58226 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:45.888993 sshd[10434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:45.891750 systemd-logind[2664]: New session 14 of user core. Feb 14 00:48:45.901825 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 14 00:48:46.273946 sshd[10434]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:46.276829 systemd[1]: sshd@20-147.28.162.217:22-139.178.68.195:58226.service: Deactivated successfully. Feb 14 00:48:46.279354 systemd[1]: session-14.scope: Deactivated successfully. Feb 14 00:48:46.279860 systemd-logind[2664]: Session 14 logged out. Waiting for processes to exit. Feb 14 00:48:46.280442 systemd-logind[2664]: Removed session 14. Feb 14 00:48:46.350982 systemd[1]: Started sshd@21-147.28.162.217:22-139.178.68.195:58240.service - OpenSSH per-connection server daemon (139.178.68.195:58240). Feb 14 00:48:46.763298 sshd[10473]: Accepted publickey for core from 139.178.68.195 port 58240 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:46.764398 sshd[10473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:46.767295 systemd-logind[2664]: New session 15 of user core. Feb 14 00:48:46.780902 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 14 00:48:47.119502 sshd[10473]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:47.122353 systemd[1]: sshd@21-147.28.162.217:22-139.178.68.195:58240.service: Deactivated successfully. Feb 14 00:48:47.123967 systemd[1]: session-15.scope: Deactivated successfully. Feb 14 00:48:47.124479 systemd-logind[2664]: Session 15 logged out. Waiting for processes to exit. Feb 14 00:48:47.125070 systemd-logind[2664]: Removed session 15. Feb 14 00:48:52.196108 systemd[1]: Started sshd@22-147.28.162.217:22-139.178.68.195:49832.service - OpenSSH per-connection server daemon (139.178.68.195:49832). 
Feb 14 00:48:52.610297 sshd[10514]: Accepted publickey for core from 139.178.68.195 port 49832 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:52.611313 sshd[10514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:52.614258 systemd-logind[2664]: New session 16 of user core. Feb 14 00:48:52.623836 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 14 00:48:52.972414 sshd[10514]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:52.975700 systemd[1]: sshd@22-147.28.162.217:22-139.178.68.195:49832.service: Deactivated successfully. Feb 14 00:48:52.978147 systemd[1]: session-16.scope: Deactivated successfully. Feb 14 00:48:52.978670 systemd-logind[2664]: Session 16 logged out. Waiting for processes to exit. Feb 14 00:48:52.979260 systemd-logind[2664]: Removed session 16. Feb 14 00:48:58.044120 systemd[1]: Started sshd@23-147.28.162.217:22-139.178.68.195:56840.service - OpenSSH per-connection server daemon (139.178.68.195:56840). Feb 14 00:48:58.443530 sshd[10552]: Accepted publickey for core from 139.178.68.195 port 56840 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:48:58.444660 sshd[10552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:48:58.447715 systemd-logind[2664]: New session 17 of user core. Feb 14 00:48:58.462838 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 14 00:48:58.793428 sshd[10552]: pam_unix(sshd:session): session closed for user core Feb 14 00:48:58.796605 systemd[1]: sshd@23-147.28.162.217:22-139.178.68.195:56840.service: Deactivated successfully. Feb 14 00:48:58.798278 systemd[1]: session-17.scope: Deactivated successfully. Feb 14 00:48:58.798779 systemd-logind[2664]: Session 17 logged out. Waiting for processes to exit. Feb 14 00:48:58.799373 systemd-logind[2664]: Removed session 17. Feb 14 00:49:03.871108 systemd[1]: Started sshd@24-147.28.162.217:22-139.178.68.195:56852.service - OpenSSH per-connection server daemon (139.178.68.195:56852). Feb 14 00:49:04.273282 sshd[10588]: Accepted publickey for core from 139.178.68.195 port 56852 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:04.274309 sshd[10588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:04.277216 systemd-logind[2664]: New session 18 of user core. Feb 14 00:49:04.292849 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 14 00:49:04.622205 sshd[10588]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:04.625117 systemd[1]: sshd@24-147.28.162.217:22-139.178.68.195:56852.service: Deactivated successfully. Feb 14 00:49:04.627326 systemd[1]: session-18.scope: Deactivated successfully. Feb 14 00:49:04.627840 systemd-logind[2664]: Session 18 logged out. Waiting for processes to exit. Feb 14 00:49:04.628416 systemd-logind[2664]: Removed session 18. Feb 14 00:49:04.701087 systemd[1]: Started sshd@25-147.28.162.217:22-139.178.68.195:56856.service - OpenSSH per-connection server daemon (139.178.68.195:56856). Feb 14 00:49:05.117443 sshd[10623]: Accepted publickey for core from 139.178.68.195 port 56856 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:05.118594 sshd[10623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:05.121767 systemd-logind[2664]: New session 19 of user core. Feb 14 00:49:05.132843 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 14 00:49:05.497842 sshd[10623]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:05.500727 systemd[1]: sshd@25-147.28.162.217:22-139.178.68.195:56856.service: Deactivated successfully. Feb 14 00:49:05.502368 systemd[1]: session-19.scope: Deactivated successfully. Feb 14 00:49:05.503370 systemd-logind[2664]: Session 19 logged out. Waiting for processes to exit. Feb 14 00:49:05.504024 systemd-logind[2664]: Removed session 19. Feb 14 00:49:05.577036 systemd[1]: Started sshd@26-147.28.162.217:22-139.178.68.195:56866.service - OpenSSH per-connection server daemon (139.178.68.195:56866). Feb 14 00:49:05.993394 sshd[10653]: Accepted publickey for core from 139.178.68.195 port 56866 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:05.994627 sshd[10653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:06.002660 systemd-logind[2664]: New session 20 of user core. Feb 14 00:49:06.003767 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 14 00:49:06.685911 sshd[10653]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:06.688772 systemd[1]: sshd@26-147.28.162.217:22-139.178.68.195:56866.service: Deactivated successfully. Feb 14 00:49:06.690913 systemd[1]: session-20.scope: Deactivated successfully. Feb 14 00:49:06.691434 systemd-logind[2664]: Session 20 logged out. Waiting for processes to exit. Feb 14 00:49:06.692030 systemd-logind[2664]: Removed session 20. Feb 14 00:49:06.759983 systemd[1]: Started sshd@27-147.28.162.217:22-139.178.68.195:51272.service - OpenSSH per-connection server daemon (139.178.68.195:51272). Feb 14 00:49:07.177226 sshd[10743]: Accepted publickey for core from 139.178.68.195 port 51272 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:07.178334 sshd[10743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:07.181210 systemd-logind[2664]: New session 21 of user core. Feb 14 00:49:07.194842 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 14 00:49:07.624552 sshd[10743]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:07.627481 systemd[1]: sshd@27-147.28.162.217:22-139.178.68.195:51272.service: Deactivated successfully. Feb 14 00:49:07.629147 systemd[1]: session-21.scope: Deactivated successfully. Feb 14 00:49:07.629682 systemd-logind[2664]: Session 21 logged out. Waiting for processes to exit. Feb 14 00:49:07.630282 systemd-logind[2664]: Removed session 21. Feb 14 00:49:07.694991 systemd[1]: Started sshd@28-147.28.162.217:22-139.178.68.195:51286.service - OpenSSH per-connection server daemon (139.178.68.195:51286). Feb 14 00:49:08.094316 sshd[10791]: Accepted publickey for core from 139.178.68.195 port 51286 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:08.095416 sshd[10791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:08.098480 systemd-logind[2664]: New session 22 of user core. Feb 14 00:49:08.108898 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 14 00:49:08.444342 sshd[10791]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:08.447236 systemd[1]: sshd@28-147.28.162.217:22-139.178.68.195:51286.service: Deactivated successfully. Feb 14 00:49:08.448886 systemd[1]: session-22.scope: Deactivated successfully. Feb 14 00:49:08.449373 systemd-logind[2664]: Session 22 logged out. Waiting for processes to exit. 
Feb 14 00:49:08.449958 systemd-logind[2664]: Removed session 22. Feb 14 00:49:13.522074 systemd[1]: Started sshd@29-147.28.162.217:22-139.178.68.195:51294.service - OpenSSH per-connection server daemon (139.178.68.195:51294). Feb 14 00:49:13.935900 sshd[10871]: Accepted publickey for core from 139.178.68.195 port 51294 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:13.937049 sshd[10871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:13.939894 systemd-logind[2664]: New session 23 of user core. Feb 14 00:49:13.951875 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 14 00:49:14.293146 sshd[10871]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:14.296073 systemd[1]: sshd@29-147.28.162.217:22-139.178.68.195:51294.service: Deactivated successfully. Feb 14 00:49:14.297820 systemd[1]: session-23.scope: Deactivated successfully. Feb 14 00:49:14.298345 systemd-logind[2664]: Session 23 logged out. Waiting for processes to exit. Feb 14 00:49:14.298949 systemd-logind[2664]: Removed session 23. Feb 14 00:49:19.376131 systemd[1]: Started sshd@30-147.28.162.217:22-139.178.68.195:48848.service - OpenSSH per-connection server daemon (139.178.68.195:48848). Feb 14 00:49:19.801949 sshd[10907]: Accepted publickey for core from 139.178.68.195 port 48848 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:19.802989 sshd[10907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:19.806194 systemd-logind[2664]: New session 24 of user core. Feb 14 00:49:19.821830 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 14 00:49:20.166354 sshd[10907]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:20.169257 systemd[1]: sshd@30-147.28.162.217:22-139.178.68.195:48848.service: Deactivated successfully. Feb 14 00:49:20.170878 systemd[1]: session-24.scope: Deactivated successfully. Feb 14 00:49:20.171355 systemd-logind[2664]: Session 24 logged out. Waiting for processes to exit. Feb 14 00:49:20.171927 systemd-logind[2664]: Removed session 24. Feb 14 00:49:25.235205 systemd[1]: Started sshd@31-147.28.162.217:22-139.178.68.195:48854.service - OpenSSH per-connection server daemon (139.178.68.195:48854). Feb 14 00:49:25.639063 sshd[10944]: Accepted publickey for core from 139.178.68.195 port 48854 ssh2: RSA SHA256:hUosanXmuMGpan2OiCLLwKwqn0pYVRUAIim0PHrNtMI Feb 14 00:49:25.640158 sshd[10944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 14 00:49:25.643111 systemd-logind[2664]: New session 25 of user core. Feb 14 00:49:25.651826 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 14 00:49:25.987385 sshd[10944]: pam_unix(sshd:session): session closed for user core Feb 14 00:49:25.990207 systemd[1]: sshd@31-147.28.162.217:22-139.178.68.195:48854.service: Deactivated successfully. Feb 14 00:49:25.991829 systemd[1]: session-25.scope: Deactivated successfully. Feb 14 00:49:25.992344 systemd-logind[2664]: Session 25 logged out. Waiting for processes to exit. Feb 14 00:49:25.992950 systemd-logind[2664]: Removed session 25.