Dec 13 14:38:09.172529 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Dec 13 14:38:09.172551 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 14:38:09.172560 kernel: KASLR enabled
Dec 13 14:38:09.172565 kernel: efi: EFI v2.7 by American Megatrends
Dec 13 14:38:09.172571 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea451818 RNG=0xebf10018 MEMRESERVE=0xe4633f98
Dec 13 14:38:09.172576 kernel: random: crng init done
Dec 13 14:38:09.172583 kernel: secureboot: Secure boot disabled
Dec 13 14:38:09.172588 kernel: esrt: Reserving ESRT space from 0x00000000ea451818 to 0x00000000ea451878.
Dec 13 14:38:09.172596 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:38:09.172602 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Dec 13 14:38:09.172608 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Dec 13 14:38:09.172613 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Dec 13 14:38:09.172619 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Dec 13 14:38:09.172625 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Dec 13 14:38:09.172633 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Dec 13 14:38:09.172639 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Dec 13 14:38:09.172645 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Dec 13 14:38:09.172651 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Dec 13 14:38:09.172657 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Dec 13 14:38:09.172663 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Dec 13 14:38:09.172669 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Dec 13 14:38:09.172675 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Dec 13 14:38:09.172681 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Dec 13 14:38:09.172687 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Dec 13 14:38:09.172695 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Dec 13 14:38:09.172701 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Dec 13 14:38:09.172749 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Dec 13 14:38:09.172757 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Dec 13 14:38:09.172763 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Dec 13 14:38:09.172769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Dec 13 14:38:09.172775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Dec 13 14:38:09.172781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Dec 13 14:38:09.172787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Dec 13 14:38:09.172793 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff]
Dec 13 14:38:09.172799 kernel: Zone ranges:
Dec 13 14:38:09.172807 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
Dec 13 14:38:09.172813 kernel: DMA32 empty
Dec 13 14:38:09.172819 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
Dec 13 14:38:09.172825 kernel: Movable zone start for each node
Dec 13 14:38:09.172831 kernel: Early memory node ranges
Dec 13 14:38:09.172840 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
Dec 13 14:38:09.172846 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
Dec 13 14:38:09.172854 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
Dec 13 14:38:09.172861 kernel: node 0: [mem 0x0000000094000000-0x00000000eba31fff]
Dec 13 14:38:09.172867 kernel: node 0: [mem 0x00000000eba32000-0x00000000ebea8fff]
Dec 13 14:38:09.172873 kernel: node 0: [mem 0x00000000ebea9000-0x00000000ebeaefff]
Dec 13 14:38:09.172880 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff]
Dec 13 14:38:09.172886 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Dec 13 14:38:09.172892 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
Dec 13 14:38:09.172898 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Dec 13 14:38:09.172905 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Dec 13 14:38:09.172911 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
Dec 13 14:38:09.172919 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
Dec 13 14:38:09.172925 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
Dec 13 14:38:09.172931 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Dec 13 14:38:09.172938 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Dec 13 14:38:09.172944 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Dec 13 14:38:09.172951 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Dec 13 14:38:09.172957 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
Dec 13 14:38:09.172963 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
Dec 13 14:38:09.172970 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Dec 13 14:38:09.172976 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Dec 13 14:38:09.172983 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Dec 13 14:38:09.172990 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:38:09.172997 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:38:09.173003 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:38:09.173009 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 14:38:09.173016 kernel: psci: SMC Calling Convention v1.2
Dec 13 14:38:09.173022 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 13 14:38:09.173028 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
Dec 13 14:38:09.173035 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
Dec 13 14:38:09.173041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
Dec 13 14:38:09.173047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
Dec 13 14:38:09.173053 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
Dec 13 14:38:09.173060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
Dec 13 14:38:09.173067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
Dec 13 14:38:09.173074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
Dec 13 14:38:09.173080 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
Dec 13 14:38:09.173087 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
Dec 13 14:38:09.173093 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
Dec 13 14:38:09.173099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
Dec 13 14:38:09.173105 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
Dec 13 14:38:09.173112 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
Dec 13 14:38:09.173118 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
Dec 13 14:38:09.173124 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
Dec 13 14:38:09.173131 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
Dec 13 14:38:09.173137 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
Dec 13 14:38:09.173145 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
Dec 13 14:38:09.173151 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
Dec 13 14:38:09.173157 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
Dec 13 14:38:09.173164 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
Dec 13 14:38:09.173170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
Dec 13 14:38:09.173176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
Dec 13 14:38:09.173182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
Dec 13 14:38:09.173189 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
Dec 13 14:38:09.173195 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
Dec 13 14:38:09.173201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
Dec 13 14:38:09.173208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
Dec 13 14:38:09.173215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
Dec 13 14:38:09.173221 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
Dec 13 14:38:09.173228 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
Dec 13 14:38:09.173234 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
Dec 13 14:38:09.173241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
Dec 13 14:38:09.173247 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
Dec 13 14:38:09.173253 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
Dec 13 14:38:09.173260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
Dec 13 14:38:09.173266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
Dec 13 14:38:09.173273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
Dec 13 14:38:09.173279 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
Dec 13 14:38:09.173285 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
Dec 13 14:38:09.173293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
Dec 13 14:38:09.173299 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
Dec 13 14:38:09.173306 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
Dec 13 14:38:09.173312 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
Dec 13 14:38:09.173318 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
Dec 13 14:38:09.173325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
Dec 13 14:38:09.173331 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
Dec 13 14:38:09.173338 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
Dec 13 14:38:09.173350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
Dec 13 14:38:09.173357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
Dec 13 14:38:09.173365 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
Dec 13 14:38:09.173372 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
Dec 13 14:38:09.173378 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
Dec 13 14:38:09.173385 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
Dec 13 14:38:09.173392 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
Dec 13 14:38:09.173399 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
Dec 13 14:38:09.173407 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
Dec 13 14:38:09.173413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
Dec 13 14:38:09.173420 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
Dec 13 14:38:09.173427 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
Dec 13 14:38:09.173434 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
Dec 13 14:38:09.173441 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
Dec 13 14:38:09.173448 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
Dec 13 14:38:09.173454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
Dec 13 14:38:09.173461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
Dec 13 14:38:09.173468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
Dec 13 14:38:09.173475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
Dec 13 14:38:09.173481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
Dec 13 14:38:09.173489 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
Dec 13 14:38:09.173496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
Dec 13 14:38:09.173503 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
Dec 13 14:38:09.173510 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
Dec 13 14:38:09.173517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
Dec 13 14:38:09.173523 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
Dec 13 14:38:09.173530 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
Dec 13 14:38:09.173537 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
Dec 13 14:38:09.173544 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
Dec 13 14:38:09.173550 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
Dec 13 14:38:09.173557 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 14:38:09.173565 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 14:38:09.173572 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
Dec 13 14:38:09.173579 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
Dec 13 14:38:09.173586 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
Dec 13 14:38:09.173593 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Dec 13 14:38:09.173600 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Dec 13 14:38:09.173607 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Dec 13 14:38:09.173614 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Dec 13 14:38:09.173620 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Dec 13 14:38:09.173627 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Dec 13 14:38:09.173634 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Dec 13 14:38:09.173642 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:38:09.173648 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:38:09.173655 kernel: CPU features: detected: Virtualization Host Extensions
Dec 13 14:38:09.173662 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:38:09.173669 kernel: CPU features: detected: Spectre-v4
Dec 13 14:38:09.173675 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:38:09.173682 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:38:09.173689 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:38:09.173696 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:38:09.173702 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:38:09.173712 kernel: alternatives: applying boot alternatives
Dec 13 14:38:09.173720 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 14:38:09.173729 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:38:09.173736 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 14:38:09.173743 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Dec 13 14:38:09.173749 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 14:38:09.173756 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 14:38:09.173763 kernel: printk: early log buf free: 249864(95%)
Dec 13 14:38:09.173770 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Dec 13 14:38:09.173777 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Dec 13 14:38:09.173783 kernel: Fallback order for Node 0: 0
Dec 13 14:38:09.173790 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
Dec 13 14:38:09.173799 kernel: Policy zone: Normal
Dec 13 14:38:09.173806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:38:09.173812 kernel: software IO TLB: area num 128.
Dec 13 14:38:09.173819 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
Dec 13 14:38:09.173826 kernel: Memory: 262921880K/268174336K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 5252456K reserved, 0K cma-reserved)
Dec 13 14:38:09.173833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Dec 13 14:38:09.173840 kernel: trace event string verifier disabled
Dec 13 14:38:09.173847 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:38:09.173854 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:38:09.173861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Dec 13 14:38:09.173868 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:38:09.173875 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:38:09.173884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:38:09.173891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Dec 13 14:38:09.173897 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:38:09.173904 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Dec 13 14:38:09.173911 kernel: GICv3: 672 SPIs implemented
Dec 13 14:38:09.173917 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:38:09.173924 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:38:09.173931 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 14:38:09.173938 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Dec 13 14:38:09.173944 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Dec 13 14:38:09.173951 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Dec 13 14:38:09.173958 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Dec 13 14:38:09.173966 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Dec 13 14:38:09.173972 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Dec 13 14:38:09.173979 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Dec 13 14:38:09.173986 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Dec 13 14:38:09.173992 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Dec 13 14:38:09.173999 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Dec 13 14:38:09.174006 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174013 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174020 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Dec 13 14:38:09.174027 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174034 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174042 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Dec 13 14:38:09.174049 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174056 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174063 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Dec 13 14:38:09.174070 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174076 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174083 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Dec 13 14:38:09.174090 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174097 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174104 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Dec 13 14:38:09.174110 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174119 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174125 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Dec 13 14:38:09.174132 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174139 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174146 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Dec 13 14:38:09.174153 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:38:09.174160 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
Dec 13 14:38:09.174166 kernel: GICv3: using LPI property table @0x00000800003e0000
Dec 13 14:38:09.174173 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
Dec 13 14:38:09.174180 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 14:38:09.174187 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174195 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Dec 13 14:38:09.174202 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Dec 13 14:38:09.174209 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:38:09.174216 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:38:09.174223 kernel: Console: colour dummy device 80x25
Dec 13 14:38:09.174230 kernel: printk: console [tty0] enabled
Dec 13 14:38:09.174237 kernel: ACPI: Core revision 20230628
Dec 13 14:38:09.174245 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:38:09.174252 kernel: pid_max: default: 81920 minimum: 640
Dec 13 14:38:09.174259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 14:38:09.174267 kernel: landlock: Up and running.
Dec 13 14:38:09.174274 kernel: SELinux: Initializing.
Dec 13 14:38:09.174281 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:38:09.174288 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:38:09.174295 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Dec 13 14:38:09.174302 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Dec 13 14:38:09.174309 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:38:09.174316 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 14:38:09.174323 kernel: Platform MSI: ITS@0x100100040000 domain created
Dec 13 14:38:09.174332 kernel: Platform MSI: ITS@0x100100060000 domain created
Dec 13 14:38:09.174339 kernel: Platform MSI: ITS@0x100100080000 domain created
Dec 13 14:38:09.174346 kernel: Platform MSI: ITS@0x1001000a0000 domain created
Dec 13 14:38:09.174352 kernel: Platform MSI: ITS@0x1001000c0000 domain created
Dec 13 14:38:09.174359 kernel: Platform MSI: ITS@0x1001000e0000 domain created
Dec 13 14:38:09.174366 kernel: Platform MSI: ITS@0x100100100000 domain created
Dec 13 14:38:09.174373 kernel: Platform MSI: ITS@0x100100120000 domain created
Dec 13 14:38:09.174380 kernel: PCI/MSI: ITS@0x100100040000 domain created
Dec 13 14:38:09.174387 kernel: PCI/MSI: ITS@0x100100060000 domain created
Dec 13 14:38:09.174395 kernel: PCI/MSI: ITS@0x100100080000 domain created
Dec 13 14:38:09.174402 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
Dec 13 14:38:09.174409 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
Dec 13 14:38:09.174415 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
Dec 13 14:38:09.174422 kernel: PCI/MSI: ITS@0x100100100000 domain created
Dec 13 14:38:09.174429 kernel: PCI/MSI: ITS@0x100100120000 domain created
Dec 13 14:38:09.174436 kernel: Remapping and enabling EFI services.
Dec 13 14:38:09.174443 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:38:09.174450 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:38:09.174457 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
Dec 13 14:38:09.174465 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
Dec 13 14:38:09.174472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174479 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
Dec 13 14:38:09.174486 kernel: Detected PIPT I-cache on CPU2
Dec 13 14:38:09.174493 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
Dec 13 14:38:09.174500 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
Dec 13 14:38:09.174507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174514 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
Dec 13 14:38:09.174520 kernel: Detected PIPT I-cache on CPU3
Dec 13 14:38:09.174529 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
Dec 13 14:38:09.174536 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
Dec 13 14:38:09.174543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174550 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
Dec 13 14:38:09.174556 kernel: Detected PIPT I-cache on CPU4
Dec 13 14:38:09.174563 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
Dec 13 14:38:09.174570 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
Dec 13 14:38:09.174577 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174584 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
Dec 13 14:38:09.174591 kernel: Detected PIPT I-cache on CPU5
Dec 13 14:38:09.174599 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
Dec 13 14:38:09.174606 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
Dec 13 14:38:09.174613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174620 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
Dec 13 14:38:09.174627 kernel: Detected PIPT I-cache on CPU6
Dec 13 14:38:09.174634 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
Dec 13 14:38:09.174641 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
Dec 13 14:38:09.174648 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174655 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
Dec 13 14:38:09.174663 kernel: Detected PIPT I-cache on CPU7
Dec 13 14:38:09.174670 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
Dec 13 14:38:09.174677 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
Dec 13 14:38:09.174684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174690 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
Dec 13 14:38:09.174697 kernel: Detected PIPT I-cache on CPU8
Dec 13 14:38:09.174704 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
Dec 13 14:38:09.174713 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
Dec 13 14:38:09.174720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174727 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
Dec 13 14:38:09.174736 kernel: Detected PIPT I-cache on CPU9
Dec 13 14:38:09.174743 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
Dec 13 14:38:09.174750 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
Dec 13 14:38:09.174757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174764 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
Dec 13 14:38:09.174770 kernel: Detected PIPT I-cache on CPU10
Dec 13 14:38:09.174777 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
Dec 13 14:38:09.174785 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
Dec 13 14:38:09.174792 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174798 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
Dec 13 14:38:09.174807 kernel: Detected PIPT I-cache on CPU11
Dec 13 14:38:09.174814 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
Dec 13 14:38:09.174821 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
Dec 13 14:38:09.174828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174835 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
Dec 13 14:38:09.174842 kernel: Detected PIPT I-cache on CPU12
Dec 13 14:38:09.174849 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
Dec 13 14:38:09.174856 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
Dec 13 14:38:09.174863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174872 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
Dec 13 14:38:09.174879 kernel: Detected PIPT I-cache on CPU13
Dec 13 14:38:09.174886 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
Dec 13 14:38:09.174893 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
Dec 13 14:38:09.174900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174907 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
Dec 13 14:38:09.174914 kernel: Detected PIPT I-cache on CPU14
Dec 13 14:38:09.174921 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
Dec 13 14:38:09.174928 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
Dec 13 14:38:09.174936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174943 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
Dec 13 14:38:09.174950 kernel: Detected PIPT I-cache on CPU15
Dec 13 14:38:09.174957 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
Dec 13 14:38:09.174964 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
Dec 13 14:38:09.174971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.174978 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
Dec 13 14:38:09.174985 kernel: Detected PIPT I-cache on CPU16
Dec 13 14:38:09.174992 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
Dec 13 14:38:09.175008 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
Dec 13 14:38:09.175017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175025 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
Dec 13 14:38:09.175032 kernel: Detected PIPT I-cache on CPU17
Dec 13 14:38:09.175039 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
Dec 13 14:38:09.175046 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
Dec 13 14:38:09.175054 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175061 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
Dec 13 14:38:09.175068 kernel: Detected PIPT I-cache on CPU18
Dec 13 14:38:09.175075 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
Dec 13 14:38:09.175084 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
Dec 13 14:38:09.175091 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175098 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
Dec 13 14:38:09.175106 kernel: Detected PIPT I-cache on CPU19
Dec 13 14:38:09.175113 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
Dec 13 14:38:09.175121 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
Dec 13 14:38:09.175130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175138 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
Dec 13 14:38:09.175145 kernel: Detected PIPT I-cache on CPU20
Dec 13 14:38:09.175152 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
Dec 13 14:38:09.175159 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
Dec 13 14:38:09.175167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175174 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
Dec 13 14:38:09.175181 kernel: Detected PIPT I-cache on CPU21
Dec 13 14:38:09.175189 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
Dec 13 14:38:09.175197 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
Dec 13 14:38:09.175205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175212 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
Dec 13 14:38:09.175219 kernel: Detected PIPT I-cache on CPU22
Dec 13 14:38:09.175227 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
Dec 13 14:38:09.175234 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
Dec 13 14:38:09.175241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175248 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
Dec 13 14:38:09.175256 kernel: Detected PIPT I-cache on CPU23
Dec 13 14:38:09.175263 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
Dec 13 14:38:09.175271 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
Dec 13 14:38:09.175279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175286 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
Dec 13 14:38:09.175293 kernel: Detected PIPT I-cache on CPU24
Dec 13 14:38:09.175300 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
Dec 13 14:38:09.175308 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
Dec 13 14:38:09.175315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175322 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
Dec 13 14:38:09.175329 kernel: Detected PIPT I-cache on CPU25
Dec 13 14:38:09.175338 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
Dec 13 14:38:09.175345 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
Dec 13 14:38:09.175353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175360 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
Dec 13 14:38:09.175367 kernel: Detected PIPT I-cache on CPU26
Dec 13 14:38:09.175374 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
Dec 13 14:38:09.175382 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
Dec 13 14:38:09.175389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175396 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
Dec 13 14:38:09.175403 kernel: Detected PIPT I-cache on CPU27
Dec 13 14:38:09.175412 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
Dec 13 14:38:09.175419 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
Dec 13 14:38:09.175426 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:38:09.175435 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
Dec 13
14:38:09.175444 kernel: Detected PIPT I-cache on CPU28 Dec 13 14:38:09.175451 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Dec 13 14:38:09.175458 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Dec 13 14:38:09.175466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175473 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Dec 13 14:38:09.175481 kernel: Detected PIPT I-cache on CPU29 Dec 13 14:38:09.175489 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Dec 13 14:38:09.175496 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Dec 13 14:38:09.175504 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175511 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Dec 13 14:38:09.175518 kernel: Detected PIPT I-cache on CPU30 Dec 13 14:38:09.175526 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Dec 13 14:38:09.175533 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Dec 13 14:38:09.175540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175547 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Dec 13 14:38:09.175556 kernel: Detected PIPT I-cache on CPU31 Dec 13 14:38:09.175563 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Dec 13 14:38:09.175570 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Dec 13 14:38:09.175578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175585 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Dec 13 14:38:09.175592 kernel: Detected PIPT I-cache on CPU32 Dec 13 14:38:09.175599 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Dec 13 14:38:09.175607 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 Dec 13 14:38:09.175614 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175623 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Dec 13 14:38:09.175630 kernel: Detected PIPT I-cache on CPU33 Dec 13 14:38:09.175637 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Dec 13 14:38:09.175645 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Dec 13 14:38:09.175652 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175659 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Dec 13 14:38:09.175666 kernel: Detected PIPT I-cache on CPU34 Dec 13 14:38:09.175674 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Dec 13 14:38:09.175681 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Dec 13 14:38:09.175689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175697 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Dec 13 14:38:09.175704 kernel: Detected PIPT I-cache on CPU35 Dec 13 14:38:09.175713 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Dec 13 14:38:09.175721 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Dec 13 14:38:09.175728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175736 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Dec 13 14:38:09.175743 kernel: Detected PIPT I-cache on CPU36 Dec 13 14:38:09.175750 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Dec 13 14:38:09.175757 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Dec 13 14:38:09.175766 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175774 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Dec 
13 14:38:09.175781 kernel: Detected PIPT I-cache on CPU37 Dec 13 14:38:09.175788 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Dec 13 14:38:09.175796 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Dec 13 14:38:09.175803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175810 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Dec 13 14:38:09.175817 kernel: Detected PIPT I-cache on CPU38 Dec 13 14:38:09.175825 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Dec 13 14:38:09.175833 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Dec 13 14:38:09.175841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175848 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Dec 13 14:38:09.175855 kernel: Detected PIPT I-cache on CPU39 Dec 13 14:38:09.175862 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Dec 13 14:38:09.175870 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Dec 13 14:38:09.175877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175884 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Dec 13 14:38:09.175893 kernel: Detected PIPT I-cache on CPU40 Dec 13 14:38:09.175900 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Dec 13 14:38:09.175907 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Dec 13 14:38:09.175915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175922 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Dec 13 14:38:09.175929 kernel: Detected PIPT I-cache on CPU41 Dec 13 14:38:09.175937 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Dec 13 14:38:09.175945 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 Dec 13 14:38:09.175952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175960 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Dec 13 14:38:09.175968 kernel: Detected PIPT I-cache on CPU42 Dec 13 14:38:09.175976 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Dec 13 14:38:09.175983 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Dec 13 14:38:09.175990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175998 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Dec 13 14:38:09.176005 kernel: Detected PIPT I-cache on CPU43 Dec 13 14:38:09.176012 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Dec 13 14:38:09.176019 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Dec 13 14:38:09.176027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176035 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Dec 13 14:38:09.176043 kernel: Detected PIPT I-cache on CPU44 Dec 13 14:38:09.176050 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Dec 13 14:38:09.176057 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Dec 13 14:38:09.176064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176071 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Dec 13 14:38:09.176078 kernel: Detected PIPT I-cache on CPU45 Dec 13 14:38:09.176086 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Dec 13 14:38:09.176093 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Dec 13 14:38:09.176102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176109 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
Dec 13 14:38:09.176117 kernel: Detected PIPT I-cache on CPU46 Dec 13 14:38:09.176124 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Dec 13 14:38:09.176131 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Dec 13 14:38:09.176138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176145 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Dec 13 14:38:09.176153 kernel: Detected PIPT I-cache on CPU47 Dec 13 14:38:09.176160 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Dec 13 14:38:09.176167 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Dec 13 14:38:09.176176 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176183 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Dec 13 14:38:09.176191 kernel: Detected PIPT I-cache on CPU48 Dec 13 14:38:09.176198 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Dec 13 14:38:09.176205 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Dec 13 14:38:09.176213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176220 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Dec 13 14:38:09.176227 kernel: Detected PIPT I-cache on CPU49 Dec 13 14:38:09.176234 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Dec 13 14:38:09.176243 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Dec 13 14:38:09.176250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176257 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Dec 13 14:38:09.176265 kernel: Detected PIPT I-cache on CPU50 Dec 13 14:38:09.176272 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Dec 13 14:38:09.176279 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 Dec 13 14:38:09.176286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176294 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Dec 13 14:38:09.176301 kernel: Detected PIPT I-cache on CPU51 Dec 13 14:38:09.176308 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Dec 13 14:38:09.176317 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Dec 13 14:38:09.176324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176331 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Dec 13 14:38:09.176339 kernel: Detected PIPT I-cache on CPU52 Dec 13 14:38:09.176346 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Dec 13 14:38:09.176353 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Dec 13 14:38:09.176360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176368 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Dec 13 14:38:09.176375 kernel: Detected PIPT I-cache on CPU53 Dec 13 14:38:09.176385 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Dec 13 14:38:09.176392 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Dec 13 14:38:09.176399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176406 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Dec 13 14:38:09.176414 kernel: Detected PIPT I-cache on CPU54 Dec 13 14:38:09.176421 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Dec 13 14:38:09.176428 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Dec 13 14:38:09.176435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176442 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] Dec 13 14:38:09.176450 kernel: Detected PIPT I-cache on CPU55 Dec 13 14:38:09.176459 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Dec 13 14:38:09.176466 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Dec 13 14:38:09.176473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176481 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Dec 13 14:38:09.176488 kernel: Detected PIPT I-cache on CPU56 Dec 13 14:38:09.176495 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Dec 13 14:38:09.176503 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Dec 13 14:38:09.176510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176517 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Dec 13 14:38:09.176526 kernel: Detected PIPT I-cache on CPU57 Dec 13 14:38:09.176533 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Dec 13 14:38:09.176540 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Dec 13 14:38:09.176548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176555 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Dec 13 14:38:09.176562 kernel: Detected PIPT I-cache on CPU58 Dec 13 14:38:09.176570 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Dec 13 14:38:09.176577 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Dec 13 14:38:09.176584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176591 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Dec 13 14:38:09.176600 kernel: Detected PIPT I-cache on CPU59 Dec 13 14:38:09.176607 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Dec 13 14:38:09.176615 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 Dec 13 14:38:09.176622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176629 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Dec 13 14:38:09.176636 kernel: Detected PIPT I-cache on CPU60 Dec 13 14:38:09.176644 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Dec 13 14:38:09.176651 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Dec 13 14:38:09.176658 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176667 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Dec 13 14:38:09.176674 kernel: Detected PIPT I-cache on CPU61 Dec 13 14:38:09.176681 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Dec 13 14:38:09.176689 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Dec 13 14:38:09.176696 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176703 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Dec 13 14:38:09.176733 kernel: Detected PIPT I-cache on CPU62 Dec 13 14:38:09.176741 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Dec 13 14:38:09.176749 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Dec 13 14:38:09.176758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176765 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Dec 13 14:38:09.176772 kernel: Detected PIPT I-cache on CPU63 Dec 13 14:38:09.176780 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Dec 13 14:38:09.176787 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Dec 13 14:38:09.176794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176802 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] Dec 13 14:38:09.176809 kernel: Detected PIPT I-cache on CPU64 Dec 13 14:38:09.176816 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Dec 13 14:38:09.176824 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Dec 13 14:38:09.176833 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176840 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Dec 13 14:38:09.176847 kernel: Detected PIPT I-cache on CPU65 Dec 13 14:38:09.176855 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Dec 13 14:38:09.176862 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Dec 13 14:38:09.176870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176877 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Dec 13 14:38:09.176884 kernel: Detected PIPT I-cache on CPU66 Dec 13 14:38:09.176891 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Dec 13 14:38:09.176900 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Dec 13 14:38:09.176908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176915 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Dec 13 14:38:09.176923 kernel: Detected PIPT I-cache on CPU67 Dec 13 14:38:09.176930 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Dec 13 14:38:09.176937 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Dec 13 14:38:09.176945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176952 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Dec 13 14:38:09.176959 kernel: Detected PIPT I-cache on CPU68 Dec 13 14:38:09.176966 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Dec 13 14:38:09.176975 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 Dec 13 14:38:09.176982 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176989 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Dec 13 14:38:09.176997 kernel: Detected PIPT I-cache on CPU69 Dec 13 14:38:09.177004 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Dec 13 14:38:09.177012 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Dec 13 14:38:09.177019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177026 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Dec 13 14:38:09.177033 kernel: Detected PIPT I-cache on CPU70 Dec 13 14:38:09.177042 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Dec 13 14:38:09.177049 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Dec 13 14:38:09.177057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177064 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Dec 13 14:38:09.177071 kernel: Detected PIPT I-cache on CPU71 Dec 13 14:38:09.177078 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Dec 13 14:38:09.177086 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Dec 13 14:38:09.177093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177100 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Dec 13 14:38:09.177107 kernel: Detected PIPT I-cache on CPU72 Dec 13 14:38:09.177116 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Dec 13 14:38:09.177123 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Dec 13 14:38:09.177131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177138 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Dec 13 14:38:09.177145 kernel: Detected PIPT I-cache on CPU73 Dec 13 14:38:09.177153 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Dec 13 14:38:09.177160 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Dec 13 14:38:09.177167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177174 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Dec 13 14:38:09.177183 kernel: Detected PIPT I-cache on CPU74 Dec 13 14:38:09.177190 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Dec 13 14:38:09.177198 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Dec 13 14:38:09.177205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177212 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Dec 13 14:38:09.177219 kernel: Detected PIPT I-cache on CPU75 Dec 13 14:38:09.177227 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Dec 13 14:38:09.177234 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Dec 13 14:38:09.177241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177249 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Dec 13 14:38:09.177257 kernel: Detected PIPT I-cache on CPU76 Dec 13 14:38:09.177264 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Dec 13 14:38:09.177272 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Dec 13 14:38:09.177279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177286 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Dec 13 14:38:09.177293 kernel: Detected PIPT I-cache on CPU77 Dec 13 14:38:09.177301 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Dec 13 14:38:09.177308 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Dec 13 14:38:09.177316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177324 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Dec 13 14:38:09.177331 kernel: Detected PIPT I-cache on CPU78 Dec 13 14:38:09.177339 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Dec 13 14:38:09.177346 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Dec 13 14:38:09.177353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177361 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Dec 13 14:38:09.177368 kernel: Detected PIPT I-cache on CPU79 Dec 13 14:38:09.177375 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Dec 13 14:38:09.177382 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Dec 13 14:38:09.177391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177399 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Dec 13 14:38:09.177406 kernel: smp: Brought up 1 node, 80 CPUs Dec 13 14:38:09.177413 kernel: SMP: Total of 80 processors activated. 
Dec 13 14:38:09.177420 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:38:09.177428 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:38:09.177435 kernel: CPU features: detected: Common not Private translations
Dec 13 14:38:09.177442 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:38:09.177450 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 14:38:09.177457 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:38:09.177466 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:38:09.177473 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:38:09.177481 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:38:09.177488 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:38:09.177495 kernel: CPU: All CPU(s) started at EL2
Dec 13 14:38:09.177502 kernel: alternatives: applying system-wide alternatives
Dec 13 14:38:09.177509 kernel: devtmpfs: initialized
Dec 13 14:38:09.177517 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:38:09.177524 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Dec 13 14:38:09.177533 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:38:09.177541 kernel: SMBIOS 3.4.0 present.
Dec 13 14:38:09.177548 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
Dec 13 14:38:09.177555 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:38:09.177563 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:38:09.177570 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:38:09.177578 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:38:09.177585 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:38:09.177593 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
Dec 13 14:38:09.177601 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:38:09.177609 kernel: cpuidle: using governor menu
Dec 13 14:38:09.177616 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:38:09.177623 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:38:09.177631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:38:09.177638 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:38:09.177646 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 14:38:09.177653 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 14:38:09.177660 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 14:38:09.177669 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:38:09.177676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 14:38:09.177683 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:38:09.177691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 14:38:09.177698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:38:09.177705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 14:38:09.177715 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:38:09.177723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 14:38:09.177730 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:38:09.177739 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:38:09.177746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:38:09.177753 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:38:09.177761 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
Dec 13 14:38:09.177768 kernel: ACPI: Interpreter enabled
Dec 13 14:38:09.177775 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:38:09.177783 kernel: ACPI: MCFG table detected, 8 entries
Dec 13 14:38:09.177790 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177797 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177806 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177814 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177821 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177828 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177836 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177843 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
Dec 13 14:38:09.177850 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
Dec 13 14:38:09.177858 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:38:09.177865 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
Dec 13 14:38:09.177874 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
Dec 13 14:38:09.178007 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.178077 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.178139 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.178200 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.178260 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.178321 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.178331 kernel: PCI host bridge to bus 000d:00
Dec 13 14:38:09.178405 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
Dec 13 14:38:09.178463 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
Dec 13 14:38:09.178520 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.178599 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
Dec 13 14:38:09.178674 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
Dec 13 14:38:09.178746 kernel: pci 000d:00:01.0: enabling Extended Tags
Dec 13 14:38:09.178812 kernel: pci 000d:00:01.0: supports D1 D2
Dec 13 14:38:09.178875 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.178950 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
Dec 13 14:38:09.179014 kernel: pci 000d:00:02.0: supports D1 D2
Dec 13 14:38:09.179079 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.179155 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
Dec 13 14:38:09.179218 kernel: pci 000d:00:03.0: supports D1 D2
Dec 13 14:38:09.179285 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.179355 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
Dec 13 14:38:09.179419 kernel: pci 000d:00:04.0: supports D1 D2
Dec 13 14:38:09.179481 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.179491 kernel: acpiphp: Slot [1] registered
Dec 13 14:38:09.179500 kernel: acpiphp: Slot [2] registered
Dec 13 14:38:09.179508 kernel: acpiphp: Slot [3] registered
Dec 13 14:38:09.179515 kernel: acpiphp: Slot [4] registered
Dec 13 14:38:09.179571 kernel: pci_bus 000d:00: on NUMA node 0
Dec 13 14:38:09.179633 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.179697 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.179766 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.179831 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.179897 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.179963 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.180028 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.180093 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.180156 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.180222 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.180285 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.180351 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.180416 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
Dec 13 14:38:09.180479 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
Dec 13
14:38:09.180543 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] Dec 13 14:38:09.180605 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.180668 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Dec 13 14:38:09.180735 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.180800 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Dec 13 14:38:09.180863 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.180926 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.180988 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181051 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181113 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181176 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181239 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181303 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181365 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181427 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181490 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181552 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181616 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181678 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181746 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181812 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181875 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] Dec 13 14:38:09.181938 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.182002 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Dec 13 14:38:09.182064 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 14:38:09.182127 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Dec 13 14:38:09.182190 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Dec 13 14:38:09.182255 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.182320 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Dec 13 14:38:09.182383 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Dec 13 14:38:09.182447 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.182510 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Dec 13 14:38:09.182573 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Dec 13 14:38:09.182638 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.182699 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Dec 13 14:38:09.182758 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Dec 13 14:38:09.182829 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Dec 13 14:38:09.182888 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 14:38:09.182957 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Dec 13 14:38:09.183020 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.183097 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Dec 13 14:38:09.183156 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.183222 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Dec 13 
14:38:09.183281 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.183290 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Dec 13 14:38:09.183361 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.183424 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.183485 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Dec 13 14:38:09.183546 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.183606 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.183667 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Dec 13 14:38:09.183677 kernel: PCI host bridge to bus 0000:00 Dec 13 14:38:09.183747 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Dec 13 14:38:09.183804 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 14:38:09.183860 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:38:09.183932 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 14:38:09.184003 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 14:38:09.184067 kernel: pci 0000:00:01.0: enabling Extended Tags Dec 13 14:38:09.184130 kernel: pci 0000:00:01.0: supports D1 D2 Dec 13 14:38:09.184197 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184269 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 14:38:09.184333 kernel: pci 0000:00:02.0: supports D1 D2 Dec 13 14:38:09.184397 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184466 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 14:38:09.184530 kernel: pci 0000:00:03.0: supports D1 D2 Dec 13 14:38:09.184593 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184668 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 14:38:09.184734 kernel: pci 0000:00:04.0: supports D1 D2 Dec 13 14:38:09.184798 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184808 kernel: acpiphp: Slot [1-1] registered Dec 13 14:38:09.184815 kernel: acpiphp: Slot [2-1] registered Dec 13 14:38:09.184822 kernel: acpiphp: Slot [3-1] registered Dec 13 14:38:09.184830 kernel: acpiphp: Slot [4-1] registered Dec 13 14:38:09.184884 kernel: pci_bus 0000:00: on NUMA node 0 Dec 13 14:38:09.184950 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.185012 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.185076 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.185139 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 14:38:09.185203 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.185265 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.185328 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 14:38:09.185393 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.185456 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.185519 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 14:38:09.185582 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 Dec 13 14:38:09.185645 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.185711 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Dec 13 14:38:09.185775 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.185840 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Dec 13 14:38:09.185904 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.185966 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Dec 13 14:38:09.186029 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.186091 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Dec 13 14:38:09.186154 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.186216 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186280 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186344 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186408 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186471 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186534 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186596 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186659 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186725 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186787 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186854 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186916 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186979 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.187041 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.187105 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.187167 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.187230 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.187293 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Dec 13 14:38:09.187359 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.187422 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Dec 13 14:38:09.187484 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Dec 13 14:38:09.187548 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.187610 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Dec 13 14:38:09.187675 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Dec 13 14:38:09.187742 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.187806 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Dec 13 14:38:09.187868 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Dec 13 14:38:09.187933 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.187992 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Dec 13 14:38:09.188049 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 14:38:09.188118 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Dec 13 14:38:09.188178 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.188245 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Dec 13 
14:38:09.188305 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.188381 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Dec 13 14:38:09.188442 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.188506 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Dec 13 14:38:09.188566 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.188576 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Dec 13 14:38:09.188646 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.188711 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.188776 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Dec 13 14:38:09.188837 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.188898 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.188958 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Dec 13 14:38:09.188968 kernel: PCI host bridge to bus 0005:00 Dec 13 14:38:09.189032 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Dec 13 14:38:09.189091 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 14:38:09.189148 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Dec 13 14:38:09.189219 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 14:38:09.189292 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 14:38:09.189356 kernel: pci 0005:00:01.0: supports D1 D2 Dec 13 14:38:09.189419 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189489 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
Dec 13 14:38:09.189554 kernel: pci 0005:00:03.0: supports D1 D2 Dec 13 14:38:09.189619 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189688 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 14:38:09.189758 kernel: pci 0005:00:05.0: supports D1 D2 Dec 13 14:38:09.189820 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189891 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 14:38:09.189956 kernel: pci 0005:00:07.0: supports D1 D2 Dec 13 14:38:09.190024 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.190034 kernel: acpiphp: Slot [1-2] registered Dec 13 14:38:09.190041 kernel: acpiphp: Slot [2-2] registered Dec 13 14:38:09.190113 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Dec 13 14:38:09.190181 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Dec 13 14:38:09.190245 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Dec 13 14:38:09.190320 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Dec 13 14:38:09.190391 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Dec 13 14:38:09.190457 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Dec 13 14:38:09.190517 kernel: pci_bus 0005:00: on NUMA node 0 Dec 13 14:38:09.190581 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.190647 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.190713 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.190781 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 14:38:09.190846 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.190910 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.190972 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 14:38:09.191036 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.191099 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 14:38:09.191165 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 14:38:09.191249 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.191319 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Dec 13 14:38:09.191383 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Dec 13 14:38:09.191448 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.191511 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Dec 13 14:38:09.191573 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.191636 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Dec 13 14:38:09.191717 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.191787 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Dec 13 14:38:09.191851 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.191915 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.191980 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192043 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 
14:38:09.192105 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192168 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192230 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192296 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192358 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192422 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192485 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192547 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192610 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192741 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192804 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192871 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192934 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.192999 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Dec 13 14:38:09.193061 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.193126 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Dec 13 14:38:09.193189 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Dec 13 14:38:09.193254 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.193324 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Dec 13 14:38:09.193391 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Dec 13 14:38:09.193454 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Dec 13 14:38:09.193518 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Dec 13 14:38:09.193582 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.193648 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Dec 13 14:38:09.193730 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Dec 13 14:38:09.193797 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Dec 13 14:38:09.193861 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Dec 13 14:38:09.193924 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.193984 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Dec 13 14:38:09.194040 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 14:38:09.194111 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Dec 13 14:38:09.194173 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.194250 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Dec 13 14:38:09.194310 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.194376 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Dec 13 14:38:09.194438 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.194506 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Dec 13 14:38:09.194567 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.194577 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Dec 13 14:38:09.194646 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.194711 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.194774 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability] Dec 13 14:38:09.194836 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.194900 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.194961 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Dec 13 14:38:09.194972 kernel: PCI host bridge to bus 0003:00 Dec 13 14:38:09.195035 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Dec 13 14:38:09.195092 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Dec 13 14:38:09.195148 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Dec 13 14:38:09.195221 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 14:38:09.195296 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 14:38:09.195361 kernel: pci 0003:00:01.0: supports D1 D2 Dec 13 14:38:09.195439 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.195511 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 14:38:09.195574 kernel: pci 0003:00:03.0: supports D1 D2 Dec 13 14:38:09.195637 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.195712 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 14:38:09.195785 kernel: pci 0003:00:05.0: supports D1 D2 Dec 13 14:38:09.195851 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.195861 kernel: acpiphp: Slot [1-3] registered Dec 13 14:38:09.195868 kernel: acpiphp: Slot [2-3] registered Dec 13 14:38:09.195944 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Dec 13 14:38:09.196013 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Dec 13 14:38:09.196080 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Dec 13 14:38:09.196148 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Dec 13 14:38:09.196213 kernel: pci 
0003:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 14:38:09.196277 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Dec 13 14:38:09.196342 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 14:38:09.196407 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Dec 13 14:38:09.196473 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 14:38:09.196538 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Dec 13 14:38:09.196613 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Dec 13 14:38:09.196682 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Dec 13 14:38:09.196750 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Dec 13 14:38:09.196816 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Dec 13 14:38:09.196880 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Dec 13 14:38:09.196946 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Dec 13 14:38:09.197011 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 14:38:09.197075 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Dec 13 14:38:09.197141 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 14:38:09.197201 kernel: pci_bus 0003:00: on NUMA node 0 Dec 13 14:38:09.197265 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.197328 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.197391 kernel: pci 
0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.197454 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.197517 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.197582 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.197646 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
Dec 13 14:38:09.197712 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
Dec 13 14:38:09.197787 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.197853 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.197917 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.197979 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.198045 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.198108 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.198171 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198234 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198300 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198364 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198426 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198490 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198554 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198618 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198680 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198747 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198810 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198874 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198936 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.199000 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.199067 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.199131 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
Dec 13 14:38:09.199194 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.199260 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.199328 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
Dec 13 14:38:09.199394 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
Dec 13 14:38:09.199462 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
Dec 13 14:38:09.199527 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
Dec 13 14:38:09.199593 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
Dec 13 14:38:09.199658 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
Dec 13 14:38:09.199727 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
Dec 13 14:38:09.199793 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
Dec 13 14:38:09.199858 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.199926 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.199991 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200056 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200121 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200186 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200251 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200316 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200379 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
Dec 13 14:38:09.200444 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.200508 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.200566 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 14:38:09.200623 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
Dec 13 14:38:09.200680 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
Dec 13 14:38:09.200763 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.200828 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.200898 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.200957 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.201025 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.201084 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.201095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
Dec 13 14:38:09.201166 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.201230 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.201293 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.201354 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.201415 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.201477 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.201487 kernel: PCI host bridge to bus 000c:00
Dec 13 14:38:09.201551 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
Dec 13 14:38:09.201611 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
Dec 13 14:38:09.201667 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.201742 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
Dec 13 14:38:09.201815 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
Dec 13 14:38:09.201880 kernel: pci 000c:00:01.0: enabling Extended Tags
Dec 13 14:38:09.201943 kernel: pci 000c:00:01.0: supports D1 D2
Dec 13 14:38:09.202009 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202083 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
Dec 13 14:38:09.202147 kernel: pci 000c:00:02.0: supports D1 D2
Dec 13 14:38:09.202210 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202282 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
Dec 13 14:38:09.202345 kernel: pci 000c:00:03.0: supports D1 D2
Dec 13 14:38:09.202409 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202479 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
Dec 13 14:38:09.202545 kernel: pci 000c:00:04.0: supports D1 D2
Dec 13 14:38:09.202609 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202619 kernel: acpiphp: Slot [1-4] registered
Dec 13 14:38:09.202627 kernel: acpiphp: Slot [2-4] registered
Dec 13 14:38:09.202635 kernel: acpiphp: Slot [3-2] registered
Dec 13 14:38:09.202643 kernel: acpiphp: Slot [4-2] registered
Dec 13 14:38:09.202699 kernel: pci_bus 000c:00: on NUMA node 0
Dec 13 14:38:09.202771 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.202839 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.202903 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.202966 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.203030 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.203092 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.203157 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.203220 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.203286 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.203350 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.203414 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.203476 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.203540 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.203604 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.203669 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.203737 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.203799 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.203863 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.203925 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.203989 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.204052 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204115 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204179 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204243 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204306 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204370 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204434 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204498 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204562 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204626 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204689 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204757 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204820 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204883 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204947 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.205010 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.205074 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.205137 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.205201 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.205266 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
Dec 13 14:38:09.205330 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.205393 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.205457 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
Dec 13 14:38:09.205519 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.205583 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.205648 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
Dec 13 14:38:09.205714 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.205780 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.205839 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
Dec 13 14:38:09.205896 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
Dec 13 14:38:09.205965 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.206027 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.206102 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.206162 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.206227 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.206287 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.206354 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.206413 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.206426 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
Dec 13 14:38:09.206496 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.206559 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.206621 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.206683 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.206749 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.206810 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.206822 kernel: PCI host bridge to bus 0002:00
Dec 13 14:38:09.206889 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
Dec 13 14:38:09.206946 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
Dec 13 14:38:09.207003 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.207074 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
Dec 13 14:38:09.207145 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
Dec 13 14:38:09.207211 kernel: pci 0002:00:01.0: supports D1 D2
Dec 13 14:38:09.207275 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207347 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
Dec 13 14:38:09.207411 kernel: pci 0002:00:03.0: supports D1 D2
Dec 13 14:38:09.207475 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207547 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
Dec 13 14:38:09.207610 kernel: pci 0002:00:05.0: supports D1 D2
Dec 13 14:38:09.207676 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207751 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
Dec 13 14:38:09.207814 kernel: pci 0002:00:07.0: supports D1 D2
Dec 13 14:38:09.207878 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207888 kernel: acpiphp: Slot [1-5] registered
Dec 13 14:38:09.207896 kernel: acpiphp: Slot [2-5] registered
Dec 13 14:38:09.207903 kernel: acpiphp: Slot [3-3] registered
Dec 13 14:38:09.207911 kernel: acpiphp: Slot [4-3] registered
Dec 13 14:38:09.207968 kernel: pci_bus 0002:00: on NUMA node 0
Dec 13 14:38:09.208034 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.208097 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.208161 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.208226 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.208292 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.208355 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.208419 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.208482 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.208545 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.208609 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.208674 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.208740 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.208804 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.208867 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.208933 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.208999 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.209062 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.209126 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.209193 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.209256 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.209321 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209384 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209448 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209512 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209577 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209644 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209712 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209776 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209840 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209902 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209966 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210029 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210093 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210155 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210219 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210284 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210346 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.210408 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.210473 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.210549 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
Dec 13 14:38:09.210614 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.210677 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.210977 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
Dec 13 14:38:09.211046 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.211115 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.211179 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
Dec 13 14:38:09.211242 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.211306 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.211369 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
Dec 13 14:38:09.211425 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
Dec 13 14:38:09.211495 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.211554 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.211620 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.211678 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.211762 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.211820 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.211886 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.211944 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.211954 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
Dec 13 14:38:09.212022 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.212085 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.212145 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.212205 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.212264 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.212324 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
Dec 13 14:38:09.212335 kernel: PCI host bridge to bus 0001:00
Dec 13 14:38:09.212396 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
Dec 13 14:38:09.212455 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
Dec 13 14:38:09.212511 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.212582 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
Dec 13 14:38:09.212652 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
Dec 13 14:38:09.212719 kernel: pci 0001:00:01.0: enabling Extended Tags
Dec 13 14:38:09.212782 kernel: pci 0001:00:01.0: supports D1 D2
Dec 13 14:38:09.212849 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.212919 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
Dec 13 14:38:09.212984 kernel: pci 0001:00:02.0: supports D1 D2
Dec 13 14:38:09.213046 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213117 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
Dec 13 14:38:09.213181 kernel: pci 0001:00:03.0: supports D1 D2
Dec 13 14:38:09.213244 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213315 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
Dec 13 14:38:09.213378 kernel: pci 0001:00:04.0: supports D1 D2
Dec 13 14:38:09.213442 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213452 kernel: acpiphp: Slot [1-6] registered
Dec 13 14:38:09.213521 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 14:38:09.213597 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.213662 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
Dec 13 14:38:09.213736 kernel: pci 0001:01:00.0: PME# supported from D3cold
Dec 13 14:38:09.213801 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:09.213874 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 14:38:09.213940 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
Dec 13 14:38:09.214004 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
Dec 13 14:38:09.214068 kernel: pci 0001:01:00.1: PME# supported from D3cold
Dec 13 14:38:09.214079 kernel: acpiphp: Slot [2-6] registered
Dec 13 14:38:09.214086 kernel: acpiphp: Slot [3-4] registered
Dec 13 14:38:09.214096 kernel: acpiphp: Slot [4-4] registered
Dec 13 14:38:09.214152 kernel: pci_bus 0001:00: on NUMA node 0
Dec 13 14:38:09.214216 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.214279 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.214343 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.214405 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.214468 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.214533 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.214596 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.214659 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.214883 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.214955 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.215019 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.215081 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.215147 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.215209 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.215272 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.215333 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.215395 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.215457 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.215519 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215580 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215646 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215711 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215775 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215837 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215900 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215961 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216024 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216085 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216149 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216211 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216273 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216334 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216397 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216461 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216526 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
Dec 13 14:38:09.216593 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.216656 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
Dec 13 14:38:09.216726 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
Dec 13 14:38:09.216788 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.216850 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.216912 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.216975 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
Dec 13 14:38:09.217037 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.217102 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.217164 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
Dec 13 14:38:09.217227 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.217289 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.217352 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
Dec 13 14:38:09.217414 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.217479 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.217538 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
Dec 13 14:38:09.217594 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
Dec 13 14:38:09.217670 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.217733 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.217801 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.217861 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.217927 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.217986 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.218051 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.218109 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.218119 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
Dec 13 14:38:09.218188 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.218253 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.218313 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.218373 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.218432 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.218492 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
Dec 13 14:38:09.218502 kernel: PCI host bridge to bus 0004:00
Dec 13 14:38:09.218564 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
Dec 13 14:38:09.218623 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
Dec 13 14:38:09.218678 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.218752 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
Dec 13 14:38:09.218823 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
Dec 13 14:38:09.218887 kernel: pci 0004:00:01.0: supports D1 D2
Dec 13 14:38:09.218949 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219019 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
Dec 13 14:38:09.219085 kernel: pci 0004:00:03.0: supports D1 D2
Dec 13 14:38:09.219148 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219218 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
Dec 13 14:38:09.219281 kernel: pci 0004:00:05.0: supports D1 D2
Dec 13 14:38:09.219344 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219414 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 14:38:09.219482 kernel: pci 0004:01:00.0: enabling Extended Tags
Dec 13 14:38:09.219545 kernel: pci 0004:01:00.0: supports D1 D2
Dec 13 14:38:09.219609 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:38:09.219687 kernel: pci_bus 0004:02: extended config space not accessible
Dec 13 14:38:09.219766 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 14:38:09.219834 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
Dec 13 14:38:09.219901 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
Dec 13 14:38:09.219970 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
Dec 13 14:38:09.220036 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
Dec 13 14:38:09.220102 kernel: pci 0004:02:00.0: supports D1 D2
Dec 13 14:38:09.220169 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:38:09.220241 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
Dec 13 14:38:09.220306 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
Dec 13 14:38:09.220370 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 14:38:09.220426 kernel: pci_bus 0004:00: on NUMA node 0
Dec 13 14:38:09.220490 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
Dec 13 14:38:09.220553 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.220616 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.220679 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:38:09.220745 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.220807 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.220870 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.220935 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.220998 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
Dec 13 14:38:09.221061 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
Dec 13 14:38:09.221123 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
Dec 13 14:38:09.221185 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
Dec 13 14:38:09.221247 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
Dec 13 14:38:09.221311 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.221374 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.221437 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.221499 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.221561 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.221623 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.221685 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.221753 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.221816 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.221881 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.221944 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.222005 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.222070 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.222134 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.222197 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.222264 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
Dec 13 14:38:09.222331 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
Dec 13 14:38:09.222397 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
Dec 13 14:38:09.222465 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
Dec 13 14:38:09.222530 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
Dec 13 14:38:09.222593 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.222656 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
Dec 13 14:38:09.222721 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.222785 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
Dec 13 14:38:09.222850 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
Dec 13 14:38:09.222913 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
Dec 13 14:38:09.222977 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
Dec 13 14:38:09.223040 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
Dec 13 14:38:09.223102 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
Dec 13 14:38:09.223164 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
Dec 13 14:38:09.223226 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
Dec 13 14:38:09.223283 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 14:38:09.223344 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
Dec 13 14:38:09.223400 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
Dec 13 14:38:09.223468 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.223527 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
Dec 13 14:38:09.223589 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
Dec 13 14:38:09.223654 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
Dec 13 14:38:09.223719 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
Dec 13 14:38:09.223784 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
Dec 13 14:38:09.223843 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
Dec 13 14:38:09.223853 kernel: iommu: Default domain type: Translated
Dec 13 14:38:09.223861 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:38:09.223869 kernel: efivars: Registered efivars operations
Dec 13 14:38:09.223934 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device
Dec 13 14:38:09.224001 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
Dec 13 14:38:09.224070 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
Dec 13 14:38:09.224081 kernel: vgaarb: loaded
Dec 13 14:38:09.224089 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:38:09.224097 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:38:09.224105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:38:09.224113 kernel: pnp: PnP ACPI init
Dec 13 14:38:09.224180 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
Dec 13 14:38:09.224241 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
Dec 13 14:38:09.224298 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
Dec 13 14:38:09.224355 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
Dec 13 14:38:09.224411 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
Dec 13 14:38:09.224468 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
Dec 13 14:38:09.224525 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could
not be reserved Dec 13 14:38:09.224582 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Dec 13 14:38:09.224594 kernel: pnp: PnP ACPI: found 1 devices Dec 13 14:38:09.224603 kernel: NET: Registered PF_INET protocol family Dec 13 14:38:09.224611 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224619 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:38:09.224627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:38:09.224635 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:38:09.224643 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224651 kernel: TCP: Hash tables configured (established 524288 bind 65536) Dec 13 14:38:09.224658 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224668 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:38:09.224744 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Dec 13 14:38:09.224754 kernel: kvm [1]: IPA Size Limit: 48 bits Dec 13 14:38:09.224762 kernel: kvm [1]: GICv3: no GICV resource entry Dec 13 14:38:09.224770 kernel: kvm [1]: disabling GICv2 emulation Dec 13 14:38:09.224778 kernel: kvm [1]: GIC system register CPU interface enabled Dec 13 14:38:09.224786 kernel: kvm [1]: vgic interrupt IRQ9 Dec 13 14:38:09.224793 kernel: kvm [1]: VHE mode initialized successfully Dec 13 14:38:09.224803 kernel: Initialise system trusted keyrings Dec 13 14:38:09.224810 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Dec 13 14:38:09.224818 kernel: Key type asymmetric registered Dec 13 14:38:09.224825 kernel: Asymmetric key parser 'x509' registered Dec 13 14:38:09.224833 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Dec 13 14:38:09.224841 kernel: io scheduler mq-deadline registered Dec 13 14:38:09.224849 kernel: io scheduler kyber registered Dec 13 14:38:09.224856 kernel: io scheduler bfq registered Dec 13 14:38:09.224864 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 14:38:09.224874 kernel: ACPI: button: Power Button [PWRB] Dec 13 14:38:09.224881 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). Dec 13 14:38:09.224889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:38:09.224960 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Dec 13 14:38:09.225020 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225079 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225137 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.225197 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Dec 13 14:38:09.225255 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Dec 13 14:38:09.225321 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Dec 13 14:38:09.225380 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225438 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225497 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.225554 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Dec 13 14:38:09.225614 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Dec 13 14:38:09.225680 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Dec 13 14:38:09.225872 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225934 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225993 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226051 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226112 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Dec 13 14:38:09.226178 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Dec 13 14:38:09.226237 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.226294 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.226352 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226410 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226467 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Dec 13 14:38:09.226543 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Dec 13 14:38:09.226602 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.226660 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.226722 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226780 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226839 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Dec 13 14:38:09.226909 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Dec 13 14:38:09.226968 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227028 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227086 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227144 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227202 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Dec 13 14:38:09.227267 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Dec 13 14:38:09.227328 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227386 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227445 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227503 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227561 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Dec 13 14:38:09.227625 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Dec 13 14:38:09.227686 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227748 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227806 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227864 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227921 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Dec 13 14:38:09.227932 kernel: thunder_xcv, ver 1.0 Dec 13 14:38:09.227940 kernel: thunder_bgx, ver 1.0 Dec 13 14:38:09.227949 kernel: nicpf, ver 1.0 Dec 13 14:38:09.227957 kernel: nicvf, ver 1.0 Dec 13 14:38:09.228022 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:38:09.228082 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:38:07 UTC (1734100687) Dec 13 14:38:09.228092 kernel: efifb: probing for efifb Dec 13 14:38:09.228100 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Dec 13 14:38:09.228108 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 13 14:38:09.228115 kernel: efifb: scrolling: redraw Dec 
13 14:38:09.228125 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:38:09.228132 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 14:38:09.228140 kernel: fb0: EFI VGA frame buffer device Dec 13 14:38:09.228148 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Dec 13 14:38:09.228156 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:38:09.228164 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 14:38:09.228172 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 14:38:09.228179 kernel: watchdog: Hard watchdog permanently disabled Dec 13 14:38:09.228187 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:38:09.228196 kernel: Segment Routing with IPv6 Dec 13 14:38:09.228204 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:38:09.228211 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:38:09.228219 kernel: Key type dns_resolver registered Dec 13 14:38:09.228227 kernel: registered taskstats version 1 Dec 13 14:38:09.228234 kernel: Loading compiled-in X.509 certificates Dec 13 14:38:09.228242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 14:38:09.228250 kernel: Key type .fscrypt registered Dec 13 14:38:09.228257 kernel: Key type fscrypt-provisioning registered Dec 13 14:38:09.228266 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:38:09.228274 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:38:09.228282 kernel: ima: No architecture policies found Dec 13 14:38:09.228289 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:38:09.228354 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Dec 13 14:38:09.228418 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228482 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Dec 13 14:38:09.228546 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228611 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Dec 13 14:38:09.228676 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228744 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Dec 13 14:38:09.228807 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228872 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Dec 13 14:38:09.228935 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Dec 13 14:38:09.228998 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Dec 13 14:38:09.229061 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229125 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Dec 13 14:38:09.229191 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229254 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Dec 13 14:38:09.229316 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229381 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Dec 13 14:38:09.229443 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229507 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Dec 13 14:38:09.229569 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229633 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Dec 13 14:38:09.229695 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229765 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 Dec 13 14:38:09.229827 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229891 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Dec 13 14:38:09.229954 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230018 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Dec 13 14:38:09.230079 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230144 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Dec 13 14:38:09.230206 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230272 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Dec 13 14:38:09.230336 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230399 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Dec 13 14:38:09.230462 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230526 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Dec 13 14:38:09.230589 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230652 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Dec 13 14:38:09.230719 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230786 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Dec 13 14:38:09.230849 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Dec 13 14:38:09.230912 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Dec 13 14:38:09.230974 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231037 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Dec 13 14:38:09.231100 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231163 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Dec 13 14:38:09.231226 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231291 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Dec 13 14:38:09.231353 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
Dec 13 14:38:09.231419 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Dec 13 14:38:09.231481 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231545 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Dec 13 14:38:09.231607 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231671 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Dec 13 14:38:09.231736 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231800 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Dec 13 14:38:09.231865 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Dec 13 14:38:09.231929 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Dec 13 14:38:09.231991 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Dec 13 14:38:09.232054 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Dec 13 14:38:09.232116 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Dec 13 14:38:09.232182 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Dec 13 14:38:09.232192 kernel: clk: Disabling unused clocks Dec 13 14:38:09.232202 kernel: Freeing unused kernel memory: 39936K Dec 13 14:38:09.232209 kernel: Run /init as init process Dec 13 14:38:09.232217 kernel: with arguments: Dec 13 14:38:09.232225 kernel: /init Dec 13 14:38:09.232232 kernel: with environment: Dec 13 14:38:09.232240 kernel: HOME=/ Dec 13 14:38:09.232247 kernel: TERM=linux Dec 13 14:38:09.232255 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:38:09.232265 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 14:38:09.232276 systemd[1]: Detected architecture arm64. Dec 13 14:38:09.232284 systemd[1]: Running in initrd. 
Dec 13 14:38:09.232292 systemd[1]: No hostname configured, using default hostname. Dec 13 14:38:09.232300 systemd[1]: Hostname set to . Dec 13 14:38:09.232307 systemd[1]: Initializing machine ID from random generator. Dec 13 14:38:09.232315 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:38:09.232324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 14:38:09.232333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 14:38:09.232342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 14:38:09.232350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 14:38:09.232358 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 14:38:09.232366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 14:38:09.232375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 14:38:09.232384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 14:38:09.232393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 14:38:09.232401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:38:09.232410 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:38:09.232418 systemd[1]: Reached target slices.target - Slice Units. Dec 13 14:38:09.232426 systemd[1]: Reached target swap.target - Swaps. Dec 13 14:38:09.232434 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:38:09.232442 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 14:38:09.232450 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 14:38:09.232458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 14:38:09.232468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 14:38:09.232476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 14:38:09.232484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 14:38:09.232492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 14:38:09.232500 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:38:09.232508 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 14:38:09.232516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 14:38:09.232524 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 14:38:09.232534 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:38:09.232542 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 14:38:09.232550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 14:38:09.232580 systemd-journald[900]: Collecting audit messages is disabled. Dec 13 14:38:09.232601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:38:09.232609 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 14:38:09.232618 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:38:09.232626 kernel: Bridge firewalling registered Dec 13 14:38:09.232634 systemd-journald[900]: Journal started Dec 13 14:38:09.232658 systemd-journald[900]: Runtime Journal (/run/log/journal/dd4de9db7be2427d914b4813423bb1c5) is 8.0M, max 4.0G, 3.9G free. 
Dec 13 14:38:09.191828 systemd-modules-load[902]: Inserted module 'overlay' Dec 13 14:38:09.271200 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 14:38:09.213724 systemd-modules-load[902]: Inserted module 'br_netfilter' Dec 13 14:38:09.277176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:38:09.287969 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:38:09.299308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 14:38:09.309934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:38:09.334868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:38:09.352134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:38:09.358726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 14:38:09.370164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 14:38:09.386133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:38:09.402373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:38:09.418784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 14:38:09.430047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:38:09.458823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 14:38:09.472210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 14:38:09.478672 dracut-cmdline[943]: dracut-dracut-053 Dec 13 14:38:09.491557 dracut-cmdline[943]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 14:38:09.485737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 14:38:09.501731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:38:09.507869 systemd-resolved[951]: Positive Trust Anchors: Dec 13 14:38:09.507880 systemd-resolved[951]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:38:09.507911 systemd-resolved[951]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:38:09.522658 systemd-resolved[951]: Defaulting to hostname 'linux'. Dec 13 14:38:09.536403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:38:09.653736 kernel: SCSI subsystem initialized Dec 13 14:38:09.555603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:38:09.669549 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 14:38:09.682714 kernel: iscsi: registered transport (tcp) Dec 13 14:38:09.710226 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:38:09.710251 kernel: QLogic iSCSI HBA Driver Dec 13 14:38:09.753770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 14:38:09.782831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 14:38:09.822721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:38:09.822751 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:38:09.837404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 14:38:09.902719 kernel: raid6: neonx8 gen() 15849 MB/s Dec 13 14:38:09.928717 kernel: raid6: neonx4 gen() 15877 MB/s Dec 13 14:38:09.953717 kernel: raid6: neonx2 gen() 13243 MB/s Dec 13 14:38:09.978717 kernel: raid6: neonx1 gen() 10579 MB/s Dec 13 14:38:10.003717 kernel: raid6: int64x8 gen() 6810 MB/s Dec 13 14:38:10.028717 kernel: raid6: int64x4 gen() 7366 MB/s Dec 13 14:38:10.053718 kernel: raid6: int64x2 gen() 6136 MB/s Dec 13 14:38:10.082277 kernel: raid6: int64x1 gen() 5077 MB/s Dec 13 14:38:10.082298 kernel: raid6: using algorithm neonx4 gen() 15877 MB/s Dec 13 14:38:10.117246 kernel: raid6: .... xor() 12410 MB/s, rmw enabled Dec 13 14:38:10.117267 kernel: raid6: using neon recovery algorithm Dec 13 14:38:10.136717 kernel: xor: measuring software checksum speed Dec 13 14:38:10.148621 kernel: 8regs : 20786 MB/sec Dec 13 14:38:10.148641 kernel: 32regs : 21704 MB/sec Dec 13 14:38:10.156421 kernel: arm64_neon : 28244 MB/sec Dec 13 14:38:10.164385 kernel: xor: using function: arm64_neon (28244 MB/sec) Dec 13 14:38:10.224717 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 14:38:10.234029 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 14:38:10.256922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 14:38:10.270060 systemd-udevd[1136]: Using default interface naming scheme 'v255'. Dec 13 14:38:10.273046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 14:38:10.286816 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 14:38:10.300819 dracut-pre-trigger[1148]: rd.md=0: removing MD RAID activation Dec 13 14:38:10.327095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 14:38:10.348878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 14:38:10.453316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 14:38:10.482268 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:38:10.482290 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:38:10.505079 kernel: ACPI: bus type USB registered Dec 13 14:38:10.505104 kernel: usbcore: registered new interface driver usbfs Dec 13 14:38:10.514903 kernel: usbcore: registered new interface driver hub Dec 13 14:38:10.514930 kernel: PTP clock support registered Dec 13 14:38:10.514949 kernel: usbcore: registered new device driver usb Dec 13 14:38:10.539852 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 14:38:10.549656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:38:10.710298 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 14:38:10.710315 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Dec 13 14:38:10.710324 kernel: igb 0003:03:00.0: Adding to iommu group 31
Dec 13 14:38:10.742894 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
Dec 13 14:38:11.024148 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Dec 13 14:38:11.024324 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 14:38:11.024447 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
Dec 13 14:38:11.024539 kernel: nvme 0005:03:00.0: Adding to iommu group 33
Dec 13 14:38:11.222412 kernel: igb 0003:03:00.0: added PHC on eth0
Dec 13 14:38:11.222504 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34
Dec 13 14:38:11.660898 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:38:11.660995 kernel: nvme 0005:04:00.0: Adding to iommu group 35
Dec 13 14:38:11.661079 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c4
Dec 13 14:38:11.661155 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
Dec 13 14:38:11.661233 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Dec 13 14:38:11.661314 kernel: igb 0003:03:00.1: Adding to iommu group 36
Dec 13 14:38:11.661396 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000410
Dec 13 14:38:11.661481 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Dec 13 14:38:11.661558 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 14:38:11.661634 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 14:38:11.661722 kernel: nvme nvme0: pci function 0005:03:00.0
Dec 13 14:38:11.661821 kernel: hub 1-0:1.0: USB hub found
Dec 13 14:38:11.661924 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
Dec 13 14:38:11.662003 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 14:38:11.662086 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:11.662163 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 14:38:11.662290 kernel: hub 2-0:1.0: USB hub found
Dec 13 14:38:11.662384 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
Dec 13 14:38:11.662460 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 14:38:11.662543 kernel: nvme nvme1: pci function 0005:04:00.0
Dec 13 14:38:11.662623 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
Dec 13 14:38:11.662693 kernel: nvme nvme0: 32/0/0 default/read/poll queues
Dec 13 14:38:11.662773 kernel: nvme nvme1: 32/0/0 default/read/poll queues
Dec 13 14:38:11.662845 kernel: igb 0003:03:00.1: added PHC on eth1
Dec 13 14:38:11.662923 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:38:11.663002 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c5
Dec 13 14:38:11.663077 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
Dec 13 14:38:11.663152 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Dec 13 14:38:11.663226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:38:11.663237 kernel: GPT:9289727 != 1875385007
Dec 13 14:38:11.663246 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:38:11.663256 kernel: GPT:9289727 != 1875385007
Dec 13 14:38:11.663265 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:38:11.663276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663285 kernel: igb 0003:03:00.1 eno2: renamed from eth1
Dec 13 14:38:11.663362 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1208)
Dec 13 14:38:11.663373 kernel: igb 0003:03:00.0 eno1: renamed from eth0
Dec 13 14:38:11.663450 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (1188)
Dec 13 14:38:11.663460 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
Dec 13 14:38:11.663584 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
Dec 13 14:38:11.663667 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663686 kernel: hub 1-3:1.0: USB hub found
Dec 13 14:38:11.663789 kernel: hub 1-3:1.0: 4 ports detected
Dec 13 14:38:11.663876 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
Dec 13 14:38:11.663999 kernel: hub 2-3:1.0: USB hub found
Dec 13 14:38:11.664095 kernel: hub 2-3:1.0: 4 ports detected
Dec 13 14:38:11.664180 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 14:38:11.664261 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
Dec 13 14:38:12.350536 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
Dec 13 14:38:12.350637 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:12.350733 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
Dec 13 14:38:12.350814 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 14:38:10.549750 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:38:12.381897 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
Dec 13 14:38:12.382015 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
Dec 13 14:38:10.704841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:38:12.401506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:10.716194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:38:10.716245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:38:10.722144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:38:10.738826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:38:12.423194 disk-uuid[1291]: Primary Header is updated.
Dec 13 14:38:12.423194 disk-uuid[1291]: Secondary Entries is updated.
Dec 13 14:38:12.423194 disk-uuid[1291]: Secondary Header is updated.
Dec 13 14:38:10.778177 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:38:12.443371 disk-uuid[1292]: The operation has completed successfully.
Dec 13 14:38:10.784650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:38:10.790382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:38:10.795591 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:38:10.800690 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:38:10.816833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:38:10.822485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 14:38:10.835905 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:38:11.053408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:38:11.244037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
Dec 13 14:38:11.314624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
Dec 13 14:38:11.323798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Dec 13 14:38:11.332323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Dec 13 14:38:11.347806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Dec 13 14:38:12.575212 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:38:11.365819 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 14:38:12.585926 sh[1480]: Success
Dec 13 14:38:12.492171 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:38:12.492288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 14:38:12.535874 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 14:38:12.595344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 14:38:12.616250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 14:38:12.624126 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 14:38:12.728374 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 14:38:12.728389 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:12.728399 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 14:38:12.728410 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 14:38:12.728419 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 14:38:12.728428 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 14:38:12.725019 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 14:38:12.734958 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 14:38:12.753863 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 14:38:12.760260 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 14:38:12.872223 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:12.872243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:12.872253 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:12.872263 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:12.872272 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:12.872282 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:12.868969 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 14:38:12.898827 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 14:38:12.909119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:38:12.925923 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 14:38:12.951684 systemd-networkd[1681]: lo: Link UP
Dec 13 14:38:12.951690 systemd-networkd[1681]: lo: Gained carrier
Dec 13 14:38:12.955672 systemd-networkd[1681]: Enumeration completed
Dec 13 14:38:12.955783 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 14:38:12.956950 systemd-networkd[1681]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.962271 systemd[1]: Reached target network.target - Network.
Dec 13 14:38:12.984229 ignition[1670]: Ignition 2.20.0
Dec 13 14:38:12.995373 unknown[1670]: fetched base config from "system"
Dec 13 14:38:12.984235 ignition[1670]: Stage: fetch-offline
Dec 13 14:38:12.995380 unknown[1670]: fetched user config from "system"
Dec 13 14:38:12.984339 ignition[1670]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:12.997689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:38:12.984347 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:13.007993 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:38:12.984665 ignition[1670]: parsed url from cmdline: ""
Dec 13 14:38:13.009252 systemd-networkd[1681]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.984668 ignition[1670]: no config URL provided
Dec 13 14:38:13.025835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 14:38:12.984673 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:38:13.062380 systemd-networkd[1681]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.984726 ignition[1670]: parsing config with SHA512: 5eb19a456babce2db99b368f27957c3963b117e77d28155ccd9360657981450c19c94aa0b64b7a8e55864a5696cf01422a50857c29110c3158b3149a2ce92242
Dec 13 14:38:12.995842 ignition[1670]: fetch-offline: fetch-offline passed
Dec 13 14:38:12.995847 ignition[1670]: POST message to Packet Timeline
Dec 13 14:38:12.995851 ignition[1670]: POST Status error: resource requires networking
Dec 13 14:38:12.995940 ignition[1670]: Ignition finished successfully
Dec 13 14:38:13.039188 ignition[1706]: Ignition 2.20.0
Dec 13 14:38:13.039194 ignition[1706]: Stage: kargs
Dec 13 14:38:13.039437 ignition[1706]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:13.039445 ignition[1706]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:13.040964 ignition[1706]: kargs: kargs passed
Dec 13 14:38:13.040981 ignition[1706]: POST message to Packet Timeline
Dec 13 14:38:13.041199 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:13.044271 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49074->[::1]:53: read: connection refused
Dec 13 14:38:13.245027 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 14:38:13.245848 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51934->[::1]:53: read: connection refused
Dec 13 14:38:13.643720 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
Dec 13 14:38:13.646140 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 14:38:13.646601 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35991->[::1]:53: read: connection refused
Dec 13 14:38:13.647196 systemd-networkd[1681]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:14.246723 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Dec 13 14:38:14.250254 systemd-networkd[1681]: eno1: Link UP
Dec 13 14:38:14.250384 systemd-networkd[1681]: eno2: Link UP
Dec 13 14:38:14.250506 systemd-networkd[1681]: enP1p1s0f0np0: Link UP
Dec 13 14:38:14.250649 systemd-networkd[1681]: enP1p1s0f0np0: Gained carrier
Dec 13 14:38:14.262939 systemd-networkd[1681]: enP1p1s0f1np1: Link UP
Dec 13 14:38:14.295761 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv4 address 147.28.228.38/30, gateway 147.28.228.37 acquired from 147.28.144.140
Dec 13 14:38:14.446767 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 14:38:14.447183 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47013->[::1]:53: read: connection refused
Dec 13 14:38:14.649922 systemd-networkd[1681]: enP1p1s0f1np1: Gained carrier
Dec 13 14:38:15.273791 systemd-networkd[1681]: enP1p1s0f0np0: Gained IPv6LL
Dec 13 14:38:15.977838 systemd-networkd[1681]: enP1p1s0f1np1: Gained IPv6LL
Dec 13 14:38:16.047919 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 14:38:16.048350 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57523->[::1]:53: read: connection refused
Dec 13 14:38:19.251740 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 14:38:20.389126 ignition[1706]: GET result: OK
Dec 13 14:38:20.680575 ignition[1706]: Ignition finished successfully
Dec 13 14:38:20.684112 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 14:38:20.694899 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 14:38:20.710326 ignition[1726]: Ignition 2.20.0
Dec 13 14:38:20.710334 ignition[1726]: Stage: disks
Dec 13 14:38:20.710566 ignition[1726]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:20.710576 ignition[1726]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:20.712095 ignition[1726]: disks: disks passed
Dec 13 14:38:20.712100 ignition[1726]: POST message to Packet Timeline
Dec 13 14:38:20.712119 ignition[1726]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:21.324405 ignition[1726]: GET result: OK
Dec 13 14:38:21.641388 ignition[1726]: Ignition finished successfully
Dec 13 14:38:21.643817 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 14:38:21.649791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 14:38:21.656956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 14:38:21.664794 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:38:21.673086 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:38:21.681739 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:38:21.699860 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 14:38:21.715684 systemd-fsck[1744]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 14:38:21.719688 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 14:38:21.744765 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 14:38:21.810529 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 14:38:21.820338 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 14:38:21.816233 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:38:21.834795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:38:21.927111 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1755)
Dec 13 14:38:21.927141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:21.927161 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:21.927180 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:21.927198 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:21.927222 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:21.840962 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 14:38:21.933356 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 14:38:21.944028 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Dec 13 14:38:21.959874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:38:21.959902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:38:21.992091 coreos-metadata[1776]: Dec 13 14:38:21.989 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:38:22.008608 coreos-metadata[1774]: Dec 13 14:38:21.989 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:38:21.973114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:38:21.986729 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 14:38:22.009833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 14:38:22.042711 initrd-setup-root[1795]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:38:22.048642 initrd-setup-root[1803]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:38:22.054992 initrd-setup-root[1811]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:38:22.061112 initrd-setup-root[1818]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:38:22.131203 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 14:38:22.152784 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 14:38:22.183561 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:22.159300 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 14:38:22.189930 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 14:38:22.206187 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 14:38:22.214138 ignition[1895]: INFO : Ignition 2.20.0
Dec 13 14:38:22.214138 ignition[1895]: INFO : Stage: mount
Dec 13 14:38:22.227595 ignition[1895]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:22.227595 ignition[1895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:22.227595 ignition[1895]: INFO : mount: mount passed
Dec 13 14:38:22.227595 ignition[1895]: INFO : POST message to Packet Timeline
Dec 13 14:38:22.227595 ignition[1895]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:22.727253 ignition[1895]: INFO : GET result: OK
Dec 13 14:38:22.992919 coreos-metadata[1776]: Dec 13 14:38:22.992 INFO Fetch successful
Dec 13 14:38:23.018457 ignition[1895]: INFO : Ignition finished successfully
Dec 13 14:38:23.020783 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 14:38:23.037162 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 14:38:23.037303 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Dec 13 14:38:23.409243 coreos-metadata[1774]: Dec 13 14:38:23.409 INFO Fetch successful
Dec 13 14:38:23.449642 coreos-metadata[1774]: Dec 13 14:38:23.449 INFO wrote hostname ci-4186.0.0-a-f374b16159 to /sysroot/etc/hostname
Dec 13 14:38:23.453006 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:38:23.471820 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 14:38:23.480448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:38:23.517671 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1917)
Dec 13 14:38:23.517729 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:23.531992 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:23.544963 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:23.567839 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:23.567860 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:23.576037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:38:23.607482 ignition[1937]: INFO : Ignition 2.20.0
Dec 13 14:38:23.607482 ignition[1937]: INFO : Stage: files
Dec 13 14:38:23.616942 ignition[1937]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:23.616942 ignition[1937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:23.616942 ignition[1937]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:38:23.616942 ignition[1937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 14:38:23.612932 unknown[1937]: wrote ssh authorized keys file for user: core
Dec 13 14:38:24.113228 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:38:26.516221 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 14:38:26.706718 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 14:38:26.803729 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.803729 ignition[1937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: files passed
Dec 13 14:38:26.828188 ignition[1937]: INFO : POST message to Packet Timeline
Dec 13 14:38:26.828188 ignition[1937]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:27.295630 ignition[1937]: INFO : GET result: OK
Dec 13 14:38:27.590112 ignition[1937]: INFO : Ignition finished successfully
Dec 13 14:38:27.592842 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 14:38:27.613885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 14:38:27.626365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 14:38:27.644686 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:38:27.644787 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 14:38:27.662609 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.662609 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.657155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:38:27.714516 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.669933 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 14:38:27.695907 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 14:38:27.728433 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:38:27.728527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 14:38:27.738482 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 14:38:27.754427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 14:38:27.765822 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 14:38:27.776873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 14:38:27.800752 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:38:27.824874 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 14:38:27.847038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:38:27.858378 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:38:27.869735 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 14:38:27.880900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:38:27.880999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:38:27.892327 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 14:38:27.903327 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 14:38:27.914387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 14:38:27.925308 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:38:27.936238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 14:38:27.947207 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 14:38:27.958114 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:38:27.969104 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 14:38:27.985397 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 14:38:27.996349 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 14:38:28.007414 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:38:28.007515 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:38:28.018519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:38:28.029305 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:38:28.040285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 14:38:28.044748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:38:28.051363 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:38:28.051459 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:38:28.062530 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:38:28.062620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:38:28.073478 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 14:38:28.090019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:38:28.090096 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:38:28.101140 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 14:38:28.112246 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 14:38:28.123486 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:38:28.123574 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:38:28.134834 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:38:28.219253 ignition[2006]: INFO : Ignition 2.20.0
Dec 13 14:38:28.219253 ignition[2006]: INFO : Stage: umount
Dec 13 14:38:28.219253 ignition[2006]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:28.219253 ignition[2006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:28.219253 ignition[2006]: INFO : umount: umount passed
Dec 13 14:38:28.219253 ignition[2006]: INFO : POST message to Packet Timeline
Dec 13 14:38:28.219253 ignition[2006]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:28.134936 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:38:28.146047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:38:28.146136 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:38:28.157294 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:38:28.157375 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 14:38:28.173963 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 14:38:28.174050 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 14:38:28.194925 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 14:38:28.201888 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:38:28.201998 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:38:28.214195 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 14:38:28.225143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:38:28.225258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 14:38:28.236402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:38:28.236489 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 14:38:28.249514 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:38:28.250271 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:38:28.250350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 14:38:28.259619 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:38:28.259697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 14:38:29.943794 ignition[2006]: INFO : GET result: OK Dec 13 14:38:30.231800 ignition[2006]: INFO : Ignition finished successfully Dec 13 14:38:30.234889 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:38:30.235149 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 14:38:30.242074 systemd[1]: Stopped target network.target - Network. 
Dec 13 14:38:30.251388 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:38:30.251447 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 14:38:30.261091 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:38:30.261125 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 14:38:30.270653 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:38:30.270688 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 14:38:30.280109 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 14:38:30.280143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 14:38:30.289702 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:38:30.289738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 14:38:30.299423 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 14:38:30.306730 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv6 lease lost Dec 13 14:38:30.308900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 14:38:30.318719 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:38:30.318810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 14:38:30.318867 systemd-networkd[1681]: enP1p1s0f1np1: DHCPv6 lease lost Dec 13 14:38:30.330639 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 14:38:30.330752 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:38:30.338695 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:38:30.338899 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 14:38:30.348982 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:38:30.349125 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 14:38:30.364827 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 14:38:30.372660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:38:30.372732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 14:38:30.382674 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:38:30.382713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:38:30.392718 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:38:30.392767 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 14:38:30.403086 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 14:38:30.425096 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:38:30.425222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 14:38:30.441652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:38:30.441777 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 14:38:30.450727 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:38:30.450806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 14:38:30.461397 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:38:30.461438 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 14:38:30.472536 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:38:30.472611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 14:38:30.483051 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:38:30.483108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 14:38:30.502934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 14:38:30.515994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:38:30.516045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:38:30.527123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:38:30.527167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:38:30.538713 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:38:30.538785 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 14:38:31.076779 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:38:31.077809 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 14:38:31.088038 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 14:38:31.108873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 14:38:31.122223 systemd[1]: Switching root. Dec 13 14:38:31.174776 systemd-journald[900]: Journal stopped Dec 13 14:38:09.172529 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Dec 13 14:38:09.172551 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 14:38:09.172560 kernel: KASLR enabled Dec 13 14:38:09.172565 kernel: efi: EFI v2.7 by American Megatrends Dec 13 14:38:09.172571 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea451818 RNG=0xebf10018 MEMRESERVE=0xe4633f98 Dec 13 14:38:09.172576 kernel: random: crng init done Dec 13 14:38:09.172583 kernel: secureboot: Secure boot disabled Dec 13 14:38:09.172588 kernel: esrt: Reserving ESRT space from 0x00000000ea451818 to 0x00000000ea451878. 
Dec 13 14:38:09.172596 kernel: ACPI: Early table checksum verification disabled Dec 13 14:38:09.172602 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Dec 13 14:38:09.172608 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Dec 13 14:38:09.172613 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Dec 13 14:38:09.172619 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Dec 13 14:38:09.172625 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Dec 13 14:38:09.172633 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Dec 13 14:38:09.172639 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Dec 13 14:38:09.172645 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 14:38:09.172651 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Dec 13 14:38:09.172657 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 14:38:09.172663 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Dec 13 14:38:09.172669 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Dec 13 14:38:09.172675 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Dec 13 14:38:09.172681 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Dec 13 14:38:09.172687 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Dec 13 14:38:09.172695 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Dec 13 14:38:09.172701 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Dec 13 14:38:09.172749 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 14:38:09.172757 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Dec 13 14:38:09.172763 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Dec 13 14:38:09.172769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Dec 13 14:38:09.172775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Dec 13 14:38:09.172781 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Dec 13 14:38:09.172787 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Dec 13 14:38:09.172793 kernel: NUMA: NODE_DATA [mem 0x83fdffcb800-0x83fdffd0fff] Dec 13 14:38:09.172799 kernel: Zone ranges: Dec 13 14:38:09.172807 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Dec 13 14:38:09.172813 kernel: DMA32 empty Dec 13 14:38:09.172819 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Dec 13 14:38:09.172825 kernel: Movable zone start for each node Dec 13 14:38:09.172831 kernel: Early memory node ranges Dec 13 14:38:09.172840 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Dec 13 14:38:09.172846 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Dec 13 14:38:09.172854 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Dec 13 14:38:09.172861 kernel: node 0: [mem 0x0000000094000000-0x00000000eba31fff] Dec 13 14:38:09.172867 kernel: node 0: [mem 0x00000000eba32000-0x00000000ebea8fff] Dec 13 14:38:09.172873 kernel: node 0: [mem 0x00000000ebea9000-0x00000000ebeaefff] Dec 13 14:38:09.172880 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff] Dec 13 14:38:09.172886 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Dec 13 14:38:09.172892 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Dec 13 14:38:09.172898 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Dec 13 14:38:09.172905 kernel: node 0: [mem 
0x00000000ec0f0000-0x00000000ec0fffff] Dec 13 14:38:09.172911 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] Dec 13 14:38:09.172919 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] Dec 13 14:38:09.172925 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Dec 13 14:38:09.172931 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Dec 13 14:38:09.172938 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Dec 13 14:38:09.172944 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Dec 13 14:38:09.172951 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Dec 13 14:38:09.172957 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Dec 13 14:38:09.172963 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Dec 13 14:38:09.172970 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Dec 13 14:38:09.172976 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Dec 13 14:38:09.172983 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Dec 13 14:38:09.172990 kernel: psci: probing for conduit method from ACPI. Dec 13 14:38:09.172997 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 14:38:09.173003 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 14:38:09.173009 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 13 14:38:09.173016 kernel: psci: SMC Calling Convention v1.2 Dec 13 14:38:09.173022 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 14:38:09.173028 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Dec 13 14:38:09.173035 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Dec 13 14:38:09.173041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Dec 13 14:38:09.173047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Dec 13 14:38:09.173053 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Dec 13 14:38:09.173060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Dec 13 14:38:09.173067 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Dec 13 14:38:09.173074 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Dec 13 14:38:09.173080 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Dec 13 14:38:09.173087 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Dec 13 14:38:09.173093 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Dec 13 14:38:09.173099 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Dec 13 14:38:09.173105 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Dec 13 14:38:09.173112 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Dec 13 14:38:09.173118 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Dec 13 14:38:09.173124 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Dec 13 14:38:09.173131 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Dec 13 14:38:09.173137 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Dec 13 14:38:09.173145 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Dec 13 14:38:09.173151 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Dec 13 14:38:09.173157 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Dec 13 14:38:09.173164 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Dec 13 14:38:09.173170 kernel: ACPI: NUMA: SRAT: PXM 0 
-> MPIDR 0xb0100 -> Node 0 Dec 13 14:38:09.173176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Dec 13 14:38:09.173182 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Dec 13 14:38:09.173189 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Dec 13 14:38:09.173195 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Dec 13 14:38:09.173201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Dec 13 14:38:09.173208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Dec 13 14:38:09.173215 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Dec 13 14:38:09.173221 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Dec 13 14:38:09.173228 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Dec 13 14:38:09.173234 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Dec 13 14:38:09.173241 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Dec 13 14:38:09.173247 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Dec 13 14:38:09.173253 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Dec 13 14:38:09.173260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Dec 13 14:38:09.173266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Dec 13 14:38:09.173273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Dec 13 14:38:09.173279 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Dec 13 14:38:09.173285 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Dec 13 14:38:09.173293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Dec 13 14:38:09.173299 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Dec 13 14:38:09.173306 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Dec 13 14:38:09.173312 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Dec 13 14:38:09.173318 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Dec 13 14:38:09.173325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x170100 -> Node 0 Dec 13 14:38:09.173331 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Dec 13 14:38:09.173338 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Dec 13 14:38:09.173350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Dec 13 14:38:09.173357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Dec 13 14:38:09.173365 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Dec 13 14:38:09.173372 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Dec 13 14:38:09.173378 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Dec 13 14:38:09.173385 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Dec 13 14:38:09.173392 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Dec 13 14:38:09.173399 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Dec 13 14:38:09.173407 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Dec 13 14:38:09.173413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Dec 13 14:38:09.173420 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Dec 13 14:38:09.173427 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Dec 13 14:38:09.173434 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Dec 13 14:38:09.173441 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Dec 13 14:38:09.173448 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Dec 13 14:38:09.173454 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Dec 13 14:38:09.173461 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Dec 13 14:38:09.173468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Dec 13 14:38:09.173475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Dec 13 14:38:09.173481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Dec 13 14:38:09.173489 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Dec 13 14:38:09.173496 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x230100 -> Node 0 Dec 13 14:38:09.173503 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Dec 13 14:38:09.173510 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Dec 13 14:38:09.173517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Dec 13 14:38:09.173523 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Dec 13 14:38:09.173530 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Dec 13 14:38:09.173537 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Dec 13 14:38:09.173544 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Dec 13 14:38:09.173550 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Dec 13 14:38:09.173557 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 14:38:09.173565 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 14:38:09.173572 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Dec 13 14:38:09.173579 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Dec 13 14:38:09.173586 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Dec 13 14:38:09.173593 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Dec 13 14:38:09.173600 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Dec 13 14:38:09.173607 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Dec 13 14:38:09.173614 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Dec 13 14:38:09.173620 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Dec 13 14:38:09.173627 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Dec 13 14:38:09.173634 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Dec 13 14:38:09.173642 kernel: Detected PIPT I-cache on CPU0 Dec 13 14:38:09.173648 kernel: CPU features: detected: GIC system register CPU interface 
Dec 13 14:38:09.173655 kernel: CPU features: detected: Virtualization Host Extensions Dec 13 14:38:09.173662 kernel: CPU features: detected: Hardware dirty bit management Dec 13 14:38:09.173669 kernel: CPU features: detected: Spectre-v4 Dec 13 14:38:09.173675 kernel: CPU features: detected: Spectre-BHB Dec 13 14:38:09.173682 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 14:38:09.173689 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 14:38:09.173696 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 14:38:09.173702 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 14:38:09.173712 kernel: alternatives: applying boot alternatives Dec 13 14:38:09.173720 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 14:38:09.173729 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Dec 13 14:38:09.173736 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 13 14:38:09.173743 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Dec 13 14:38:09.173749 kernel: printk: log_buf_len min size: 262144 bytes Dec 13 14:38:09.173756 kernel: printk: log_buf_len: 1048576 bytes Dec 13 14:38:09.173763 kernel: printk: early log buf free: 249864(95%) Dec 13 14:38:09.173770 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Dec 13 14:38:09.173777 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Dec 13 14:38:09.173783 kernel: Fallback order for Node 0: 0 Dec 13 14:38:09.173790 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 Dec 13 14:38:09.173799 kernel: Policy zone: Normal Dec 13 14:38:09.173806 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:38:09.173812 kernel: software IO TLB: area num 128. Dec 13 14:38:09.173819 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Dec 13 14:38:09.173826 kernel: Memory: 262921880K/268174336K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 5252456K reserved, 0K cma-reserved) Dec 13 14:38:09.173833 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Dec 13 14:38:09.173840 kernel: trace event string verifier disabled Dec 13 14:38:09.173847 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 14:38:09.173854 kernel: rcu: RCU event tracing is enabled. Dec 13 14:38:09.173861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Dec 13 14:38:09.173868 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 14:38:09.173875 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:38:09.173884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:38:09.173891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Dec 13 14:38:09.173897 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 14:38:09.173904 kernel: GICv3: GIC: Using split EOI/Deactivate mode Dec 13 14:38:09.173911 kernel: GICv3: 672 SPIs implemented Dec 13 14:38:09.173917 kernel: GICv3: 0 Extended SPIs implemented Dec 13 14:38:09.173924 kernel: Root IRQ handler: gic_handle_irq Dec 13 14:38:09.173931 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 14:38:09.173938 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Dec 13 14:38:09.173944 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Dec 13 14:38:09.173951 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Dec 13 14:38:09.173958 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Dec 13 14:38:09.173966 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Dec 13 14:38:09.173972 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Dec 13 14:38:09.173979 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Dec 13 14:38:09.173986 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Dec 13 14:38:09.173992 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Dec 13 14:38:09.173999 kernel: ITS [mem 0x100100040000-0x10010005ffff] Dec 13 14:38:09.174006 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174013 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174020 kernel: ITS [mem 0x100100060000-0x10010007ffff] Dec 13 14:38:09.174027 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174034 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174042 kernel: ITS [mem 0x100100080000-0x10010009ffff] Dec 13 14:38:09.174049 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174056 kernel: 
ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174063 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Dec 13 14:38:09.174070 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174076 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174083 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Dec 13 14:38:09.174090 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174097 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174104 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Dec 13 14:38:09.174110 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174119 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174125 kernel: ITS [mem 0x100100100000-0x10010011ffff] Dec 13 14:38:09.174132 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174139 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174146 kernel: ITS [mem 0x100100120000-0x10010013ffff] Dec 13 14:38:09.174153 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:38:09.174160 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) Dec 13 14:38:09.174166 kernel: GICv3: using LPI property table @0x00000800003e0000 Dec 13 14:38:09.174173 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 Dec 13 14:38:09.174180 kernel: rcu: srcu_init: 
Setting srcu_struct sizes based on contention. Dec 13 14:38:09.174187 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174195 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Dec 13 14:38:09.174202 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Dec 13 14:38:09.174209 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 14:38:09.174216 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 14:38:09.174223 kernel: Console: colour dummy device 80x25 Dec 13 14:38:09.174230 kernel: printk: console [tty0] enabled Dec 13 14:38:09.174237 kernel: ACPI: Core revision 20230628 Dec 13 14:38:09.174245 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 14:38:09.174252 kernel: pid_max: default: 81920 minimum: 640 Dec 13 14:38:09.174259 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 14:38:09.174267 kernel: landlock: Up and running. Dec 13 14:38:09.174274 kernel: SELinux: Initializing. Dec 13 14:38:09.174281 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.174288 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.174295 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 14:38:09.174302 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 14:38:09.174309 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:38:09.174316 kernel: rcu: Max phase no-delay instances is 400. 
Dec 13 14:38:09.174323 kernel: Platform MSI: ITS@0x100100040000 domain created Dec 13 14:38:09.174332 kernel: Platform MSI: ITS@0x100100060000 domain created Dec 13 14:38:09.174339 kernel: Platform MSI: ITS@0x100100080000 domain created Dec 13 14:38:09.174346 kernel: Platform MSI: ITS@0x1001000a0000 domain created Dec 13 14:38:09.174352 kernel: Platform MSI: ITS@0x1001000c0000 domain created Dec 13 14:38:09.174359 kernel: Platform MSI: ITS@0x1001000e0000 domain created Dec 13 14:38:09.174366 kernel: Platform MSI: ITS@0x100100100000 domain created Dec 13 14:38:09.174373 kernel: Platform MSI: ITS@0x100100120000 domain created Dec 13 14:38:09.174380 kernel: PCI/MSI: ITS@0x100100040000 domain created Dec 13 14:38:09.174387 kernel: PCI/MSI: ITS@0x100100060000 domain created Dec 13 14:38:09.174395 kernel: PCI/MSI: ITS@0x100100080000 domain created Dec 13 14:38:09.174402 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Dec 13 14:38:09.174409 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Dec 13 14:38:09.174415 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Dec 13 14:38:09.174422 kernel: PCI/MSI: ITS@0x100100100000 domain created Dec 13 14:38:09.174429 kernel: PCI/MSI: ITS@0x100100120000 domain created Dec 13 14:38:09.174436 kernel: Remapping and enabling EFI services. Dec 13 14:38:09.174443 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 14:38:09.174450 kernel: Detected PIPT I-cache on CPU1 Dec 13 14:38:09.174457 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Dec 13 14:38:09.174465 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Dec 13 14:38:09.174472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174479 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Dec 13 14:38:09.174486 kernel: Detected PIPT I-cache on CPU2 Dec 13 14:38:09.174493 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Dec 13 14:38:09.174500 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Dec 13 14:38:09.174507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174514 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Dec 13 14:38:09.174520 kernel: Detected PIPT I-cache on CPU3 Dec 13 14:38:09.174529 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Dec 13 14:38:09.174536 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Dec 13 14:38:09.174543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174550 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Dec 13 14:38:09.174556 kernel: Detected PIPT I-cache on CPU4 Dec 13 14:38:09.174563 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Dec 13 14:38:09.174570 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Dec 13 14:38:09.174577 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174584 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Dec 13 14:38:09.174591 kernel: Detected PIPT I-cache on CPU5 Dec 13 14:38:09.174599 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Dec 13 14:38:09.174606 kernel: GICv3: CPU5: using allocated LPI pending 
table @0x0000080000840000 Dec 13 14:38:09.174613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174620 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Dec 13 14:38:09.174627 kernel: Detected PIPT I-cache on CPU6 Dec 13 14:38:09.174634 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Dec 13 14:38:09.174641 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Dec 13 14:38:09.174648 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174655 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Dec 13 14:38:09.174663 kernel: Detected PIPT I-cache on CPU7 Dec 13 14:38:09.174670 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Dec 13 14:38:09.174677 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Dec 13 14:38:09.174684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174690 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Dec 13 14:38:09.174697 kernel: Detected PIPT I-cache on CPU8 Dec 13 14:38:09.174704 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Dec 13 14:38:09.174713 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Dec 13 14:38:09.174720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174727 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Dec 13 14:38:09.174736 kernel: Detected PIPT I-cache on CPU9 Dec 13 14:38:09.174743 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Dec 13 14:38:09.174750 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Dec 13 14:38:09.174757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174764 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Dec 13 14:38:09.174770 
kernel: Detected PIPT I-cache on CPU10 Dec 13 14:38:09.174777 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Dec 13 14:38:09.174785 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Dec 13 14:38:09.174792 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174798 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Dec 13 14:38:09.174807 kernel: Detected PIPT I-cache on CPU11 Dec 13 14:38:09.174814 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Dec 13 14:38:09.174821 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Dec 13 14:38:09.174828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174835 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Dec 13 14:38:09.174842 kernel: Detected PIPT I-cache on CPU12 Dec 13 14:38:09.174849 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Dec 13 14:38:09.174856 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Dec 13 14:38:09.174863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174872 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Dec 13 14:38:09.174879 kernel: Detected PIPT I-cache on CPU13 Dec 13 14:38:09.174886 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Dec 13 14:38:09.174893 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Dec 13 14:38:09.174900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174907 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Dec 13 14:38:09.174914 kernel: Detected PIPT I-cache on CPU14 Dec 13 14:38:09.174921 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Dec 13 14:38:09.174928 kernel: GICv3: CPU14: using allocated LPI pending table 
@0x00000800008d0000 Dec 13 14:38:09.174936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174943 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Dec 13 14:38:09.174950 kernel: Detected PIPT I-cache on CPU15 Dec 13 14:38:09.174957 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Dec 13 14:38:09.174964 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Dec 13 14:38:09.174971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.174978 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Dec 13 14:38:09.174985 kernel: Detected PIPT I-cache on CPU16 Dec 13 14:38:09.174992 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Dec 13 14:38:09.175008 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Dec 13 14:38:09.175017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175025 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Dec 13 14:38:09.175032 kernel: Detected PIPT I-cache on CPU17 Dec 13 14:38:09.175039 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Dec 13 14:38:09.175046 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Dec 13 14:38:09.175054 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175061 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Dec 13 14:38:09.175068 kernel: Detected PIPT I-cache on CPU18 Dec 13 14:38:09.175075 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Dec 13 14:38:09.175084 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Dec 13 14:38:09.175091 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175098 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Dec 13 14:38:09.175106 
kernel: Detected PIPT I-cache on CPU19 Dec 13 14:38:09.175113 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Dec 13 14:38:09.175121 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Dec 13 14:38:09.175130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175138 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Dec 13 14:38:09.175145 kernel: Detected PIPT I-cache on CPU20 Dec 13 14:38:09.175152 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Dec 13 14:38:09.175159 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Dec 13 14:38:09.175167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175174 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Dec 13 14:38:09.175181 kernel: Detected PIPT I-cache on CPU21 Dec 13 14:38:09.175189 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Dec 13 14:38:09.175197 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Dec 13 14:38:09.175205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175212 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Dec 13 14:38:09.175219 kernel: Detected PIPT I-cache on CPU22 Dec 13 14:38:09.175227 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Dec 13 14:38:09.175234 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Dec 13 14:38:09.175241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175248 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Dec 13 14:38:09.175256 kernel: Detected PIPT I-cache on CPU23 Dec 13 14:38:09.175263 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Dec 13 14:38:09.175271 kernel: GICv3: CPU23: using allocated LPI pending table 
@0x0000080000960000 Dec 13 14:38:09.175279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175286 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Dec 13 14:38:09.175293 kernel: Detected PIPT I-cache on CPU24 Dec 13 14:38:09.175300 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Dec 13 14:38:09.175308 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Dec 13 14:38:09.175315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175322 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Dec 13 14:38:09.175329 kernel: Detected PIPT I-cache on CPU25 Dec 13 14:38:09.175338 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Dec 13 14:38:09.175345 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Dec 13 14:38:09.175353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175360 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Dec 13 14:38:09.175367 kernel: Detected PIPT I-cache on CPU26 Dec 13 14:38:09.175374 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Dec 13 14:38:09.175382 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Dec 13 14:38:09.175389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175396 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Dec 13 14:38:09.175403 kernel: Detected PIPT I-cache on CPU27 Dec 13 14:38:09.175412 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Dec 13 14:38:09.175419 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Dec 13 14:38:09.175426 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175435 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Dec 13 
14:38:09.175444 kernel: Detected PIPT I-cache on CPU28 Dec 13 14:38:09.175451 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Dec 13 14:38:09.175458 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Dec 13 14:38:09.175466 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175473 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Dec 13 14:38:09.175481 kernel: Detected PIPT I-cache on CPU29 Dec 13 14:38:09.175489 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Dec 13 14:38:09.175496 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Dec 13 14:38:09.175504 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175511 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Dec 13 14:38:09.175518 kernel: Detected PIPT I-cache on CPU30 Dec 13 14:38:09.175526 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Dec 13 14:38:09.175533 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Dec 13 14:38:09.175540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175547 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Dec 13 14:38:09.175556 kernel: Detected PIPT I-cache on CPU31 Dec 13 14:38:09.175563 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Dec 13 14:38:09.175570 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Dec 13 14:38:09.175578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175585 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Dec 13 14:38:09.175592 kernel: Detected PIPT I-cache on CPU32 Dec 13 14:38:09.175599 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Dec 13 14:38:09.175607 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 Dec 13 14:38:09.175614 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175623 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Dec 13 14:38:09.175630 kernel: Detected PIPT I-cache on CPU33 Dec 13 14:38:09.175637 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Dec 13 14:38:09.175645 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Dec 13 14:38:09.175652 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175659 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Dec 13 14:38:09.175666 kernel: Detected PIPT I-cache on CPU34 Dec 13 14:38:09.175674 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Dec 13 14:38:09.175681 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Dec 13 14:38:09.175689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175697 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Dec 13 14:38:09.175704 kernel: Detected PIPT I-cache on CPU35 Dec 13 14:38:09.175713 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Dec 13 14:38:09.175721 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Dec 13 14:38:09.175728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175736 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Dec 13 14:38:09.175743 kernel: Detected PIPT I-cache on CPU36 Dec 13 14:38:09.175750 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Dec 13 14:38:09.175757 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Dec 13 14:38:09.175766 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175774 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Dec 
13 14:38:09.175781 kernel: Detected PIPT I-cache on CPU37 Dec 13 14:38:09.175788 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Dec 13 14:38:09.175796 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Dec 13 14:38:09.175803 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175810 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Dec 13 14:38:09.175817 kernel: Detected PIPT I-cache on CPU38 Dec 13 14:38:09.175825 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Dec 13 14:38:09.175833 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Dec 13 14:38:09.175841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175848 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Dec 13 14:38:09.175855 kernel: Detected PIPT I-cache on CPU39 Dec 13 14:38:09.175862 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Dec 13 14:38:09.175870 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Dec 13 14:38:09.175877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175884 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Dec 13 14:38:09.175893 kernel: Detected PIPT I-cache on CPU40 Dec 13 14:38:09.175900 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Dec 13 14:38:09.175907 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Dec 13 14:38:09.175915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175922 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Dec 13 14:38:09.175929 kernel: Detected PIPT I-cache on CPU41 Dec 13 14:38:09.175937 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Dec 13 14:38:09.175945 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 Dec 13 14:38:09.175952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175960 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Dec 13 14:38:09.175968 kernel: Detected PIPT I-cache on CPU42 Dec 13 14:38:09.175976 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Dec 13 14:38:09.175983 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Dec 13 14:38:09.175990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.175998 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Dec 13 14:38:09.176005 kernel: Detected PIPT I-cache on CPU43 Dec 13 14:38:09.176012 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Dec 13 14:38:09.176019 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Dec 13 14:38:09.176027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176035 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Dec 13 14:38:09.176043 kernel: Detected PIPT I-cache on CPU44 Dec 13 14:38:09.176050 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Dec 13 14:38:09.176057 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Dec 13 14:38:09.176064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176071 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Dec 13 14:38:09.176078 kernel: Detected PIPT I-cache on CPU45 Dec 13 14:38:09.176086 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Dec 13 14:38:09.176093 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Dec 13 14:38:09.176102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176109 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
Dec 13 14:38:09.176117 kernel: Detected PIPT I-cache on CPU46 Dec 13 14:38:09.176124 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Dec 13 14:38:09.176131 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Dec 13 14:38:09.176138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176145 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Dec 13 14:38:09.176153 kernel: Detected PIPT I-cache on CPU47 Dec 13 14:38:09.176160 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Dec 13 14:38:09.176167 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Dec 13 14:38:09.176176 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176183 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Dec 13 14:38:09.176191 kernel: Detected PIPT I-cache on CPU48 Dec 13 14:38:09.176198 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Dec 13 14:38:09.176205 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Dec 13 14:38:09.176213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176220 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Dec 13 14:38:09.176227 kernel: Detected PIPT I-cache on CPU49 Dec 13 14:38:09.176234 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Dec 13 14:38:09.176243 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Dec 13 14:38:09.176250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176257 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Dec 13 14:38:09.176265 kernel: Detected PIPT I-cache on CPU50 Dec 13 14:38:09.176272 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Dec 13 14:38:09.176279 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 Dec 13 14:38:09.176286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176294 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Dec 13 14:38:09.176301 kernel: Detected PIPT I-cache on CPU51 Dec 13 14:38:09.176308 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Dec 13 14:38:09.176317 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Dec 13 14:38:09.176324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176331 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Dec 13 14:38:09.176339 kernel: Detected PIPT I-cache on CPU52 Dec 13 14:38:09.176346 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Dec 13 14:38:09.176353 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Dec 13 14:38:09.176360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176368 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Dec 13 14:38:09.176375 kernel: Detected PIPT I-cache on CPU53 Dec 13 14:38:09.176385 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Dec 13 14:38:09.176392 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Dec 13 14:38:09.176399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176406 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Dec 13 14:38:09.176414 kernel: Detected PIPT I-cache on CPU54 Dec 13 14:38:09.176421 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Dec 13 14:38:09.176428 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Dec 13 14:38:09.176435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176442 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] Dec 13 14:38:09.176450 kernel: Detected PIPT I-cache on CPU55 Dec 13 14:38:09.176459 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Dec 13 14:38:09.176466 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Dec 13 14:38:09.176473 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176481 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Dec 13 14:38:09.176488 kernel: Detected PIPT I-cache on CPU56 Dec 13 14:38:09.176495 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Dec 13 14:38:09.176503 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Dec 13 14:38:09.176510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176517 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Dec 13 14:38:09.176526 kernel: Detected PIPT I-cache on CPU57 Dec 13 14:38:09.176533 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Dec 13 14:38:09.176540 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Dec 13 14:38:09.176548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176555 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Dec 13 14:38:09.176562 kernel: Detected PIPT I-cache on CPU58 Dec 13 14:38:09.176570 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Dec 13 14:38:09.176577 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Dec 13 14:38:09.176584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176591 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Dec 13 14:38:09.176600 kernel: Detected PIPT I-cache on CPU59 Dec 13 14:38:09.176607 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Dec 13 14:38:09.176615 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 Dec 13 14:38:09.176622 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176629 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Dec 13 14:38:09.176636 kernel: Detected PIPT I-cache on CPU60 Dec 13 14:38:09.176644 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Dec 13 14:38:09.176651 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Dec 13 14:38:09.176658 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176667 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Dec 13 14:38:09.176674 kernel: Detected PIPT I-cache on CPU61 Dec 13 14:38:09.176681 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Dec 13 14:38:09.176689 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Dec 13 14:38:09.176696 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176703 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Dec 13 14:38:09.176733 kernel: Detected PIPT I-cache on CPU62 Dec 13 14:38:09.176741 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Dec 13 14:38:09.176749 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Dec 13 14:38:09.176758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176765 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Dec 13 14:38:09.176772 kernel: Detected PIPT I-cache on CPU63 Dec 13 14:38:09.176780 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Dec 13 14:38:09.176787 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Dec 13 14:38:09.176794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176802 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] Dec 13 14:38:09.176809 kernel: Detected PIPT I-cache on CPU64 Dec 13 14:38:09.176816 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Dec 13 14:38:09.176824 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Dec 13 14:38:09.176833 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176840 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Dec 13 14:38:09.176847 kernel: Detected PIPT I-cache on CPU65 Dec 13 14:38:09.176855 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Dec 13 14:38:09.176862 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Dec 13 14:38:09.176870 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176877 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Dec 13 14:38:09.176884 kernel: Detected PIPT I-cache on CPU66 Dec 13 14:38:09.176891 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Dec 13 14:38:09.176900 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Dec 13 14:38:09.176908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176915 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Dec 13 14:38:09.176923 kernel: Detected PIPT I-cache on CPU67 Dec 13 14:38:09.176930 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Dec 13 14:38:09.176937 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Dec 13 14:38:09.176945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176952 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Dec 13 14:38:09.176959 kernel: Detected PIPT I-cache on CPU68 Dec 13 14:38:09.176966 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Dec 13 14:38:09.176975 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 Dec 13 14:38:09.176982 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.176989 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Dec 13 14:38:09.176997 kernel: Detected PIPT I-cache on CPU69 Dec 13 14:38:09.177004 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Dec 13 14:38:09.177012 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Dec 13 14:38:09.177019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177026 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Dec 13 14:38:09.177033 kernel: Detected PIPT I-cache on CPU70 Dec 13 14:38:09.177042 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Dec 13 14:38:09.177049 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Dec 13 14:38:09.177057 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177064 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Dec 13 14:38:09.177071 kernel: Detected PIPT I-cache on CPU71 Dec 13 14:38:09.177078 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Dec 13 14:38:09.177086 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Dec 13 14:38:09.177093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177100 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Dec 13 14:38:09.177107 kernel: Detected PIPT I-cache on CPU72 Dec 13 14:38:09.177116 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Dec 13 14:38:09.177123 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Dec 13 14:38:09.177131 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177138 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Dec 13 14:38:09.177145 kernel: Detected PIPT I-cache on CPU73 Dec 13 14:38:09.177153 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Dec 13 14:38:09.177160 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Dec 13 14:38:09.177167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177174 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Dec 13 14:38:09.177183 kernel: Detected PIPT I-cache on CPU74 Dec 13 14:38:09.177190 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Dec 13 14:38:09.177198 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Dec 13 14:38:09.177205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177212 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Dec 13 14:38:09.177219 kernel: Detected PIPT I-cache on CPU75 Dec 13 14:38:09.177227 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Dec 13 14:38:09.177234 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Dec 13 14:38:09.177241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177249 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Dec 13 14:38:09.177257 kernel: Detected PIPT I-cache on CPU76 Dec 13 14:38:09.177264 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Dec 13 14:38:09.177272 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Dec 13 14:38:09.177279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177286 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Dec 13 14:38:09.177293 kernel: Detected PIPT I-cache on CPU77 Dec 13 14:38:09.177301 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Dec 13 14:38:09.177308 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Dec 13 14:38:09.177316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177324 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Dec 13 14:38:09.177331 kernel: Detected PIPT I-cache on CPU78 Dec 13 14:38:09.177339 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Dec 13 14:38:09.177346 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Dec 13 14:38:09.177353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177361 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Dec 13 14:38:09.177368 kernel: Detected PIPT I-cache on CPU79 Dec 13 14:38:09.177375 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Dec 13 14:38:09.177382 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Dec 13 14:38:09.177391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 14:38:09.177399 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Dec 13 14:38:09.177406 kernel: smp: Brought up 1 node, 80 CPUs Dec 13 14:38:09.177413 kernel: SMP: Total of 80 processors activated. 
Dec 13 14:38:09.177420 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 14:38:09.177428 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 14:38:09.177435 kernel: CPU features: detected: Common not Private translations Dec 13 14:38:09.177442 kernel: CPU features: detected: CRC32 instructions Dec 13 14:38:09.177450 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 14:38:09.177457 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 14:38:09.177466 kernel: CPU features: detected: LSE atomic instructions Dec 13 14:38:09.177473 kernel: CPU features: detected: Privileged Access Never Dec 13 14:38:09.177481 kernel: CPU features: detected: RAS Extension Support Dec 13 14:38:09.177488 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 14:38:09.177495 kernel: CPU: All CPU(s) started at EL2 Dec 13 14:38:09.177502 kernel: alternatives: applying system-wide alternatives Dec 13 14:38:09.177509 kernel: devtmpfs: initialized Dec 13 14:38:09.177517 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:38:09.177524 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.177533 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:38:09.177541 kernel: SMBIOS 3.4.0 present. 
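The "CPU features: detected:" lines above (CRC32 instructions, LSE atomics, Privileged Access Never, SSBS, and so on) have userspace-visible counterparts. A hedged sketch, assuming a running Linux host: the corresponding flags appear in `/proc/cpuinfo`, labeled `Features` on aarch64 and `flags` on x86.

```shell
# Hedged sketch (not part of the log): the feature flags the kernel detected
# at boot (e.g. crc32, atomics for LSE, ssbs on this Altra) surface per-CPU
# in /proc/cpuinfo. Field name differs by arch: "Features" (arm64), "flags" (x86).
grep -m1 -iE '^(Features|flags)' /proc/cpuinfo
```

`lscpu` presents the same information in a friendlier layout, if util-linux is installed.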
Dec 13 14:38:09.177548 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Dec 13 14:38:09.177555 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:38:09.177563 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Dec 13 14:38:09.177570 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 14:38:09.177578 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 14:38:09.177585 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:38:09.177593 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Dec 13 14:38:09.177601 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:38:09.177609 kernel: cpuidle: using governor menu Dec 13 14:38:09.177616 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 14:38:09.177623 kernel: ASID allocator initialised with 32768 entries Dec 13 14:38:09.177631 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:38:09.177638 kernel: Serial: AMBA PL011 UART driver Dec 13 14:38:09.177646 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 14:38:09.177653 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 14:38:09.177660 kernel: Modules: 508880 pages in range for PLT usage Dec 13 14:38:09.177669 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:38:09.177676 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 14:38:09.177683 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 14:38:09.177691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 14:38:09.177698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:38:09.177705 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 14:38:09.177715 kernel: HugeTLB: registered 64.0 KiB 
page size, pre-allocated 0 pages Dec 13 14:38:09.177723 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 14:38:09.177730 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:38:09.177739 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:38:09.177746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:38:09.177753 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:38:09.177761 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Dec 13 14:38:09.177768 kernel: ACPI: Interpreter enabled Dec 13 14:38:09.177775 kernel: ACPI: Using GIC for interrupt routing Dec 13 14:38:09.177783 kernel: ACPI: MCFG table detected, 8 entries Dec 13 14:38:09.177790 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177797 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177806 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177814 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177821 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177828 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177836 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177843 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Dec 13 14:38:09.177850 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Dec 13 14:38:09.177858 kernel: printk: console [ttyAMA0] enabled Dec 13 14:38:09.177865 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Dec 13 14:38:09.177874 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Dec 13 14:38:09.178007 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.178077 kernel: acpi PNP0A08:00: _OSC: platform does not 
support [PCIeHotplug PME LTR] Dec 13 14:38:09.178139 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Dec 13 14:38:09.178200 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.178260 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.178321 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Dec 13 14:38:09.178331 kernel: PCI host bridge to bus 000d:00 Dec 13 14:38:09.178405 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Dec 13 14:38:09.178463 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Dec 13 14:38:09.178520 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Dec 13 14:38:09.178599 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 14:38:09.178674 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 14:38:09.178746 kernel: pci 000d:00:01.0: enabling Extended Tags Dec 13 14:38:09.178812 kernel: pci 000d:00:01.0: supports D1 D2 Dec 13 14:38:09.178875 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.178950 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 14:38:09.179014 kernel: pci 000d:00:02.0: supports D1 D2 Dec 13 14:38:09.179079 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.179155 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 14:38:09.179218 kernel: pci 000d:00:03.0: supports D1 D2 Dec 13 14:38:09.179285 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.179355 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 14:38:09.179419 kernel: pci 000d:00:04.0: supports D1 D2 Dec 13 14:38:09.179481 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.179491 kernel: acpiphp: Slot [1] registered Dec 13 14:38:09.179500 
kernel: acpiphp: Slot [2] registered Dec 13 14:38:09.179508 kernel: acpiphp: Slot [3] registered Dec 13 14:38:09.179515 kernel: acpiphp: Slot [4] registered Dec 13 14:38:09.179571 kernel: pci_bus 000d:00: on NUMA node 0 Dec 13 14:38:09.179633 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.179697 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.179766 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.179831 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 14:38:09.179897 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.179963 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.180028 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 14:38:09.180093 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.180156 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.180222 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 14:38:09.180285 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.180351 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.180416 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Dec 13 14:38:09.180479 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 
14:38:09.180543 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] Dec 13 14:38:09.180605 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.180668 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Dec 13 14:38:09.180735 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.180800 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Dec 13 14:38:09.180863 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.180926 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.180988 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181051 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181113 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181176 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181239 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181303 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181365 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181427 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181490 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181552 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181616 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181678 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181746 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.181812 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.181875 kernel: pci 000d:00:01.0: BAR 
13: failed to assign [io size 0x1000] Dec 13 14:38:09.181938 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.182002 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Dec 13 14:38:09.182064 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 14:38:09.182127 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Dec 13 14:38:09.182190 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Dec 13 14:38:09.182255 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.182320 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Dec 13 14:38:09.182383 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Dec 13 14:38:09.182447 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.182510 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Dec 13 14:38:09.182573 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Dec 13 14:38:09.182638 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.182699 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Dec 13 14:38:09.182758 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Dec 13 14:38:09.182829 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Dec 13 14:38:09.182888 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 14:38:09.182957 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Dec 13 14:38:09.183020 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 14:38:09.183097 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Dec 13 14:38:09.183156 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 14:38:09.183222 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Dec 13 
14:38:09.183281 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 14:38:09.183290 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Dec 13 14:38:09.183361 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.183424 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.183485 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Dec 13 14:38:09.183546 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.183606 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.183667 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Dec 13 14:38:09.183677 kernel: PCI host bridge to bus 0000:00 Dec 13 14:38:09.183747 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Dec 13 14:38:09.183804 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 14:38:09.183860 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:38:09.183932 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 14:38:09.184003 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 14:38:09.184067 kernel: pci 0000:00:01.0: enabling Extended Tags Dec 13 14:38:09.184130 kernel: pci 0000:00:01.0: supports D1 D2 Dec 13 14:38:09.184197 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184269 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 14:38:09.184333 kernel: pci 0000:00:02.0: supports D1 D2 Dec 13 14:38:09.184397 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184466 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 14:38:09.184530 kernel: pci 0000:00:03.0: supports D1 D2 Dec 13 14:38:09.184593 
kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184668 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 14:38:09.184734 kernel: pci 0000:00:04.0: supports D1 D2 Dec 13 14:38:09.184798 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.184808 kernel: acpiphp: Slot [1-1] registered Dec 13 14:38:09.184815 kernel: acpiphp: Slot [2-1] registered Dec 13 14:38:09.184822 kernel: acpiphp: Slot [3-1] registered Dec 13 14:38:09.184830 kernel: acpiphp: Slot [4-1] registered Dec 13 14:38:09.184884 kernel: pci_bus 0000:00: on NUMA node 0 Dec 13 14:38:09.184950 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.185012 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.185076 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.185139 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 14:38:09.185203 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.185265 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.185328 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 14:38:09.185393 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.185456 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.185519 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 14:38:09.185582 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] 
add_size 200000 add_align 100000 Dec 13 14:38:09.185645 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.185711 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Dec 13 14:38:09.185775 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.185840 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Dec 13 14:38:09.185904 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.185966 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Dec 13 14:38:09.186029 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.186091 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Dec 13 14:38:09.186154 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.186216 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186280 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186344 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186408 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186471 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186534 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186596 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186659 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186725 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186787 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186854 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.186916 
kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.186979 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.187041 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.187105 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.187167 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.187230 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.187293 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Dec 13 14:38:09.187359 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.187422 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Dec 13 14:38:09.187484 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Dec 13 14:38:09.187548 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.187610 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Dec 13 14:38:09.187675 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Dec 13 14:38:09.187742 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.187806 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Dec 13 14:38:09.187868 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Dec 13 14:38:09.187933 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.187992 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Dec 13 14:38:09.188049 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 14:38:09.188118 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Dec 13 14:38:09.188178 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 14:38:09.188245 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Dec 13 
14:38:09.188305 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 14:38:09.188381 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Dec 13 14:38:09.188442 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 14:38:09.188506 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Dec 13 14:38:09.188566 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 14:38:09.188576 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Dec 13 14:38:09.188646 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.188711 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.188776 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Dec 13 14:38:09.188837 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 14:38:09.188898 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Dec 13 14:38:09.188958 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Dec 13 14:38:09.188968 kernel: PCI host bridge to bus 0005:00 Dec 13 14:38:09.189032 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Dec 13 14:38:09.189091 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 14:38:09.189148 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Dec 13 14:38:09.189219 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 14:38:09.189292 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 14:38:09.189356 kernel: pci 0005:00:01.0: supports D1 D2 Dec 13 14:38:09.189419 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189489 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 
Dec 13 14:38:09.189554 kernel: pci 0005:00:03.0: supports D1 D2 Dec 13 14:38:09.189619 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189688 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 14:38:09.189758 kernel: pci 0005:00:05.0: supports D1 D2 Dec 13 14:38:09.189820 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.189891 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 14:38:09.189956 kernel: pci 0005:00:07.0: supports D1 D2 Dec 13 14:38:09.190024 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Dec 13 14:38:09.190034 kernel: acpiphp: Slot [1-2] registered Dec 13 14:38:09.190041 kernel: acpiphp: Slot [2-2] registered Dec 13 14:38:09.190113 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Dec 13 14:38:09.190181 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Dec 13 14:38:09.190245 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Dec 13 14:38:09.190320 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Dec 13 14:38:09.190391 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Dec 13 14:38:09.190457 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Dec 13 14:38:09.190517 kernel: pci_bus 0005:00: on NUMA node 0 Dec 13 14:38:09.190581 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 14:38:09.190647 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.190713 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 14:38:09.190781 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 14:38:09.190846 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.190910 
kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 14:38:09.190972 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 14:38:09.191036 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 14:38:09.191099 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 14:38:09.191165 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 14:38:09.191249 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 14:38:09.191319 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Dec 13 14:38:09.191383 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Dec 13 14:38:09.191448 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.191511 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Dec 13 14:38:09.191573 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.191636 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Dec 13 14:38:09.191717 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.191787 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Dec 13 14:38:09.191851 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.191915 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.191980 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192043 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 
14:38:09.192105 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192168 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192230 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192296 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192358 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192422 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192485 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192547 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192610 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192673 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192741 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192804 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.192871 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.192934 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Dec 13 14:38:09.192999 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Dec 13 14:38:09.193061 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.193126 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Dec 13 14:38:09.193189 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Dec 13 14:38:09.193254 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.193324 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Dec 13 14:38:09.193391 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Dec 13 14:38:09.193454 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Dec 13 14:38:09.193518 kernel: 
pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Dec 13 14:38:09.193582 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.193648 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Dec 13 14:38:09.193730 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Dec 13 14:38:09.193797 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Dec 13 14:38:09.193861 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Dec 13 14:38:09.193924 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.193984 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Dec 13 14:38:09.194040 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 14:38:09.194111 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Dec 13 14:38:09.194173 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 14:38:09.194250 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Dec 13 14:38:09.194310 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 14:38:09.194376 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Dec 13 14:38:09.194438 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 14:38:09.194506 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Dec 13 14:38:09.194567 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 14:38:09.194577 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Dec 13 14:38:09.194646 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:38:09.194711 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 14:38:09.194774 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER 
PCIeCapability]
Dec 13 14:38:09.194836 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.194900 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.194961 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.194972 kernel: PCI host bridge to bus 0003:00
Dec 13 14:38:09.195035 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
Dec 13 14:38:09.195092 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
Dec 13 14:38:09.195148 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.195221 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000
Dec 13 14:38:09.195296 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400
Dec 13 14:38:09.195361 kernel: pci 0003:00:01.0: supports D1 D2
Dec 13 14:38:09.195439 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.195511 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400
Dec 13 14:38:09.195574 kernel: pci 0003:00:03.0: supports D1 D2
Dec 13 14:38:09.195637 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.195712 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400
Dec 13 14:38:09.195785 kernel: pci 0003:00:05.0: supports D1 D2
Dec 13 14:38:09.195851 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.195861 kernel: acpiphp: Slot [1-3] registered
Dec 13 14:38:09.195868 kernel: acpiphp: Slot [2-3] registered
Dec 13 14:38:09.195944 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000
Dec 13 14:38:09.196013 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff]
Dec 13 14:38:09.196080 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f]
Dec 13 14:38:09.196148 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff]
Dec 13 14:38:09.196213 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 14:38:09.196277 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref]
Dec 13 14:38:09.196342 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 14:38:09.196407 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref]
Dec 13 14:38:09.196473 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs)
Dec 13 14:38:09.196538 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
Dec 13 14:38:09.196613 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000
Dec 13 14:38:09.196682 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff]
Dec 13 14:38:09.196750 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f]
Dec 13 14:38:09.196816 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff]
Dec 13 14:38:09.196880 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold
Dec 13 14:38:09.196946 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref]
Dec 13 14:38:09.197011 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 14:38:09.197075 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref]
Dec 13 14:38:09.197141 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs)
Dec 13 14:38:09.197201 kernel: pci_bus 0003:00: on NUMA node 0
Dec 13 14:38:09.197265 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.197328 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.197391 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.197454 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.197517 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.197582 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.197646 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
Dec 13 14:38:09.197712 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
Dec 13 14:38:09.197787 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.197853 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.197917 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.197979 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.198045 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.198108 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.198171 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198234 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198300 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198364 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198426 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198490 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198554 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198618 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198680 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198747 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198810 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.198874 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.198936 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.199000 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.199067 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.199131 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
Dec 13 14:38:09.199194 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.199260 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.199328 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
Dec 13 14:38:09.199394 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
Dec 13 14:38:09.199462 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
Dec 13 14:38:09.199527 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
Dec 13 14:38:09.199593 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
Dec 13 14:38:09.199658 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
Dec 13 14:38:09.199727 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
Dec 13 14:38:09.199793 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
Dec 13 14:38:09.199858 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.199926 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.199991 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200056 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200121 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200186 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200251 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
Dec 13 14:38:09.200316 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
Dec 13 14:38:09.200379 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
Dec 13 14:38:09.200444 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.200508 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.200566 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 14:38:09.200623 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
Dec 13 14:38:09.200680 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
Dec 13 14:38:09.200763 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 14:38:09.200828 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
Dec 13 14:38:09.200898 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 14:38:09.200957 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
Dec 13 14:38:09.201025 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 14:38:09.201084 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
Dec 13 14:38:09.201095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
Dec 13 14:38:09.201166 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.201230 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.201293 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.201354 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.201415 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.201477 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.201487 kernel: PCI host bridge to bus 000c:00
Dec 13 14:38:09.201551 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
Dec 13 14:38:09.201611 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
Dec 13 14:38:09.201667 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.201742 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000
Dec 13 14:38:09.201815 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400
Dec 13 14:38:09.201880 kernel: pci 000c:00:01.0: enabling Extended Tags
Dec 13 14:38:09.201943 kernel: pci 000c:00:01.0: supports D1 D2
Dec 13 14:38:09.202009 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202083 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400
Dec 13 14:38:09.202147 kernel: pci 000c:00:02.0: supports D1 D2
Dec 13 14:38:09.202210 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202282 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400
Dec 13 14:38:09.202345 kernel: pci 000c:00:03.0: supports D1 D2
Dec 13 14:38:09.202409 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202479 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400
Dec 13 14:38:09.202545 kernel: pci 000c:00:04.0: supports D1 D2
Dec 13 14:38:09.202609 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.202619 kernel: acpiphp: Slot [1-4] registered
Dec 13 14:38:09.202627 kernel: acpiphp: Slot [2-4] registered
Dec 13 14:38:09.202635 kernel: acpiphp: Slot [3-2] registered
Dec 13 14:38:09.202643 kernel: acpiphp: Slot [4-2] registered
Dec 13 14:38:09.202699 kernel: pci_bus 000c:00: on NUMA node 0
Dec 13 14:38:09.202771 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.202839 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.202903 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.202966 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.203030 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.203092 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.203157 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.203220 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.203286 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.203350 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.203414 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.203476 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.203540 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.203604 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.203669 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.203737 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.203799 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.203863 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.203925 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.203989 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.204052 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204115 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204179 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204243 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204306 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204370 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204434 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204498 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204562 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204626 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204689 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204757 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204820 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.204883 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.204947 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.205010 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.205074 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.205137 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.205201 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.205266 kernel: pci 000c:00:02.0: PCI bridge to [bus 02]
Dec 13 14:38:09.205330 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.205393 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.205457 kernel: pci 000c:00:03.0: PCI bridge to [bus 03]
Dec 13 14:38:09.205519 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.205583 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.205648 kernel: pci 000c:00:04.0: PCI bridge to [bus 04]
Dec 13 14:38:09.205714 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.205780 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.205839 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window]
Dec 13 14:38:09.205896 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window]
Dec 13 14:38:09.205965 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff]
Dec 13 14:38:09.206027 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref]
Dec 13 14:38:09.206102 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff]
Dec 13 14:38:09.206162 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref]
Dec 13 14:38:09.206227 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff]
Dec 13 14:38:09.206287 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref]
Dec 13 14:38:09.206354 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff]
Dec 13 14:38:09.206413 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref]
Dec 13 14:38:09.206426 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff])
Dec 13 14:38:09.206496 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.206559 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.206621 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.206683 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.206749 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.206810 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff]
Dec 13 14:38:09.206822 kernel: PCI host bridge to bus 0002:00
Dec 13 14:38:09.206889 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window]
Dec 13 14:38:09.206946 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window]
Dec 13 14:38:09.207003 kernel: pci_bus 0002:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.207074 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000
Dec 13 14:38:09.207145 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400
Dec 13 14:38:09.207211 kernel: pci 0002:00:01.0: supports D1 D2
Dec 13 14:38:09.207275 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207347 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400
Dec 13 14:38:09.207411 kernel: pci 0002:00:03.0: supports D1 D2
Dec 13 14:38:09.207475 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207547 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400
Dec 13 14:38:09.207610 kernel: pci 0002:00:05.0: supports D1 D2
Dec 13 14:38:09.207676 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207751 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400
Dec 13 14:38:09.207814 kernel: pci 0002:00:07.0: supports D1 D2
Dec 13 14:38:09.207878 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.207888 kernel: acpiphp: Slot [1-5] registered
Dec 13 14:38:09.207896 kernel: acpiphp: Slot [2-5] registered
Dec 13 14:38:09.207903 kernel: acpiphp: Slot [3-3] registered
Dec 13 14:38:09.207911 kernel: acpiphp: Slot [4-3] registered
Dec 13 14:38:09.207968 kernel: pci_bus 0002:00: on NUMA node 0
Dec 13 14:38:09.208034 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.208097 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.208161 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Dec 13 14:38:09.208226 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.208292 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.208355 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.208419 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.208482 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.208545 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.208609 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.208674 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.208740 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.208804 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.208867 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.208933 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.208999 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.209062 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.209126 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.209193 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.209256 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.209321 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209384 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209448 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209512 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209577 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209644 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209712 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209776 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209840 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.209902 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.209966 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210029 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210093 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210155 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210219 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.210284 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.210346 kernel: pci 0002:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.210408 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.210473 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.210549 kernel: pci 0002:00:03.0: PCI bridge to [bus 02]
Dec 13 14:38:09.210614 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.210677 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.210977 kernel: pci 0002:00:05.0: PCI bridge to [bus 03]
Dec 13 14:38:09.211046 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.211115 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.211179 kernel: pci 0002:00:07.0: PCI bridge to [bus 04]
Dec 13 14:38:09.211242 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.211306 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.211369 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window]
Dec 13 14:38:09.211425 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window]
Dec 13 14:38:09.211495 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff]
Dec 13 14:38:09.211554 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref]
Dec 13 14:38:09.211620 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff]
Dec 13 14:38:09.211678 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref]
Dec 13 14:38:09.211762 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff]
Dec 13 14:38:09.211820 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref]
Dec 13 14:38:09.211886 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff]
Dec 13 14:38:09.211944 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref]
Dec 13 14:38:09.211954 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff])
Dec 13 14:38:09.212022 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.212085 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.212145 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.212205 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.212264 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.212324 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff]
Dec 13 14:38:09.212335 kernel: PCI host bridge to bus 0001:00
Dec 13 14:38:09.212396 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window]
Dec 13 14:38:09.212455 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window]
Dec 13 14:38:09.212511 kernel: pci_bus 0001:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.212582 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000
Dec 13 14:38:09.212652 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400
Dec 13 14:38:09.212719 kernel: pci 0001:00:01.0: enabling Extended Tags
Dec 13 14:38:09.212782 kernel: pci 0001:00:01.0: supports D1 D2
Dec 13 14:38:09.212849 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.212919 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400
Dec 13 14:38:09.212984 kernel: pci 0001:00:02.0: supports D1 D2
Dec 13 14:38:09.213046 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213117 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400
Dec 13 14:38:09.213181 kernel: pci 0001:00:03.0: supports D1 D2
Dec 13 14:38:09.213244 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213315 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400
Dec 13 14:38:09.213378 kernel: pci 0001:00:04.0: supports D1 D2
Dec 13 14:38:09.213442 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.213452 kernel: acpiphp: Slot [1-6] registered
Dec 13 14:38:09.213521 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 14:38:09.213597 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.213662 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref]
Dec 13 14:38:09.213736 kernel: pci 0001:01:00.0: PME# supported from D3cold
Dec 13 14:38:09.213801 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:09.213874 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 14:38:09.213940 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref]
Dec 13 14:38:09.214004 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref]
Dec 13 14:38:09.214068 kernel: pci 0001:01:00.1: PME# supported from D3cold
Dec 13 14:38:09.214079 kernel: acpiphp: Slot [2-6] registered
Dec 13 14:38:09.214086 kernel: acpiphp: Slot [3-4] registered
Dec 13 14:38:09.214096 kernel: acpiphp: Slot [4-4] registered
Dec 13 14:38:09.214152 kernel: pci_bus 0001:00: on NUMA node 0
Dec 13 14:38:09.214216 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:38:09.214279 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:38:09.214343 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.214405 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:38:09.214468 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.214533 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.214596 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.214659 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.214883 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.214955 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.215019 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.215081 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.215147 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.215209 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.215272 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.215333 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.215395 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.215457 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.215519 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215580 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215646 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215711 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215775 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215837 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.215900 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.215961 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216024 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216085 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216149 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216211 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216273 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216334 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216397 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000]
Dec 13 14:38:09.216461 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000]
Dec 13 14:38:09.216526 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref]
Dec 13 14:38:09.216593 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.216656 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref]
Dec 13 14:38:09.216726 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref]
Dec 13 14:38:09.216788 kernel: pci 0001:00:01.0: PCI bridge to [bus 01]
Dec 13 14:38:09.216850 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.216912 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.216975 kernel: pci 0001:00:02.0: PCI bridge to [bus 02]
Dec 13 14:38:09.217037 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.217102 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.217164 kernel: pci 0001:00:03.0: PCI bridge to [bus 03]
Dec 13 14:38:09.217227 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.217289 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.217352 kernel: pci 0001:00:04.0: PCI bridge to [bus 04]
Dec 13 14:38:09.217414 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.217479 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.217538 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window]
Dec 13 14:38:09.217594 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window]
Dec 13 14:38:09.217670 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff]
Dec 13 14:38:09.217733 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref]
Dec 13 14:38:09.217801 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff]
Dec 13 14:38:09.217861 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref]
Dec 13 14:38:09.217927 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff]
Dec 13 14:38:09.217986 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref]
Dec 13 14:38:09.218051 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff]
Dec 13 14:38:09.218109 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref]
Dec 13 14:38:09.218119 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff])
Dec 13 14:38:09.218188 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:38:09.218253 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR]
Dec 13 14:38:09.218313 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability]
Dec 13 14:38:09.218373 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops
Dec 13 14:38:09.218432 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00
Dec 13 14:38:09.218492 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff]
Dec 13 14:38:09.218502 kernel: PCI host bridge to bus 0004:00
Dec 13 14:38:09.218564 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window]
Dec 13 14:38:09.218623 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window]
Dec 13 14:38:09.218678 kernel: pci_bus 0004:00: root bus resource [bus 00-ff]
Dec 13 14:38:09.218752 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000
Dec 13 14:38:09.218823 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400
Dec 13 14:38:09.218887 kernel: pci 0004:00:01.0: supports D1 D2
Dec 13 14:38:09.218949 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219019 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400
Dec 13 14:38:09.219085 kernel: pci 0004:00:03.0: supports D1 D2
Dec 13 14:38:09.219148 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219218 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400
Dec 13 14:38:09.219281 kernel: pci 0004:00:05.0: supports D1 D2
Dec 13 14:38:09.219344 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot
Dec 13 14:38:09.219414 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 14:38:09.219482 kernel: pci 0004:01:00.0: enabling Extended Tags
Dec 13 14:38:09.219545 kernel: pci 0004:01:00.0: supports D1 D2
Dec 13 14:38:09.219609 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:38:09.219687 kernel: pci_bus 0004:02: extended config space not accessible
Dec 13 14:38:09.219766 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 14:38:09.219834 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff]
Dec 13 14:38:09.219901 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff]
Dec 13 14:38:09.219970 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f]
Dec 13 14:38:09.220036 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb
Dec 13 14:38:09.220102 kernel: pci 0004:02:00.0: supports D1 D2
Dec 13 14:38:09.220169 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:38:09.220241 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330
Dec 13 14:38:09.220306 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit]
Dec 13 14:38:09.220370 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 14:38:09.220426 kernel: pci_bus 0004:00: on NUMA node 0
Dec 13 14:38:09.220490 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000
Dec 13 14:38:09.220553 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:38:09.220616 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 14:38:09.220679 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:38:09.220745 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:38:09.220807 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:38:09.220870 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13
14:38:09.220935 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 14:38:09.220998 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 14:38:09.221061 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Dec 13 14:38:09.221123 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 14:38:09.221185 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Dec 13 14:38:09.221247 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 14:38:09.221311 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.221374 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.221437 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.221499 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.221561 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.221623 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.221685 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.221753 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.221816 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.221881 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.221944 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.222005 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.222070 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 14:38:09.222134 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Dec 13 14:38:09.222197 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Dec 13 14:38:09.222264 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Dec 13 14:38:09.222331 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Dec 13 14:38:09.222397 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Dec 13 14:38:09.222465 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Dec 13 14:38:09.222530 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Dec 13 14:38:09.222593 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 14:38:09.222656 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Dec 13 14:38:09.222721 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 14:38:09.222785 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 14:38:09.222850 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Dec 13 14:38:09.222913 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Dec 13 14:38:09.222977 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Dec 13 14:38:09.223040 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 14:38:09.223102 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Dec 13 14:38:09.223164 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Dec 13 14:38:09.223226 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 14:38:09.223283 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 14:38:09.223344 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Dec 13 14:38:09.223400 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Dec 13 14:38:09.223468 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Dec 13 14:38:09.223527 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 14:38:09.223589 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] Dec 13 14:38:09.223654 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Dec 13 14:38:09.223719 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 14:38:09.223784 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Dec 13 14:38:09.223843 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 14:38:09.223853 kernel: iommu: Default domain type: Translated Dec 13 14:38:09.223861 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 14:38:09.223869 kernel: efivars: Registered efivars operations Dec 13 14:38:09.223934 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Dec 13 14:38:09.224001 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Dec 13 14:38:09.224070 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Dec 13 14:38:09.224081 kernel: vgaarb: loaded Dec 13 14:38:09.224089 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 14:38:09.224097 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:38:09.224105 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:38:09.224113 kernel: pnp: PnP ACPI init Dec 13 14:38:09.224180 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Dec 13 14:38:09.224241 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Dec 13 14:38:09.224298 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Dec 13 14:38:09.224355 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Dec 13 14:38:09.224411 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Dec 13 14:38:09.224468 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Dec 13 14:38:09.224525 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved Dec 13 14:38:09.224582 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Dec 13 14:38:09.224594 kernel: pnp: PnP ACPI: found 1 devices Dec 13 14:38:09.224603 kernel: NET: Registered PF_INET protocol family Dec 13 14:38:09.224611 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224619 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 14:38:09.224627 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:38:09.224635 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:38:09.224643 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224651 kernel: TCP: Hash tables configured (established 524288 bind 65536) Dec 13 14:38:09.224658 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224668 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 14:38:09.224676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:38:09.224744 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Dec 13 14:38:09.224754 kernel: kvm [1]: IPA Size Limit: 48 bits Dec 13 14:38:09.224762 kernel: kvm [1]: GICv3: no GICV resource entry Dec 13 14:38:09.224770 kernel: kvm [1]: disabling GICv2 emulation Dec 13 14:38:09.224778 kernel: kvm [1]: GIC system register CPU interface enabled Dec 13 14:38:09.224786 kernel: kvm [1]: vgic interrupt IRQ9 Dec 13 14:38:09.224793 kernel: kvm [1]: VHE mode initialized successfully Dec 13 14:38:09.224803 kernel: Initialise system trusted keyrings Dec 13 14:38:09.224810 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Dec 13 14:38:09.224818 kernel: Key type asymmetric registered Dec 13 14:38:09.224825 kernel: Asymmetric key parser 'x509' registered Dec 13 14:38:09.224833 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Dec 13 14:38:09.224841 kernel: io scheduler mq-deadline registered Dec 13 14:38:09.224849 kernel: io scheduler kyber registered Dec 13 14:38:09.224856 kernel: io scheduler bfq registered Dec 13 14:38:09.224864 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 14:38:09.224874 kernel: ACPI: button: Power Button [PWRB] Dec 13 14:38:09.224881 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). Dec 13 14:38:09.224889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:38:09.224960 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Dec 13 14:38:09.225020 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225079 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225137 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.225197 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Dec 13 14:38:09.225255 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Dec 13 14:38:09.225321 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Dec 13 14:38:09.225380 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225438 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225497 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.225554 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Dec 13 14:38:09.225614 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Dec 13 14:38:09.225680 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Dec 13 14:38:09.225872 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.225934 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.225993 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226051 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226112 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Dec 13 14:38:09.226178 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Dec 13 14:38:09.226237 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.226294 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.226352 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226410 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226467 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Dec 13 14:38:09.226543 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Dec 13 14:38:09.226602 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.226660 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.226722 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.226780 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Dec 13 14:38:09.226839 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Dec 13 14:38:09.226909 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Dec 13 14:38:09.226968 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227028 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227086 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227144 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227202 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Dec 13 14:38:09.227267 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Dec 13 14:38:09.227328 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227386 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227445 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227503 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227561 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Dec 13 14:38:09.227625 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Dec 13 14:38:09.227686 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 14:38:09.227748 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 14:38:09.227806 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Dec 13 14:38:09.227864 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Dec 13 14:38:09.227921 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Dec 13 14:38:09.227932 kernel: thunder_xcv, ver 1.0 Dec 13 14:38:09.227940 kernel: thunder_bgx, ver 1.0 Dec 13 14:38:09.227949 kernel: nicpf, ver 1.0 Dec 13 14:38:09.227957 kernel: nicvf, ver 1.0 Dec 13 14:38:09.228022 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:38:09.228082 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:38:07 UTC (1734100687) Dec 13 14:38:09.228092 kernel: efifb: probing for efifb Dec 13 14:38:09.228100 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Dec 13 14:38:09.228108 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 13 14:38:09.228115 kernel: efifb: scrolling: redraw Dec 
13 14:38:09.228125 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:38:09.228132 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 14:38:09.228140 kernel: fb0: EFI VGA frame buffer device Dec 13 14:38:09.228148 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Dec 13 14:38:09.228156 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:38:09.228164 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 14:38:09.228172 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 14:38:09.228179 kernel: watchdog: Hard watchdog permanently disabled Dec 13 14:38:09.228187 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:38:09.228196 kernel: Segment Routing with IPv6 Dec 13 14:38:09.228204 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:38:09.228211 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:38:09.228219 kernel: Key type dns_resolver registered Dec 13 14:38:09.228227 kernel: registered taskstats version 1 Dec 13 14:38:09.228234 kernel: Loading compiled-in X.509 certificates Dec 13 14:38:09.228242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 14:38:09.228250 kernel: Key type .fscrypt registered Dec 13 14:38:09.228257 kernel: Key type fscrypt-provisioning registered Dec 13 14:38:09.228266 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:38:09.228274 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:38:09.228282 kernel: ima: No architecture policies found Dec 13 14:38:09.228289 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:38:09.228354 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Dec 13 14:38:09.228418 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228482 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Dec 13 14:38:09.228546 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228611 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Dec 13 14:38:09.228676 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228744 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Dec 13 14:38:09.228807 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Dec 13 14:38:09.228872 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Dec 13 14:38:09.228935 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Dec 13 14:38:09.228998 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Dec 13 14:38:09.229061 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229125 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Dec 13 14:38:09.229191 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229254 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Dec 13 14:38:09.229316 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Dec 13 14:38:09.229381 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Dec 13 14:38:09.229443 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229507 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Dec 13 14:38:09.229569 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229633 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Dec 13 14:38:09.229695 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229765 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 Dec 13 14:38:09.229827 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Dec 13 14:38:09.229891 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Dec 13 14:38:09.229954 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230018 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Dec 13 14:38:09.230079 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230144 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Dec 13 14:38:09.230206 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Dec 13 14:38:09.230272 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Dec 13 14:38:09.230336 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230399 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Dec 13 14:38:09.230462 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230526 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Dec 13 14:38:09.230589 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230652 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Dec 13 14:38:09.230719 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Dec 13 14:38:09.230786 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Dec 13 14:38:09.230849 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Dec 13 14:38:09.230912 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Dec 13 14:38:09.230974 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231037 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Dec 13 14:38:09.231100 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231163 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Dec 13 14:38:09.231226 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Dec 13 14:38:09.231291 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Dec 13 14:38:09.231353 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
Dec 13 14:38:09.231419 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Dec 13 14:38:09.231481 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231545 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Dec 13 14:38:09.231607 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231671 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Dec 13 14:38:09.231736 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Dec 13 14:38:09.231800 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Dec 13 14:38:09.231865 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Dec 13 14:38:09.231929 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Dec 13 14:38:09.231991 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Dec 13 14:38:09.232054 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Dec 13 14:38:09.232116 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Dec 13 14:38:09.232182 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Dec 13 14:38:09.232192 kernel: clk: Disabling unused clocks Dec 13 14:38:09.232202 kernel: Freeing unused kernel memory: 39936K Dec 13 14:38:09.232209 kernel: Run /init as init process Dec 13 14:38:09.232217 kernel: with arguments: Dec 13 14:38:09.232225 kernel: /init Dec 13 14:38:09.232232 kernel: with environment: Dec 13 14:38:09.232240 kernel: HOME=/ Dec 13 14:38:09.232247 kernel: TERM=linux Dec 13 14:38:09.232255 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:38:09.232265 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 14:38:09.232276 systemd[1]: Detected architecture arm64. Dec 13 14:38:09.232284 systemd[1]: Running in initrd. 
Dec 13 14:38:09.232292 systemd[1]: No hostname configured, using default hostname. Dec 13 14:38:09.232300 systemd[1]: Hostname set to . Dec 13 14:38:09.232307 systemd[1]: Initializing machine ID from random generator. Dec 13 14:38:09.232315 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:38:09.232324 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 14:38:09.232333 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 14:38:09.232342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 14:38:09.232350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 14:38:09.232358 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 14:38:09.232366 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 14:38:09.232375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 14:38:09.232384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 14:38:09.232393 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 14:38:09.232401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:38:09.232410 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:38:09.232418 systemd[1]: Reached target slices.target - Slice Units. Dec 13 14:38:09.232426 systemd[1]: Reached target swap.target - Swaps. Dec 13 14:38:09.232434 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:38:09.232442 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 13 14:38:09.232450 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 14:38:09.232458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 14:38:09.232468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 14:38:09.232476 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 14:38:09.232484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 14:38:09.232492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 14:38:09.232500 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:38:09.232508 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 14:38:09.232516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 14:38:09.232524 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 14:38:09.232534 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:38:09.232542 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 14:38:09.232550 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 14:38:09.232580 systemd-journald[900]: Collecting audit messages is disabled. Dec 13 14:38:09.232601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:38:09.232609 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 14:38:09.232618 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:38:09.232626 kernel: Bridge firewalling registered Dec 13 14:38:09.232634 systemd-journald[900]: Journal started Dec 13 14:38:09.232658 systemd-journald[900]: Runtime Journal (/run/log/journal/dd4de9db7be2427d914b4813423bb1c5) is 8.0M, max 4.0G, 3.9G free. 
Dec 13 14:38:09.191828 systemd-modules-load[902]: Inserted module 'overlay' Dec 13 14:38:09.271200 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 14:38:09.213724 systemd-modules-load[902]: Inserted module 'br_netfilter' Dec 13 14:38:09.277176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:38:09.287969 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:38:09.299308 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 14:38:09.309934 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:38:09.334868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:38:09.352134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:38:09.358726 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 14:38:09.370164 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 14:38:09.386133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:38:09.402373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:38:09.418784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 14:38:09.430047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:38:09.458823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 14:38:09.472210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 14:38:09.478672 dracut-cmdline[943]: dracut-dracut-053 Dec 13 14:38:09.491557 dracut-cmdline[943]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 14:38:09.485737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 14:38:09.501731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:38:09.507869 systemd-resolved[951]: Positive Trust Anchors: Dec 13 14:38:09.507880 systemd-resolved[951]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:38:09.507911 systemd-resolved[951]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:38:09.522658 systemd-resolved[951]: Defaulting to hostname 'linux'. Dec 13 14:38:09.536403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:38:09.653736 kernel: SCSI subsystem initialized Dec 13 14:38:09.555603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:38:09.669549 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 14:38:09.682714 kernel: iscsi: registered transport (tcp)
Dec 13 14:38:09.710226 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:38:09.710251 kernel: QLogic iSCSI HBA Driver
Dec 13 14:38:09.753770 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:38:09.782831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 14:38:09.822721 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:38:09.822751 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:38:09.837404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 14:38:09.902719 kernel: raid6: neonx8 gen() 15849 MB/s
Dec 13 14:38:09.928717 kernel: raid6: neonx4 gen() 15877 MB/s
Dec 13 14:38:09.953717 kernel: raid6: neonx2 gen() 13243 MB/s
Dec 13 14:38:09.978717 kernel: raid6: neonx1 gen() 10579 MB/s
Dec 13 14:38:10.003717 kernel: raid6: int64x8 gen() 6810 MB/s
Dec 13 14:38:10.028717 kernel: raid6: int64x4 gen() 7366 MB/s
Dec 13 14:38:10.053718 kernel: raid6: int64x2 gen() 6136 MB/s
Dec 13 14:38:10.082277 kernel: raid6: int64x1 gen() 5077 MB/s
Dec 13 14:38:10.082298 kernel: raid6: using algorithm neonx4 gen() 15877 MB/s
Dec 13 14:38:10.117246 kernel: raid6: .... xor() 12410 MB/s, rmw enabled
Dec 13 14:38:10.117267 kernel: raid6: using neon recovery algorithm
Dec 13 14:38:10.136717 kernel: xor: measuring software checksum speed
Dec 13 14:38:10.148621 kernel: 8regs : 20786 MB/sec
Dec 13 14:38:10.148641 kernel: 32regs : 21704 MB/sec
Dec 13 14:38:10.156421 kernel: arm64_neon : 28244 MB/sec
Dec 13 14:38:10.164385 kernel: xor: using function: arm64_neon (28244 MB/sec)
Dec 13 14:38:10.224717 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 14:38:10.234029 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:38:10.256922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:38:10.270060 systemd-udevd[1136]: Using default interface naming scheme 'v255'.
Dec 13 14:38:10.273046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:38:10.286816 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 14:38:10.300819 dracut-pre-trigger[1148]: rd.md=0: removing MD RAID activation
Dec 13 14:38:10.327095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:38:10.348878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 14:38:10.453316 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:38:10.482268 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:38:10.482290 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:38:10.505079 kernel: ACPI: bus type USB registered
Dec 13 14:38:10.505104 kernel: usbcore: registered new interface driver usbfs
Dec 13 14:38:10.514903 kernel: usbcore: registered new interface driver hub
Dec 13 14:38:10.514930 kernel: PTP clock support registered
Dec 13 14:38:10.514949 kernel: usbcore: registered new device driver usb
Dec 13 14:38:10.539852 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 14:38:10.549656 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:38:10.710298 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Dec 13 14:38:10.710315 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Dec 13 14:38:10.710324 kernel: igb 0003:03:00.0: Adding to iommu group 31
Dec 13 14:38:10.742894 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32
Dec 13 14:38:11.024148 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Dec 13 14:38:11.024324 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 14:38:11.024447 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault
Dec 13 14:38:11.024539 kernel: nvme 0005:03:00.0: Adding to iommu group 33
Dec 13 14:38:11.222412 kernel: igb 0003:03:00.0: added PHC on eth0
Dec 13 14:38:11.222504 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 34
Dec 13 14:38:11.660898 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:38:11.660995 kernel: nvme 0005:04:00.0: Adding to iommu group 35
Dec 13 14:38:11.661079 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c4
Dec 13 14:38:11.661155 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000
Dec 13 14:38:11.661233 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Dec 13 14:38:11.661314 kernel: igb 0003:03:00.1: Adding to iommu group 36
Dec 13 14:38:11.661396 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000410
Dec 13 14:38:11.661481 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
Dec 13 14:38:11.661558 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 14:38:11.661634 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 14:38:11.661722 kernel: nvme nvme0: pci function 0005:03:00.0
Dec 13 14:38:11.661821 kernel: hub 1-0:1.0: USB hub found
Dec 13 14:38:11.661924 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014
Dec 13 14:38:11.662003 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 14:38:11.662086 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:11.662163 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 14:38:11.662290 kernel: hub 2-0:1.0: USB hub found
Dec 13 14:38:11.662384 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
Dec 13 14:38:11.662460 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 14:38:11.662543 kernel: nvme nvme1: pci function 0005:04:00.0
Dec 13 14:38:11.662623 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
Dec 13 14:38:11.662693 kernel: nvme nvme0: 32/0/0 default/read/poll queues
Dec 13 14:38:11.662773 kernel: nvme nvme1: 32/0/0 default/read/poll queues
Dec 13 14:38:11.662845 kernel: igb 0003:03:00.1: added PHC on eth1
Dec 13 14:38:11.662923 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
Dec 13 14:38:11.663002 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6a:c5
Dec 13 14:38:11.663077 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
Dec 13 14:38:11.663152 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
Dec 13 14:38:11.663226 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:38:11.663237 kernel: GPT:9289727 != 1875385007
Dec 13 14:38:11.663246 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:38:11.663256 kernel: GPT:9289727 != 1875385007
Dec 13 14:38:11.663265 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:38:11.663276 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663285 kernel: igb 0003:03:00.1 eno2: renamed from eth1
Dec 13 14:38:11.663362 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1208)
Dec 13 14:38:11.663373 kernel: igb 0003:03:00.0 eno1: renamed from eth0
Dec 13 14:38:11.663450 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (1188)
Dec 13 14:38:11.663460 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
Dec 13 14:38:11.663584 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
Dec 13 14:38:11.663667 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:11.663686 kernel: hub 1-3:1.0: USB hub found
Dec 13 14:38:11.663789 kernel: hub 1-3:1.0: 4 ports detected
Dec 13 14:38:11.663876 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
Dec 13 14:38:11.663999 kernel: hub 2-3:1.0: USB hub found
Dec 13 14:38:11.664095 kernel: hub 2-3:1.0: 4 ports detected
Dec 13 14:38:11.664180 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 14:38:11.664261 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
Dec 13 14:38:12.350536 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
Dec 13 14:38:12.350637 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 14:38:12.350733 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
Dec 13 14:38:12.350814 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 14:38:10.549750 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:38:12.381897 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
Dec 13 14:38:12.382015 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
Dec 13 14:38:10.704841 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:38:12.401506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:38:10.716194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:38:10.716245 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:38:10.722144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:38:10.738826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 14:38:12.423194 disk-uuid[1291]: Primary Header is updated.
Dec 13 14:38:12.423194 disk-uuid[1291]: Secondary Entries is updated.
Dec 13 14:38:12.423194 disk-uuid[1291]: Secondary Header is updated.
Dec 13 14:38:10.778177 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:38:12.443371 disk-uuid[1292]: The operation has completed successfully.
Dec 13 14:38:10.784650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:38:10.790382 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:38:10.795591 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:38:10.800690 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:38:10.816833 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 14:38:10.822485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 14:38:10.835905 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:38:11.053408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:38:11.244037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
Dec 13 14:38:11.314624 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
Dec 13 14:38:11.323798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Dec 13 14:38:11.332323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Dec 13 14:38:11.347806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
Dec 13 14:38:12.575212 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 14:38:11.365819 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 14:38:12.585926 sh[1480]: Success
Dec 13 14:38:12.492171 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:38:12.492288 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 14:38:12.535874 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 14:38:12.595344 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 14:38:12.616250 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 14:38:12.624126 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 14:38:12.728374 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614
Dec 13 14:38:12.728389 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:12.728399 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 14:38:12.728410 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 14:38:12.728419 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 14:38:12.728428 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 14:38:12.725019 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 14:38:12.734958 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 14:38:12.753863 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 14:38:12.760260 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 14:38:12.872223 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:12.872243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:12.872253 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:12.872263 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:12.872272 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:12.872282 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:12.868969 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 14:38:12.898827 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 14:38:12.909119 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:38:12.925923 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 14:38:12.951684 systemd-networkd[1681]: lo: Link UP
Dec 13 14:38:12.951690 systemd-networkd[1681]: lo: Gained carrier
Dec 13 14:38:12.955672 systemd-networkd[1681]: Enumeration completed
Dec 13 14:38:12.955783 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 14:38:12.956950 systemd-networkd[1681]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.962271 systemd[1]: Reached target network.target - Network.
Dec 13 14:38:12.984229 ignition[1670]: Ignition 2.20.0
Dec 13 14:38:12.995373 unknown[1670]: fetched base config from "system"
Dec 13 14:38:12.984235 ignition[1670]: Stage: fetch-offline
Dec 13 14:38:12.995380 unknown[1670]: fetched user config from "system"
Dec 13 14:38:12.984339 ignition[1670]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:12.997689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:38:12.984347 ignition[1670]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:13.007993 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:38:12.984665 ignition[1670]: parsed url from cmdline: ""
Dec 13 14:38:13.009252 systemd-networkd[1681]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.984668 ignition[1670]: no config URL provided
Dec 13 14:38:13.025835 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 14:38:12.984673 ignition[1670]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:38:13.062380 systemd-networkd[1681]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:12.984726 ignition[1670]: parsing config with SHA512: 5eb19a456babce2db99b368f27957c3963b117e77d28155ccd9360657981450c19c94aa0b64b7a8e55864a5696cf01422a50857c29110c3158b3149a2ce92242
Dec 13 14:38:12.995842 ignition[1670]: fetch-offline: fetch-offline passed
Dec 13 14:38:12.995847 ignition[1670]: POST message to Packet Timeline
Dec 13 14:38:12.995851 ignition[1670]: POST Status error: resource requires networking
Dec 13 14:38:12.995940 ignition[1670]: Ignition finished successfully
Dec 13 14:38:13.039188 ignition[1706]: Ignition 2.20.0
Dec 13 14:38:13.039194 ignition[1706]: Stage: kargs
Dec 13 14:38:13.039437 ignition[1706]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:13.039445 ignition[1706]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:13.040964 ignition[1706]: kargs: kargs passed
Dec 13 14:38:13.040981 ignition[1706]: POST message to Packet Timeline
Dec 13 14:38:13.041199 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:13.044271 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49074->[::1]:53: read: connection refused
Dec 13 14:38:13.245027 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 14:38:13.245848 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51934->[::1]:53: read: connection refused
Dec 13 14:38:13.643720 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
Dec 13 14:38:13.646140 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 14:38:13.646601 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35991->[::1]:53: read: connection refused
Dec 13 14:38:13.647196 systemd-networkd[1681]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:38:14.246723 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
Dec 13 14:38:14.250254 systemd-networkd[1681]: eno1: Link UP
Dec 13 14:38:14.250384 systemd-networkd[1681]: eno2: Link UP
Dec 13 14:38:14.250506 systemd-networkd[1681]: enP1p1s0f0np0: Link UP
Dec 13 14:38:14.250649 systemd-networkd[1681]: enP1p1s0f0np0: Gained carrier
Dec 13 14:38:14.262939 systemd-networkd[1681]: enP1p1s0f1np1: Link UP
Dec 13 14:38:14.295761 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv4 address 147.28.228.38/30, gateway 147.28.228.37 acquired from 147.28.144.140
Dec 13 14:38:14.446767 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 14:38:14.447183 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47013->[::1]:53: read: connection refused
Dec 13 14:38:14.649922 systemd-networkd[1681]: enP1p1s0f1np1: Gained carrier
Dec 13 14:38:15.273791 systemd-networkd[1681]: enP1p1s0f0np0: Gained IPv6LL
Dec 13 14:38:15.977838 systemd-networkd[1681]: enP1p1s0f1np1: Gained IPv6LL
Dec 13 14:38:16.047919 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 14:38:16.048350 ignition[1706]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57523->[::1]:53: read: connection refused
Dec 13 14:38:19.251740 ignition[1706]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 14:38:20.389126 ignition[1706]: GET result: OK
Dec 13 14:38:20.680575 ignition[1706]: Ignition finished successfully
Dec 13 14:38:20.684112 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 14:38:20.694899 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 14:38:20.710326 ignition[1726]: Ignition 2.20.0
Dec 13 14:38:20.710334 ignition[1726]: Stage: disks
Dec 13 14:38:20.710566 ignition[1726]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:20.710576 ignition[1726]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:20.712095 ignition[1726]: disks: disks passed
Dec 13 14:38:20.712100 ignition[1726]: POST message to Packet Timeline
Dec 13 14:38:20.712119 ignition[1726]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:21.324405 ignition[1726]: GET result: OK
Dec 13 14:38:21.641388 ignition[1726]: Ignition finished successfully
Dec 13 14:38:21.643817 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 14:38:21.649791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 14:38:21.656956 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 14:38:21.664794 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:38:21.673086 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:38:21.681739 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:38:21.699860 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 14:38:21.715684 systemd-fsck[1744]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 14:38:21.719688 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 14:38:21.744765 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 14:38:21.810529 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 14:38:21.820338 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 14:38:21.816233 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:38:21.834795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:38:21.927111 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1755)
Dec 13 14:38:21.927141 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:21.927161 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:21.927180 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:21.927198 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:21.927222 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:21.840962 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 14:38:21.933356 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 14:38:21.944028 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Dec 13 14:38:21.959874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:38:21.959902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:38:21.992091 coreos-metadata[1776]: Dec 13 14:38:21.989 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:38:22.008608 coreos-metadata[1774]: Dec 13 14:38:21.989 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 14:38:21.973114 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:38:21.986729 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 14:38:22.009833 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 14:38:22.042711 initrd-setup-root[1795]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:38:22.048642 initrd-setup-root[1803]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:38:22.054992 initrd-setup-root[1811]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:38:22.061112 initrd-setup-root[1818]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:38:22.131203 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 14:38:22.152784 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 14:38:22.183561 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:22.159300 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 14:38:22.189930 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 14:38:22.206187 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 14:38:22.214138 ignition[1895]: INFO : Ignition 2.20.0
Dec 13 14:38:22.214138 ignition[1895]: INFO : Stage: mount
Dec 13 14:38:22.227595 ignition[1895]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:22.227595 ignition[1895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:22.227595 ignition[1895]: INFO : mount: mount passed
Dec 13 14:38:22.227595 ignition[1895]: INFO : POST message to Packet Timeline
Dec 13 14:38:22.227595 ignition[1895]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:22.727253 ignition[1895]: INFO : GET result: OK
Dec 13 14:38:22.992919 coreos-metadata[1776]: Dec 13 14:38:22.992 INFO Fetch successful
Dec 13 14:38:23.018457 ignition[1895]: INFO : Ignition finished successfully
Dec 13 14:38:23.020783 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 14:38:23.037162 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 14:38:23.037303 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Dec 13 14:38:23.409243 coreos-metadata[1774]: Dec 13 14:38:23.409 INFO Fetch successful
Dec 13 14:38:23.449642 coreos-metadata[1774]: Dec 13 14:38:23.449 INFO wrote hostname ci-4186.0.0-a-f374b16159 to /sysroot/etc/hostname
Dec 13 14:38:23.453006 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:38:23.471820 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 14:38:23.480448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:38:23.517671 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1917)
Dec 13 14:38:23.517729 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:38:23.531992 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:38:23.544963 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:38:23.567839 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:38:23.567860 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
Dec 13 14:38:23.576037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:38:23.607482 ignition[1937]: INFO : Ignition 2.20.0
Dec 13 14:38:23.607482 ignition[1937]: INFO : Stage: files
Dec 13 14:38:23.616942 ignition[1937]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:23.616942 ignition[1937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:23.616942 ignition[1937]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:38:23.616942 ignition[1937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:38:23.616942 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 14:38:23.612932 unknown[1937]: wrote ssh authorized keys file for user: core
Dec 13 14:38:24.113228 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:38:26.516221 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.527008 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 14:38:26.706718 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 14:38:26.803729 ignition[1937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 14:38:26.803729 ignition[1937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:38:26.828188 ignition[1937]: INFO : files: files passed
Dec 13 14:38:26.828188 ignition[1937]: INFO : POST message to Packet Timeline
Dec 13 14:38:26.828188 ignition[1937]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:27.295630 ignition[1937]: INFO : GET result: OK
Dec 13 14:38:27.590112 ignition[1937]: INFO : Ignition finished successfully
Dec 13 14:38:27.592842 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 14:38:27.613885 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 14:38:27.626365 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 14:38:27.644686 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:38:27.644787 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 14:38:27.662609 initrd-setup-root-after-ignition[1981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.662609 initrd-setup-root-after-ignition[1981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.657155 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:38:27.714516 initrd-setup-root-after-ignition[1985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:38:27.669933 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 14:38:27.695907 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 14:38:27.728433 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:38:27.728527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 14:38:27.738482 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 14:38:27.754427 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 14:38:27.765822 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 14:38:27.776873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 14:38:27.800752 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:38:27.824874 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 14:38:27.847038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:38:27.858378 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:38:27.869735 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 14:38:27.880900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:38:27.880999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:38:27.892327 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 14:38:27.903327 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 14:38:27.914387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 14:38:27.925308 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:38:27.936238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 14:38:27.947207 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 14:38:27.958114 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:38:27.969104 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 14:38:27.985397 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 14:38:27.996349 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 14:38:28.007414 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:38:28.007515 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:38:28.018519 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:38:28.029305 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:38:28.040285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 14:38:28.044748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:38:28.051363 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:38:28.051459 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:38:28.062530 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:38:28.062620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:38:28.073478 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 14:38:28.090019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:38:28.090096 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:38:28.101140 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 14:38:28.112246 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 14:38:28.123486 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:38:28.123574 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:38:28.134834 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:38:28.219253 ignition[2006]: INFO : Ignition 2.20.0
Dec 13 14:38:28.219253 ignition[2006]: INFO : Stage: umount
Dec 13 14:38:28.219253 ignition[2006]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:38:28.219253 ignition[2006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 14:38:28.219253 ignition[2006]: INFO : umount: umount passed
Dec 13 14:38:28.219253 ignition[2006]: INFO : POST message to Packet Timeline
Dec 13 14:38:28.219253 ignition[2006]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 14:38:28.134936 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:38:28.146047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:38:28.146136 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:38:28.157294 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:38:28.157375 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 14:38:28.173963 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 14:38:28.174050 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:38:28.194925 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 14:38:28.201888 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:38:28.201998 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:38:28.214195 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 14:38:28.225143 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:38:28.225258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:38:28.236402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:38:28.236489 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:38:28.249514 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:38:28.250271 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:38:28.250350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 14:38:28.259619 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:38:28.259697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 14:38:29.943794 ignition[2006]: INFO : GET result: OK
Dec 13 14:38:30.231800 ignition[2006]: INFO : Ignition finished successfully
Dec 13 14:38:30.234889 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:38:30.235149 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 14:38:30.242074 systemd[1]: Stopped target network.target - Network.
Dec 13 14:38:30.251388 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:38:30.251447 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 14:38:30.261091 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:38:30.261125 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 14:38:30.270653 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:38:30.270688 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 14:38:30.280109 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 14:38:30.280143 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 14:38:30.289702 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:38:30.289738 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 14:38:30.299423 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 14:38:30.306730 systemd-networkd[1681]: enP1p1s0f0np0: DHCPv6 lease lost
Dec 13 14:38:30.308900 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 14:38:30.318719 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:38:30.318810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 14:38:30.318867 systemd-networkd[1681]: enP1p1s0f1np1: DHCPv6 lease lost
Dec 13 14:38:30.330639 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 14:38:30.330752 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:38:30.338695 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:38:30.338899 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 14:38:30.348982 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:38:30.349125 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:38:30.364827 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 14:38:30.372660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:38:30.372732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:38:30.382674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:38:30.382713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:38:30.392718 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:38:30.392767 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:38:30.403086 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:38:30.425096 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:38:30.425222 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:38:30.441652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:38:30.441777 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:38:30.450727 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:38:30.450806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:38:30.461397 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:38:30.461438 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:38:30.472536 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:38:30.472611 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:38:30.483051 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:38:30.483108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:38:30.502934 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 14:38:30.515994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:38:30.516045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:38:30.527123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:38:30.527167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:38:30.538713 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:38:30.538785 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 14:38:31.076779 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:38:31.077809 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 14:38:31.088038 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 14:38:31.108873 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 14:38:31.122223 systemd[1]: Switching root.
Dec 13 14:38:31.174776 systemd-journald[900]: Journal stopped
Dec 13 14:38:33.133724 systemd-journald[900]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:38:33.133751 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:38:33.133761 kernel: SELinux: policy capability open_perms=1
Dec 13 14:38:33.133769 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:38:33.133776 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:38:33.133784 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:38:33.133792 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:38:33.133801 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:38:33.133809 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:38:33.133816 kernel: audit: type=1403 audit(1734100711.356:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:38:33.133825 systemd[1]: Successfully loaded SELinux policy in 113.814ms.
Dec 13 14:38:33.133834 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.549ms.
Dec 13 14:38:33.133844 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 14:38:33.133853 systemd[1]: Detected architecture arm64.
Dec 13 14:38:33.133863 systemd[1]: Detected first boot.
Dec 13 14:38:33.133872 systemd[1]: Hostname set to .
Dec 13 14:38:33.133881 systemd[1]: Initializing machine ID from random generator.
Dec 13 14:38:33.133890 zram_generator::config[2077]: No configuration found.
Dec 13 14:38:33.133900 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:38:33.133908 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:38:33.133917 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 14:38:33.133926 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:38:33.133935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 14:38:33.133943 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 14:38:33.133952 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 14:38:33.133963 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 14:38:33.133974 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 14:38:33.133983 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 14:38:33.133991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 14:38:33.134000 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 14:38:33.134009 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:38:33.134018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:38:33.134026 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 14:38:33.134036 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 14:38:33.134045 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 14:38:33.134054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 14:38:33.134063 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 14:38:33.134072 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:38:33.134081 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 14:38:33.134090 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 14:38:33.134100 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:38:33.134110 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 14:38:33.134120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:38:33.134129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:38:33.134138 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 14:38:33.134147 systemd[1]: Reached target swap.target - Swaps.
Dec 13 14:38:33.134156 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 14:38:33.134165 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 14:38:33.134174 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:38:33.134184 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:38:33.134193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:38:33.134202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 14:38:33.134211 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 14:38:33.134220 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 14:38:33.134231 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 14:38:33.134240 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 14:38:33.134249 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 14:38:33.134258 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 14:38:33.134268 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:38:33.134277 systemd[1]: Reached target machines.target - Containers.
Dec 13 14:38:33.134286 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 14:38:33.134295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:38:33.134305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 14:38:33.134315 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 14:38:33.134324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:38:33.134333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:38:33.134342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:38:33.134353 kernel: ACPI: bus type drm_connector registered
Dec 13 14:38:33.134361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 14:38:33.134370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:38:33.134379 kernel: fuse: init (API version 7.39)
Dec 13 14:38:33.134389 kernel: loop: module loaded
Dec 13 14:38:33.134397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:38:33.134406 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:38:33.134415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 14:38:33.134424 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:38:33.134434 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:38:33.134443 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 14:38:33.134452 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 14:38:33.134477 systemd-journald[2181]: Collecting audit messages is disabled.
Dec 13 14:38:33.134496 systemd-journald[2181]: Journal started
Dec 13 14:38:33.134521 systemd-journald[2181]: Runtime Journal (/run/log/journal/2be99a0ed894417bbd1aee061385a773) is 8.0M, max 4.0G, 3.9G free.
Dec 13 14:38:31.869858 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:38:31.885184 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 14:38:31.885490 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:38:31.886888 systemd[1]: systemd-journald.service: Consumed 3.210s CPU time.
Dec 13 14:38:33.157724 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 14:38:33.184724 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 14:38:33.205722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 14:38:33.227808 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:38:33.227845 systemd[1]: Stopped verity-setup.service.
Dec 13 14:38:33.251714 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 14:38:33.257515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 14:38:33.262888 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 14:38:33.268139 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 14:38:33.273343 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 14:38:33.278517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 14:38:33.283676 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 14:38:33.289843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 14:38:33.295200 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:38:33.300416 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:38:33.301810 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 14:38:33.307030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:38:33.307159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 14:38:33.312299 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:38:33.312427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 14:38:33.317553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:38:33.317687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 14:38:33.322785 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:38:33.322913 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 14:38:33.329072 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:38:33.329203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 14:38:33.334064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:38:33.338971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 14:38:33.343793 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 14:38:33.348831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:38:33.363092 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 14:38:33.382845 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 14:38:33.388600 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 14:38:33.393468 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:38:33.393496 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:38:33.398914 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 14:38:33.404594 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 14:38:33.410377 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 14:38:33.415264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 14:38:33.416704 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 14:38:33.422374 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 14:38:33.427160 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:38:33.428230 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 14:38:33.432908 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 14:38:33.433792 systemd-journald[2181]: Time spent on flushing to /var/log/journal/2be99a0ed894417bbd1aee061385a773 is 30.626ms for 2348 entries.
Dec 13 14:38:33.433792 systemd-journald[2181]: System Journal (/var/log/journal/2be99a0ed894417bbd1aee061385a773) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:38:33.482752 systemd-journald[2181]: Received client request to flush runtime journal.
Dec 13 14:38:33.482909 kernel: loop0: detected capacity change from 0 to 189592
Dec 13 14:38:33.434030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 14:38:33.452014 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 14:38:33.457792 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 14:38:33.463497 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 14:38:33.479554 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 14:38:33.484583 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 14:38:33.488738 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:38:33.499276 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 14:38:33.503993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 14:38:33.508930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 14:38:33.513794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:38:33.518582 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 14:38:33.528737 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 14:38:33.533716 kernel: loop1: detected capacity change from 0 to 8
Dec 13 14:38:33.553974 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 14:38:33.560009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 14:38:33.565536 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:38:33.566226 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 14:38:33.572037 udevadm[2219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:38:33.581242 systemd-tmpfiles[2235]: ACLs are not supported, ignoring.
Dec 13 14:38:33.581255 systemd-tmpfiles[2235]: ACLs are not supported, ignoring.
Dec 13 14:38:33.585955 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:38:33.594714 kernel: loop2: detected capacity change from 0 to 116784
Dec 13 14:38:33.649727 kernel: loop3: detected capacity change from 0 to 113552
Dec 13 14:38:33.700718 kernel: loop4: detected capacity change from 0 to 189592
Dec 13 14:38:33.710889 ldconfig[2207]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:38:33.713042 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 14:38:33.718714 kernel: loop5: detected capacity change from 0 to 8
Dec 13 14:38:33.720718 kernel: loop6: detected capacity change from 0 to 116784
Dec 13 14:38:33.743717 kernel: loop7: detected capacity change from 0 to 113552
Dec 13 14:38:33.747982 (sd-merge)[2246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Dec 13 14:38:33.748398 (sd-merge)[2246]: Merged extensions into '/usr'.
Dec 13 14:38:33.751221 systemd[1]: Reloading requested from client PID 2216 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 14:38:33.751233 systemd[1]: Reloading...
Dec 13 14:38:33.793719 zram_generator::config[2272]: No configuration found.
Dec 13 14:38:33.885206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:38:33.932652 systemd[1]: Reloading finished in 181 ms.
Dec 13 14:38:33.961184 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 14:38:33.966226 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 14:38:33.982916 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:38:33.988630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 14:38:33.995112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:38:34.002027 systemd[1]: Reloading requested from client PID 2326 ('systemctl') (unit ensure-sysext.service)...
Dec 13 14:38:34.002038 systemd[1]: Reloading...
Dec 13 14:38:34.008414 systemd-tmpfiles[2327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:38:34.008611 systemd-tmpfiles[2327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 14:38:34.009207 systemd-tmpfiles[2327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:38:34.009396 systemd-tmpfiles[2327]: ACLs are not supported, ignoring.
Dec 13 14:38:34.009442 systemd-tmpfiles[2327]: ACLs are not supported, ignoring.
Dec 13 14:38:34.011997 systemd-tmpfiles[2327]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:38:34.012005 systemd-tmpfiles[2327]: Skipping /boot
Dec 13 14:38:34.020077 systemd-tmpfiles[2327]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 14:38:34.020085 systemd-tmpfiles[2327]: Skipping /boot
Dec 13 14:38:34.020280 systemd-udevd[2328]: Using default interface naming scheme 'v255'.
Dec 13 14:38:34.047715 zram_generator::config[2356]: No configuration found.
Dec 13 14:38:34.074984 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (2363) Dec 13 14:38:34.075120 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2394) Dec 13 14:38:34.095724 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (2363) Dec 13 14:38:34.126714 kernel: IPMI message handler: version 39.2 Dec 13 14:38:34.135715 kernel: ipmi device interface Dec 13 14:38:34.147745 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 14:38:34.147789 kernel: ipmi_si: IPMI System Interface driver Dec 13 14:38:34.161442 kernel: ipmi_si: Unable to find any System Interface(s) Dec 13 14:38:34.172143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:38:34.232732 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 14:38:34.232792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Dec 13 14:38:34.237925 systemd[1]: Reloading finished in 235 ms. Dec 13 14:38:34.255221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 14:38:34.274140 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:38:34.292704 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 14:38:34.300269 systemd[1]: Finished ensure-sysext.service. Dec 13 14:38:34.337877 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 14:38:34.344387 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Dec 13 14:38:34.349750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 14:38:34.350901 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 14:38:34.357161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 14:38:34.363267 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 14:38:34.369158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 14:38:34.369666 lvm[2510]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:38:34.375103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 14:38:34.375786 augenrules[2530]: No rules Dec 13 14:38:34.380025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:38:34.381005 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 14:38:34.386934 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 14:38:34.393739 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 14:38:34.400593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 14:38:34.407164 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 14:38:34.413128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 14:38:34.419211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:38:34.424921 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:38:34.425107 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 13 14:38:34.431052 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 14:38:34.436392 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 14:38:34.441304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:38:34.441450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 14:38:34.446356 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:38:34.446497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 14:38:34.451447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:38:34.451589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 14:38:34.456562 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:38:34.456693 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 14:38:34.461406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 14:38:34.466175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 14:38:34.472586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:38:34.483933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:38:34.497954 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 14:38:34.502318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:38:34.502385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 14:38:34.503587 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 13 14:38:34.505198 lvm[2565]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:38:34.510043 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 14:38:34.514738 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:38:34.515793 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 14:38:34.520574 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 14:38:34.542113 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 14:38:34.547192 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 14:38:34.594397 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 14:38:34.598982 systemd-resolved[2541]: Positive Trust Anchors: Dec 13 14:38:34.598995 systemd-resolved[2541]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:38:34.599026 systemd-resolved[2541]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:38:34.599262 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 14:38:34.602766 systemd-resolved[2541]: Using system hostname 'ci-4186.0.0-a-f374b16159'. 
Dec 13 14:38:34.604265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:38:34.605656 systemd-networkd[2539]: lo: Link UP Dec 13 14:38:34.605662 systemd-networkd[2539]: lo: Gained carrier Dec 13 14:38:34.609386 systemd-networkd[2539]: bond0: netdev ready Dec 13 14:38:34.609597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:38:34.613823 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 14:38:34.618083 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 14:38:34.618350 systemd-networkd[2539]: Enumeration completed Dec 13 14:38:34.622304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 14:38:34.626699 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 14:38:34.628081 systemd-networkd[2539]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. Dec 13 14:38:34.630982 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 14:38:34.635246 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 14:38:34.639546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:38:34.639568 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:38:34.643853 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:38:34.648705 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 14:38:34.654364 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 14:38:34.664580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 14:38:34.669376 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 13 14:38:34.673875 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 14:38:34.678326 systemd[1]: Reached target network.target - Network. Dec 13 14:38:34.682614 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:38:34.686804 systemd[1]: Reached target basic.target - Basic System. Dec 13 14:38:34.690917 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 14:38:34.690939 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 14:38:34.711805 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 14:38:34.717210 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 14:38:34.722625 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 14:38:34.728044 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 14:38:34.733495 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 14:38:34.737867 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 14:38:34.738230 jq[2605]: false Dec 13 14:38:34.738504 coreos-metadata[2601]: Dec 13 14:38:34.738 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 14:38:34.738953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 14:38:34.741480 coreos-metadata[2601]: Dec 13 14:38:34.741 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 14:38:34.744171 dbus-daemon[2602]: [system] SELinux support is enabled Dec 13 14:38:34.744292 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 14:38:34.749692 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 13 14:38:34.752800 extend-filesystems[2606]: Found loop4 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found loop5 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found loop6 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found loop7 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme1n1 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p1 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p2 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p3 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found usr Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p4 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p6 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p7 Dec 13 14:38:34.758608 extend-filesystems[2606]: Found nvme0n1p9 Dec 13 14:38:34.758608 extend-filesystems[2606]: Checking size of /dev/nvme0n1p9 Dec 13 14:38:34.897113 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Dec 13 14:38:34.897137 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2361) Dec 13 14:38:34.755305 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 14:38:34.897213 extend-filesystems[2606]: Resized partition /dev/nvme0n1p9 Dec 13 14:38:34.881430 dbus-daemon[2602]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:38:34.766561 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 14:38:34.902031 extend-filesystems[2626]: resize2fs 1.47.1 (20-May-2024) Dec 13 14:38:34.773271 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 14:38:34.813222 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 14:38:34.813841 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:38:34.814491 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 14:38:34.911346 update_engine[2635]: I20241213 14:38:34.860152 2635 main.cc:92] Flatcar Update Engine starting Dec 13 14:38:34.911346 update_engine[2635]: I20241213 14:38:34.863399 2635 update_check_scheduler.cc:74] Next update check in 6m29s Dec 13 14:38:34.821462 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 14:38:34.911663 jq[2636]: true Dec 13 14:38:34.829749 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 14:38:34.842570 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:38:34.912029 tar[2638]: linux-arm64/helm Dec 13 14:38:34.842795 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 14:38:34.843101 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:38:34.912375 jq[2639]: true Dec 13 14:38:34.843332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 14:38:34.852246 systemd-logind[2623]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:38:34.852780 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:38:34.852931 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 14:38:34.856330 systemd-logind[2623]: New seat seat0. Dec 13 14:38:34.871811 (ntainerd)[2640]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 14:38:34.873330 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 14:38:34.886241 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 14:38:34.898246 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:38:34.898690 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 14:38:34.906636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:38:34.906740 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 14:38:34.924927 bash[2660]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:38:34.937920 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 14:38:34.946741 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 14:38:34.956315 systemd[1]: Starting sshkeys.service... Dec 13 14:38:34.966892 locksmithd[2661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:38:34.969328 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 14:38:34.975160 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 13 14:38:34.994763 coreos-metadata[2677]: Dec 13 14:38:34.994 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 14:38:34.995915 coreos-metadata[2677]: Dec 13 14:38:34.995 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 14:38:35.016339 containerd[2640]: time="2024-12-13T14:38:35.016265960Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 14:38:35.037985 containerd[2640]: time="2024-12-13T14:38:35.037945320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.039269 containerd[2640]: time="2024-12-13T14:38:35.039238040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:38:35.039291 containerd[2640]: time="2024-12-13T14:38:35.039270120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:38:35.039291 containerd[2640]: time="2024-12-13T14:38:35.039285000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:38:35.039463 containerd[2640]: time="2024-12-13T14:38:35.039447160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 14:38:35.039487 containerd[2640]: time="2024-12-13T14:38:35.039471720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.039540 containerd[2640]: time="2024-12-13T14:38:35.039525800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:38:35.039540 containerd[2640]: time="2024-12-13T14:38:35.039538320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040327 containerd[2640]: time="2024-12-13T14:38:35.040129920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040347 containerd[2640]: time="2024-12-13T14:38:35.040328920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040365 containerd[2640]: time="2024-12-13T14:38:35.040348320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040365 containerd[2640]: time="2024-12-13T14:38:35.040359160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040508 containerd[2640]: time="2024-12-13T14:38:35.040493800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040725 containerd[2640]: time="2024-12-13T14:38:35.040701280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040840 containerd[2640]: time="2024-12-13T14:38:35.040825120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:38:35.040860 containerd[2640]: time="2024-12-13T14:38:35.040839880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:38:35.040932 containerd[2640]: time="2024-12-13T14:38:35.040918400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:38:35.040969 containerd[2640]: time="2024-12-13T14:38:35.040958000Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:38:35.047791 containerd[2640]: time="2024-12-13T14:38:35.047763920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:38:35.047829 containerd[2640]: time="2024-12-13T14:38:35.047807880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:38:35.047829 containerd[2640]: time="2024-12-13T14:38:35.047823200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 14:38:35.047891 containerd[2640]: time="2024-12-13T14:38:35.047838320Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 14:38:35.047891 containerd[2640]: time="2024-12-13T14:38:35.047853240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:38:35.047997 containerd[2640]: time="2024-12-13T14:38:35.047982840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:38:35.048248 containerd[2640]: time="2024-12-13T14:38:35.048221960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 14:38:35.048410 containerd[2640]: time="2024-12-13T14:38:35.048392800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 14:38:35.048430 containerd[2640]: time="2024-12-13T14:38:35.048414400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 14:38:35.048447 containerd[2640]: time="2024-12-13T14:38:35.048429760Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 14:38:35.048464 containerd[2640]: time="2024-12-13T14:38:35.048444840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048464 containerd[2640]: time="2024-12-13T14:38:35.048458320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048500 containerd[2640]: time="2024-12-13T14:38:35.048472480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048500 containerd[2640]: time="2024-12-13T14:38:35.048485560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048535 containerd[2640]: time="2024-12-13T14:38:35.048498800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048535 containerd[2640]: time="2024-12-13T14:38:35.048511080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048535 containerd[2640]: time="2024-12-13T14:38:35.048522320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 14:38:35.048535 containerd[2640]: time="2024-12-13T14:38:35.048532920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:38:35.048598 containerd[2640]: time="2024-12-13T14:38:35.048553240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048598 containerd[2640]: time="2024-12-13T14:38:35.048566480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048598 containerd[2640]: time="2024-12-13T14:38:35.048577920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048598 containerd[2640]: time="2024-12-13T14:38:35.048589480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048600400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048613080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048623760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048636840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048650200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048667 containerd[2640]: time="2024-12-13T14:38:35.048664920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048676000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048687040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048698280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048719360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048739200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048753200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.048785 containerd[2640]: time="2024-12-13T14:38:35.048764200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:38:35.048949 containerd[2640]: time="2024-12-13T14:38:35.048937000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:38:35.048971 containerd[2640]: time="2024-12-13T14:38:35.048953920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 14:38:35.048971 containerd[2640]: time="2024-12-13T14:38:35.048964360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:38:35.049005 containerd[2640]: time="2024-12-13T14:38:35.048976600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 14:38:35.049005 containerd[2640]: time="2024-12-13T14:38:35.048986200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.049005 containerd[2640]: time="2024-12-13T14:38:35.048997200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 14:38:35.049056 containerd[2640]: time="2024-12-13T14:38:35.049007280Z" level=info msg="NRI interface is disabled by configuration." Dec 13 14:38:35.049056 containerd[2640]: time="2024-12-13T14:38:35.049017360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:38:35.049378 containerd[2640]: time="2024-12-13T14:38:35.049337240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:38:35.049478 containerd[2640]: time="2024-12-13T14:38:35.049383320Z" level=info msg="Connect containerd service" Dec 13 14:38:35.049478 containerd[2640]: time="2024-12-13T14:38:35.049411160Z" level=info msg="using legacy CRI server" Dec 13 14:38:35.049478 containerd[2640]: time="2024-12-13T14:38:35.049417600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 14:38:35.049653 containerd[2640]: 
time="2024-12-13T14:38:35.049638880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:38:35.050252 containerd[2640]: time="2024-12-13T14:38:35.050227040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:38:35.050447 containerd[2640]: time="2024-12-13T14:38:35.050422640Z" level=info msg="Start subscribing containerd event" Dec 13 14:38:35.050476 containerd[2640]: time="2024-12-13T14:38:35.050463960Z" level=info msg="Start recovering state" Dec 13 14:38:35.050535 containerd[2640]: time="2024-12-13T14:38:35.050523120Z" level=info msg="Start event monitor" Dec 13 14:38:35.050556 containerd[2640]: time="2024-12-13T14:38:35.050534720Z" level=info msg="Start snapshots syncer" Dec 13 14:38:35.050556 containerd[2640]: time="2024-12-13T14:38:35.050543440Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:38:35.050556 containerd[2640]: time="2024-12-13T14:38:35.050551920Z" level=info msg="Start streaming server" Dec 13 14:38:35.050781 containerd[2640]: time="2024-12-13T14:38:35.050763920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:38:35.050818 containerd[2640]: time="2024-12-13T14:38:35.050806560Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:38:35.050863 containerd[2640]: time="2024-12-13T14:38:35.050854480Z" level=info msg="containerd successfully booted in 0.035362s" Dec 13 14:38:35.050906 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 14:38:35.178972 tar[2638]: linux-arm64/LICENSE Dec 13 14:38:35.179058 tar[2638]: linux-arm64/README.md Dec 13 14:38:35.189974 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 14:38:35.213421 sshd_keygen[2629]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:38:35.231683 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 14:38:35.254032 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 14:38:35.262802 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:38:35.262984 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 14:38:35.269429 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 14:38:35.282107 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 14:38:35.288233 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 14:38:35.293992 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 14:38:35.298706 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 14:38:35.321721 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Dec 13 14:38:35.337907 extend-filesystems[2626]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:38:35.337907 extend-filesystems[2626]: old_desc_blocks = 1, new_desc_blocks = 112 Dec 13 14:38:35.337907 extend-filesystems[2626]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Dec 13 14:38:35.364298 extend-filesystems[2606]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:38:35.340130 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:38:35.340418 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 13 14:38:35.741685 coreos-metadata[2601]: Dec 13 14:38:35.741 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 14:38:35.742108 coreos-metadata[2601]: Dec 13 14:38:35.742 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 14:38:35.931725 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Dec 13 14:38:35.948718 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Dec 13 14:38:35.949263 systemd-networkd[2539]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:52:20:01.network. Dec 13 14:38:35.996061 coreos-metadata[2677]: Dec 13 14:38:35.996 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 14:38:35.996422 coreos-metadata[2677]: Dec 13 14:38:35.996 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 14:38:36.563723 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Dec 13 14:38:36.580927 systemd-networkd[2539]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 14:38:36.581717 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Dec 13 14:38:36.582294 systemd-networkd[2539]: enP1p1s0f0np0: Link UP Dec 13 14:38:36.582599 systemd-networkd[2539]: enP1p1s0f0np0: Gained carrier Dec 13 14:38:36.600716 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 14:38:36.614143 systemd-networkd[2539]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:52:20:00.network. Dec 13 14:38:36.614424 systemd-networkd[2539]: enP1p1s0f1np1: Link UP Dec 13 14:38:36.614678 systemd-networkd[2539]: enP1p1s0f1np1: Gained carrier Dec 13 14:38:36.624979 systemd-networkd[2539]: bond0: Link UP Dec 13 14:38:36.625264 systemd-networkd[2539]: bond0: Gained carrier Dec 13 14:38:36.625429 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. 
Dec 13 14:38:36.625948 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:36.626265 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:36.626403 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:36.708963 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Dec 13 14:38:36.708997 kernel: bond0: active interface up! Dec 13 14:38:36.832719 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 14:38:37.742205 coreos-metadata[2601]: Dec 13 14:38:37.742 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 14:38:37.996540 coreos-metadata[2677]: Dec 13 14:38:37.996 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 14:38:38.250069 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:38.505833 systemd-networkd[2539]: bond0: Gained IPv6LL Dec 13 14:38:38.506039 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:38.507919 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 14:38:38.513800 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 14:38:38.530943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:38:38.537425 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 14:38:38.558427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 14:38:39.104860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:38:39.110892 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:38:39.533504 kubelet[2740]: E1213 14:38:39.533439 2740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:38:39.535630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:38:39.535786 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:38:39.743900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 14:38:39.756084 systemd[1]: Started sshd@0-147.28.228.38:22-147.75.109.163:34562.service - OpenSSH per-connection server daemon (147.75.109.163:34562). Dec 13 14:38:40.172576 sshd[2759]: Accepted publickey for core from 147.75.109.163 port 34562 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:40.174353 sshd-session[2759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:40.182128 systemd-logind[2623]: New session 1 of user core. Dec 13 14:38:40.183535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 14:38:40.197033 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 14:38:40.216792 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 14:38:40.224245 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 13 14:38:40.232486 (systemd)[2764]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:38:40.323882 agetty[2718]: failed to open credentials directory Dec 13 14:38:40.323917 agetty[2719]: failed to open credentials directory Dec 13 14:38:40.330007 login[2718]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:40.330924 login[2719]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:40.333116 systemd-logind[2623]: New session 3 of user core. Dec 13 14:38:40.335389 systemd-logind[2623]: New session 2 of user core. Dec 13 14:38:40.336496 systemd[2764]: Queued start job for default target default.target. Dec 13 14:38:40.349827 systemd[2764]: Created slice app.slice - User Application Slice. Dec 13 14:38:40.349855 systemd[2764]: Reached target paths.target - Paths. Dec 13 14:38:40.349867 systemd[2764]: Reached target timers.target - Timers. Dec 13 14:38:40.351175 systemd[2764]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 14:38:40.360702 systemd[2764]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 14:38:40.360765 systemd[2764]: Reached target sockets.target - Sockets. Dec 13 14:38:40.360778 systemd[2764]: Reached target basic.target - Basic System. Dec 13 14:38:40.360822 systemd[2764]: Reached target default.target - Main User Target. Dec 13 14:38:40.360846 systemd[2764]: Startup finished in 123ms. Dec 13 14:38:40.361156 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 14:38:40.361294 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Dec 13 14:38:40.361462 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Dec 13 14:38:40.362692 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 14:38:40.363576 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 14:38:40.364542 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 14:38:40.676110 systemd[1]: Started sshd@1-147.28.228.38:22-147.75.109.163:52952.service - OpenSSH per-connection server daemon (147.75.109.163:52952). Dec 13 14:38:41.097198 sshd[2804]: Accepted publickey for core from 147.75.109.163 port 52952 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:41.098189 sshd-session[2804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:41.101020 systemd-logind[2623]: New session 4 of user core. Dec 13 14:38:41.109941 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 14:38:41.402452 sshd[2806]: Connection closed by 147.75.109.163 port 52952 Dec 13 14:38:41.402824 sshd-session[2804]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:41.405362 systemd[1]: sshd@1-147.28.228.38:22-147.75.109.163:52952.service: Deactivated successfully. Dec 13 14:38:41.406996 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:38:41.407486 systemd-logind[2623]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:38:41.408026 systemd-logind[2623]: Removed session 4. Dec 13 14:38:41.470887 systemd[1]: Started sshd@2-147.28.228.38:22-147.75.109.163:52958.service - OpenSSH per-connection server daemon (147.75.109.163:52958). Dec 13 14:38:41.870800 sshd[2812]: Accepted publickey for core from 147.75.109.163 port 52958 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:41.871814 sshd-session[2812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:41.874509 systemd-logind[2623]: New session 5 of user core. Dec 13 14:38:41.883850 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 14:38:42.162512 sshd[2814]: Connection closed by 147.75.109.163 port 52958 Dec 13 14:38:42.162984 sshd-session[2812]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:42.166436 systemd[1]: sshd@2-147.28.228.38:22-147.75.109.163:52958.service: Deactivated successfully. Dec 13 14:38:42.168157 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:38:42.168749 systemd-logind[2623]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:38:42.169333 systemd-logind[2623]: Removed session 5. Dec 13 14:38:43.418944 coreos-metadata[2601]: Dec 13 14:38:43.418 INFO Fetch successful Dec 13 14:38:43.485507 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 14:38:43.488024 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Dec 13 14:38:43.777215 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Dec 13 14:38:44.105217 coreos-metadata[2677]: Dec 13 14:38:44.105 INFO Fetch successful Dec 13 14:38:44.151528 unknown[2677]: wrote ssh authorized keys file for user: core Dec 13 14:38:44.181122 update-ssh-keys[2827]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:38:44.182237 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 14:38:44.183763 systemd[1]: Finished sshkeys.service. Dec 13 14:38:44.184662 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 14:38:44.188751 systemd[1]: Startup finished in 3.226s (kernel) + 22.885s (initrd) + 12.945s (userspace) = 39.058s. Dec 13 14:38:49.786884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:38:49.801915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:38:49.899818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:38:49.903458 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:38:49.936576 kubelet[2839]: E1213 14:38:49.936543 2839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:38:49.939844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:38:49.939977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:38:52.242028 systemd[1]: Started sshd@3-147.28.228.38:22-147.75.109.163:43984.service - OpenSSH per-connection server daemon (147.75.109.163:43984). Dec 13 14:38:52.667146 sshd[2863]: Accepted publickey for core from 147.75.109.163 port 43984 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:52.668175 sshd-session[2863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:52.671194 systemd-logind[2623]: New session 6 of user core. Dec 13 14:38:52.679860 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 14:38:52.974982 sshd[2865]: Connection closed by 147.75.109.163 port 43984 Dec 13 14:38:52.975449 sshd-session[2863]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:52.978825 systemd[1]: sshd@3-147.28.228.38:22-147.75.109.163:43984.service: Deactivated successfully. Dec 13 14:38:52.980486 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:38:52.981024 systemd-logind[2623]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:38:52.981604 systemd-logind[2623]: Removed session 6. 
Dec 13 14:38:53.042698 systemd[1]: Started sshd@4-147.28.228.38:22-147.75.109.163:43998.service - OpenSSH per-connection server daemon (147.75.109.163:43998). Dec 13 14:38:53.448386 sshd[2870]: Accepted publickey for core from 147.75.109.163 port 43998 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:53.449368 sshd-session[2870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:53.452098 systemd-logind[2623]: New session 7 of user core. Dec 13 14:38:53.460809 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 14:38:53.740967 sshd[2872]: Connection closed by 147.75.109.163 port 43998 Dec 13 14:38:53.741430 sshd-session[2870]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:53.744879 systemd[1]: sshd@4-147.28.228.38:22-147.75.109.163:43998.service: Deactivated successfully. Dec 13 14:38:53.747398 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:38:53.749085 systemd-logind[2623]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:38:53.749620 systemd-logind[2623]: Removed session 7. Dec 13 14:38:53.811853 systemd[1]: Started sshd@5-147.28.228.38:22-147.75.109.163:44014.service - OpenSSH per-connection server daemon (147.75.109.163:44014). Dec 13 14:38:54.215789 sshd[2878]: Accepted publickey for core from 147.75.109.163 port 44014 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:54.217018 sshd-session[2878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:54.219642 systemd-logind[2623]: New session 8 of user core. Dec 13 14:38:54.238869 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 14:38:54.509115 sshd[2880]: Connection closed by 147.75.109.163 port 44014 Dec 13 14:38:54.509616 sshd-session[2878]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:54.512536 systemd[1]: sshd@5-147.28.228.38:22-147.75.109.163:44014.service: Deactivated successfully. Dec 13 14:38:54.514064 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:38:54.514522 systemd-logind[2623]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:38:54.515059 systemd-logind[2623]: Removed session 8. Dec 13 14:38:54.584860 systemd[1]: Started sshd@6-147.28.228.38:22-147.75.109.163:44026.service - OpenSSH per-connection server daemon (147.75.109.163:44026). Dec 13 14:38:55.002050 sshd[2885]: Accepted publickey for core from 147.75.109.163 port 44026 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:55.003087 sshd-session[2885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:55.005829 systemd-logind[2623]: New session 9 of user core. Dec 13 14:38:55.015866 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 14:38:55.246040 sudo[2888]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:38:55.246303 sudo[2888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:38:55.261672 sudo[2888]: pam_unix(sudo:session): session closed for user root Dec 13 14:38:55.326430 sshd[2887]: Connection closed by 147.75.109.163 port 44026 Dec 13 14:38:55.327057 sshd-session[2885]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:55.331034 systemd[1]: sshd@6-147.28.228.38:22-147.75.109.163:44026.service: Deactivated successfully. Dec 13 14:38:55.333327 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:38:55.333838 systemd-logind[2623]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:38:55.334428 systemd-logind[2623]: Removed session 9. 
Dec 13 14:38:55.401010 systemd[1]: Started sshd@7-147.28.228.38:22-147.75.109.163:44030.service - OpenSSH per-connection server daemon (147.75.109.163:44030). Dec 13 14:38:55.808315 sshd[2893]: Accepted publickey for core from 147.75.109.163 port 44030 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:55.809430 sshd-session[2893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:55.812307 systemd-logind[2623]: New session 10 of user core. Dec 13 14:38:55.821824 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 14:38:56.041163 sudo[2897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:38:56.041428 sudo[2897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:38:56.043892 sudo[2897]: pam_unix(sudo:session): session closed for user root Dec 13 14:38:56.048232 sudo[2896]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 14:38:56.048489 sudo[2896]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:38:56.065026 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 14:38:56.086471 augenrules[2919]: No rules Dec 13 14:38:56.087576 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:38:56.088808 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 14:38:56.089659 sudo[2896]: pam_unix(sudo:session): session closed for user root Dec 13 14:38:56.152201 sshd[2895]: Connection closed by 147.75.109.163 port 44030 Dec 13 14:38:56.152859 sshd-session[2893]: pam_unix(sshd:session): session closed for user core Dec 13 14:38:56.156617 systemd[1]: sshd@7-147.28.228.38:22-147.75.109.163:44030.service: Deactivated successfully. Dec 13 14:38:56.159298 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 13 14:38:56.159788 systemd-logind[2623]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:38:56.160351 systemd-logind[2623]: Removed session 10. Dec 13 14:38:56.228055 systemd[1]: Started sshd@8-147.28.228.38:22-147.75.109.163:44038.service - OpenSSH per-connection server daemon (147.75.109.163:44038). Dec 13 14:38:56.647478 sshd[2927]: Accepted publickey for core from 147.75.109.163 port 44038 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 14:38:56.648503 sshd-session[2927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 14:38:56.651391 systemd-logind[2623]: New session 11 of user core. Dec 13 14:38:56.663823 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 14:38:56.887941 sudo[2930]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:38:56.888206 sudo[2930]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 14:38:57.173975 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 14:38:57.174102 (dockerd)[2962]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 14:38:57.364761 dockerd[2962]: time="2024-12-13T14:38:57.364702160Z" level=info msg="Starting up" Dec 13 14:38:57.425256 dockerd[2962]: time="2024-12-13T14:38:57.425189160Z" level=info msg="Loading containers: start." Dec 13 14:38:57.565732 kernel: Initializing XFRM netlink socket Dec 13 14:38:57.583887 systemd-timesyncd[2542]: Network configuration changed, trying to establish connection. Dec 13 14:38:57.644896 systemd-networkd[2539]: docker0: Link UP Dec 13 14:38:57.675879 dockerd[2962]: time="2024-12-13T14:38:57.675814080Z" level=info msg="Loading containers: done." 
Dec 13 14:38:57.684461 dockerd[2962]: time="2024-12-13T14:38:57.684433360Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:38:57.684542 dockerd[2962]: time="2024-12-13T14:38:57.684499960Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 14:38:57.684676 dockerd[2962]: time="2024-12-13T14:38:57.684662280Z" level=info msg="Daemon has completed initialization" Dec 13 14:38:57.704413 dockerd[2962]: time="2024-12-13T14:38:57.704300880Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:38:57.704410 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 14:38:57.903405 systemd-timesyncd[2542]: Contacted time server [2601:18a:8081:3600:a923:2e66:e3d2:8c95]:123 (2.flatcar.pool.ntp.org). Dec 13 14:38:57.903460 systemd-timesyncd[2542]: Initial clock synchronization to Fri 2024-12-13 14:38:57.603579 UTC. Dec 13 14:38:58.416645 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1436659730-merged.mount: Deactivated successfully. Dec 13 14:38:58.521070 containerd[2640]: time="2024-12-13T14:38:58.521037806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 14:38:58.887282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876171832.mount: Deactivated successfully. Dec 13 14:38:59.995105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:39:00.005918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:39:00.099472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:39:00.103005 (kubelet)[3263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:39:00.142816 kubelet[3263]: E1213 14:39:00.142776 3263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:39:00.145000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:39:00.145135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:39:00.368208 containerd[2640]: time="2024-12-13T14:39:00.368101561Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Dec 13 14:39:00.368208 containerd[2640]: time="2024-12-13T14:39:00.368114344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:00.369280 containerd[2640]: time="2024-12-13T14:39:00.369251763Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:00.371927 containerd[2640]: time="2024-12-13T14:39:00.371904914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:00.372976 containerd[2640]: time="2024-12-13T14:39:00.372935876Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.851855947s" Dec 13 14:39:00.373007 containerd[2640]: time="2024-12-13T14:39:00.372990348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 14:39:00.373591 containerd[2640]: time="2024-12-13T14:39:00.373574735Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 14:39:02.066205 containerd[2640]: time="2024-12-13T14:39:02.066167544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:02.066486 containerd[2640]: time="2024-12-13T14:39:02.066181979Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Dec 13 14:39:02.067222 containerd[2640]: time="2024-12-13T14:39:02.067197855Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:02.069762 containerd[2640]: time="2024-12-13T14:39:02.069738188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:02.070920 containerd[2640]: time="2024-12-13T14:39:02.070881287Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" 
in 1.697273304s" Dec 13 14:39:02.070950 containerd[2640]: time="2024-12-13T14:39:02.070933828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 14:39:02.071322 containerd[2640]: time="2024-12-13T14:39:02.071302120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 14:39:03.472975 containerd[2640]: time="2024-12-13T14:39:03.472935230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:03.473265 containerd[2640]: time="2024-12-13T14:39:03.472976972Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Dec 13 14:39:03.473963 containerd[2640]: time="2024-12-13T14:39:03.473935909Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:03.476618 containerd[2640]: time="2024-12-13T14:39:03.476598594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:03.477722 containerd[2640]: time="2024-12-13T14:39:03.477659808Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.406311023s" Dec 13 14:39:03.477770 containerd[2640]: time="2024-12-13T14:39:03.477727835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference 
\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 14:39:03.478125 containerd[2640]: time="2024-12-13T14:39:03.478103987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:39:04.343414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877447569.mount: Deactivated successfully. Dec 13 14:39:04.881092 containerd[2640]: time="2024-12-13T14:39:04.880876755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:04.881391 containerd[2640]: time="2024-12-13T14:39:04.880901178Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Dec 13 14:39:04.882881 containerd[2640]: time="2024-12-13T14:39:04.882853833Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:04.884520 containerd[2640]: time="2024-12-13T14:39:04.884495486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:04.885296 containerd[2640]: time="2024-12-13T14:39:04.885270397Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.407133316s" Dec 13 14:39:04.885328 containerd[2640]: time="2024-12-13T14:39:04.885302842Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 14:39:04.885650 
containerd[2640]: time="2024-12-13T14:39:04.885637245Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:39:05.216330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961966937.mount: Deactivated successfully. Dec 13 14:39:05.906031 containerd[2640]: time="2024-12-13T14:39:05.905990180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:05.906329 containerd[2640]: time="2024-12-13T14:39:05.906022694Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 14:39:05.907088 containerd[2640]: time="2024-12-13T14:39:05.907061980Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:05.911236 containerd[2640]: time="2024-12-13T14:39:05.911208796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:05.912416 containerd[2640]: time="2024-12-13T14:39:05.912387362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.026727209s" Dec 13 14:39:05.912437 containerd[2640]: time="2024-12-13T14:39:05.912423108Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:39:05.912834 containerd[2640]: time="2024-12-13T14:39:05.912811668Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 14:39:06.194247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529325589.mount: Deactivated successfully. Dec 13 14:39:06.194737 containerd[2640]: time="2024-12-13T14:39:06.194706740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:06.194831 containerd[2640]: time="2024-12-13T14:39:06.194720007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 13 14:39:06.195438 containerd[2640]: time="2024-12-13T14:39:06.195419408Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:06.197402 containerd[2640]: time="2024-12-13T14:39:06.197386324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:06.198180 containerd[2640]: time="2024-12-13T14:39:06.198155338Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 285.309087ms" Dec 13 14:39:06.198208 containerd[2640]: time="2024-12-13T14:39:06.198184715Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 14:39:06.198516 containerd[2640]: time="2024-12-13T14:39:06.198496648Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 14:39:06.489904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355441650.mount: 
Deactivated successfully. Dec 13 14:39:08.940276 containerd[2640]: time="2024-12-13T14:39:08.940216865Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Dec 13 14:39:08.940276 containerd[2640]: time="2024-12-13T14:39:08.940228074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:08.941279 containerd[2640]: time="2024-12-13T14:39:08.941257430Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:08.944207 containerd[2640]: time="2024-12-13T14:39:08.944178679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:08.945322 containerd[2640]: time="2024-12-13T14:39:08.945298455Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.746772524s" Dec 13 14:39:08.945349 containerd[2640]: time="2024-12-13T14:39:08.945327922Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 14:39:10.245080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:39:10.254909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:39:10.345481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:39:10.349031 (kubelet)[3486]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 14:39:10.378773 kubelet[3486]: E1213 14:39:10.378734 3486 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:39:10.380821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:39:10.380951 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:39:13.580697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:39:13.589967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:39:13.610192 systemd[1]: Reloading requested from client PID 3521 ('systemctl') (unit session-11.scope)... Dec 13 14:39:13.610203 systemd[1]: Reloading... Dec 13 14:39:13.675713 zram_generator::config[3567]: No configuration found. Dec 13 14:39:13.764332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:39:13.833402 systemd[1]: Reloading finished in 222 ms. Dec 13 14:39:13.879368 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:39:13.882025 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:39:13.882216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 14:39:13.883748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 14:39:13.985966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 14:39:13.989691 (kubelet)[3631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 14:39:14.021605 kubelet[3631]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:39:14.021605 kubelet[3631]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:39:14.021605 kubelet[3631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:39:14.021923 kubelet[3631]: I1213 14:39:14.021730 3631 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:39:14.915954 kubelet[3631]: I1213 14:39:14.915920 3631 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:39:14.915954 kubelet[3631]: I1213 14:39:14.915944 3631 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:39:14.916151 kubelet[3631]: I1213 14:39:14.916143 3631 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:39:14.935440 kubelet[3631]: E1213 14:39:14.935411 3631 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.228.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:39:14.936011 kubelet[3631]: 
I1213 14:39:14.935996 3631 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:39:14.940923 kubelet[3631]: E1213 14:39:14.940895 3631 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:39:14.940949 kubelet[3631]: I1213 14:39:14.940924 3631 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:39:14.961340 kubelet[3631]: I1213 14:39:14.961305 3631 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:39:14.962096 kubelet[3631]: I1213 14:39:14.962077 3631 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:39:14.962238 kubelet[3631]: I1213 14:39:14.962211 3631 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:39:14.962392 kubelet[3631]: I1213 14:39:14.962238 3631 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4186.0.0-a-f374b16159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 14:39:14.962539 kubelet[3631]: I1213 14:39:14.962528 3631 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:39:14.962539 kubelet[3631]: I1213 14:39:14.962538 3631 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:39:14.962726 kubelet[3631]: I1213 14:39:14.962714 3631 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:39:14.964476 kubelet[3631]: I1213 14:39:14.964459 3631 
kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:39:14.964500 kubelet[3631]: I1213 14:39:14.964480 3631 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:39:14.964572 kubelet[3631]: I1213 14:39:14.964564 3631 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:39:14.964594 kubelet[3631]: I1213 14:39:14.964574 3631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:39:14.966335 kubelet[3631]: I1213 14:39:14.966315 3631 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 14:39:14.968509 kubelet[3631]: I1213 14:39:14.968491 3631 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:39:14.968580 kubelet[3631]: W1213 14:39:14.968539 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.228.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused Dec 13 14:39:14.968609 kubelet[3631]: E1213 14:39:14.968593 3631 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.228.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:39:14.969528 kubelet[3631]: W1213 14:39:14.969488 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.228.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-f374b16159&limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused Dec 13 14:39:14.969557 kubelet[3631]: E1213 14:39:14.969539 3631 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.228.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-f374b16159&limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:39:14.969620 kubelet[3631]: W1213 14:39:14.969606 3631 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:39:14.970245 kubelet[3631]: I1213 14:39:14.970231 3631 server.go:1269] "Started kubelet" Dec 13 14:39:14.970415 kubelet[3631]: I1213 14:39:14.970386 3631 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:39:14.970448 kubelet[3631]: I1213 14:39:14.970377 3631 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:39:14.970721 kubelet[3631]: I1213 14:39:14.970693 3631 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:39:14.971434 kubelet[3631]: I1213 14:39:14.971416 3631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:39:14.971859 kubelet[3631]: I1213 14:39:14.971830 3631 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:39:14.974798 kubelet[3631]: E1213 14:39:14.974548 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found" Dec 13 14:39:14.974914 kubelet[3631]: I1213 14:39:14.974889 3631 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:39:14.974944 kubelet[3631]: I1213 14:39:14.974912 3631 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:39:14.975088 kubelet[3631]: I1213 14:39:14.975069 3631 reconciler.go:26] "Reconciler: start to sync state" Dec 
13 14:39:14.975114 kubelet[3631]: I1213 14:39:14.975088 3631 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:39:14.975114 kubelet[3631]: E1213 14:39:14.975066 3631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.228.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-f374b16159?timeout=10s\": dial tcp 147.28.228.38:6443: connect: connection refused" interval="200ms" Dec 13 14:39:14.976799 kubelet[3631]: W1213 14:39:14.976757 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.228.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused Dec 13 14:39:14.976824 kubelet[3631]: I1213 14:39:14.976804 3631 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:39:14.976824 kubelet[3631]: E1213 14:39:14.976813 3631 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.228.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:39:14.977352 kubelet[3631]: E1213 14:39:14.977340 3631 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:39:14.977867 kubelet[3631]: E1213 14:39:14.976762 3631 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.228.38:6443/api/v1/namespaces/default/events\": dial tcp 147.28.228.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-f374b16159.1810c372f0bc68ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-f374b16159,UID:ci-4186.0.0-a-f374b16159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-f374b16159,},FirstTimestamp:2024-12-13 14:39:14.970208458 +0000 UTC m=+0.977813772,LastTimestamp:2024-12-13 14:39:14.970208458 +0000 UTC m=+0.977813772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-f374b16159,}" Dec 13 14:39:14.978440 kubelet[3631]: I1213 14:39:14.978425 3631 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:39:14.978463 kubelet[3631]: I1213 14:39:14.978441 3631 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:39:14.986792 kubelet[3631]: I1213 14:39:14.986753 3631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:39:14.987725 kubelet[3631]: I1213 14:39:14.987713 3631 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:39:14.987804 kubelet[3631]: I1213 14:39:14.987797 3631 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:39:14.987828 kubelet[3631]: I1213 14:39:14.987817 3631 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:39:14.987879 kubelet[3631]: E1213 14:39:14.987861 3631 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:39:14.988725 kubelet[3631]: W1213 14:39:14.988685 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.228.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused Dec 13 14:39:14.988751 kubelet[3631]: E1213 14:39:14.988739 3631 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.228.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:39:15.029739 kubelet[3631]: I1213 14:39:15.029718 3631 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:39:15.029739 kubelet[3631]: I1213 14:39:15.029736 3631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:39:15.029985 kubelet[3631]: I1213 14:39:15.029755 3631 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:39:15.030343 kubelet[3631]: I1213 14:39:15.030328 3631 policy_none.go:49] "None policy: Start" Dec 13 14:39:15.030715 kubelet[3631]: I1213 14:39:15.030699 3631 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:39:15.030735 kubelet[3631]: I1213 14:39:15.030724 3631 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:39:15.035315 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Dec 13 14:39:15.048712 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 14:39:15.051098 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 14:39:15.062436 kubelet[3631]: I1213 14:39:15.062415 3631 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:39:15.062602 kubelet[3631]: I1213 14:39:15.062586 3631 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:39:15.062657 kubelet[3631]: I1213 14:39:15.062598 3631 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:39:15.062781 kubelet[3631]: I1213 14:39:15.062764 3631 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:39:15.063458 kubelet[3631]: E1213 14:39:15.063439 3631 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-f374b16159\" not found" Dec 13 14:39:15.096542 systemd[1]: Created slice kubepods-burstable-pod7e7a8ae1a4d9f1cb01f1f22b2397c4c4.slice - libcontainer container kubepods-burstable-pod7e7a8ae1a4d9f1cb01f1f22b2397c4c4.slice. Dec 13 14:39:15.120355 systemd[1]: Created slice kubepods-burstable-pod9630653fd3a33dea96184fe3e27b49f6.slice - libcontainer container kubepods-burstable-pod9630653fd3a33dea96184fe3e27b49f6.slice. Dec 13 14:39:15.123339 systemd[1]: Created slice kubepods-burstable-pod2271c0a79b75a10cc171fd31d3388822.slice - libcontainer container kubepods-burstable-pod2271c0a79b75a10cc171fd31d3388822.slice. 
Dec 13 14:39:15.164863 kubelet[3631]: I1213 14:39:15.164831 3631 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.165201 kubelet[3631]: E1213 14:39:15.165179 3631 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.228.38:6443/api/v1/nodes\": dial tcp 147.28.228.38:6443: connect: connection refused" node="ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.175583 kubelet[3631]: E1213 14:39:15.175516 3631 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.228.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-f374b16159?timeout=10s\": dial tcp 147.28.228.38:6443: connect: connection refused" interval="400ms" Dec 13 14:39:15.176602 kubelet[3631]: I1213 14:39:15.176579 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176637 kubelet[3631]: I1213 14:39:15.176608 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176637 kubelet[3631]: I1213 14:39:15.176628 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " 
pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176680 kubelet[3631]: I1213 14:39:15.176645 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176680 kubelet[3631]: I1213 14:39:15.176664 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2271c0a79b75a10cc171fd31d3388822-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-f374b16159\" (UID: \"2271c0a79b75a10cc171fd31d3388822\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176740 kubelet[3631]: I1213 14:39:15.176680 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176740 kubelet[3631]: I1213 14:39:15.176696 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176740 kubelet[3631]: I1213 14:39:15.176716 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.176802 kubelet[3631]: I1213 14:39:15.176751 3631 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.367374 kubelet[3631]: I1213 14:39:15.367343 3631 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.367625 kubelet[3631]: E1213 14:39:15.367602 3631 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.228.38:6443/api/v1/nodes\": dial tcp 147.28.228.38:6443: connect: connection refused" node="ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.419550 containerd[2640]: time="2024-12-13T14:39:15.419508717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-f374b16159,Uid:7e7a8ae1a4d9f1cb01f1f22b2397c4c4,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:15.422976 containerd[2640]: time="2024-12-13T14:39:15.422941662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-f374b16159,Uid:9630653fd3a33dea96184fe3e27b49f6,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:15.426525 containerd[2640]: time="2024-12-13T14:39:15.426476093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-f374b16159,Uid:2271c0a79b75a10cc171fd31d3388822,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:15.576792 kubelet[3631]: E1213 14:39:15.576745 3631 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://147.28.228.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-f374b16159?timeout=10s\": dial tcp 147.28.228.38:6443: connect: connection refused" interval="800ms" Dec 13 14:39:15.728947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342348981.mount: Deactivated successfully. Dec 13 14:39:15.729518 containerd[2640]: time="2024-12-13T14:39:15.729490119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:39:15.729974 containerd[2640]: time="2024-12-13T14:39:15.729939932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 14:39:15.730243 containerd[2640]: time="2024-12-13T14:39:15.730224068Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:39:15.730660 containerd[2640]: time="2024-12-13T14:39:15.730639216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:39:15.730876 containerd[2640]: time="2024-12-13T14:39:15.730855017Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:39:15.731060 containerd[2640]: time="2024-12-13T14:39:15.731026670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:39:15.733320 containerd[2640]: time="2024-12-13T14:39:15.733289162Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 
14:39:15.734731 containerd[2640]: time="2024-12-13T14:39:15.734705779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 311.699089ms" Dec 13 14:39:15.735744 containerd[2640]: time="2024-12-13T14:39:15.735718047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:39:15.737166 containerd[2640]: time="2024-12-13T14:39:15.737148331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 317.562831ms" Dec 13 14:39:15.740136 containerd[2640]: time="2024-12-13T14:39:15.740106002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 313.577831ms" Dec 13 14:39:15.770027 kubelet[3631]: I1213 14:39:15.770006 3631 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.0.0-a-f374b16159" Dec 13 14:39:15.770317 kubelet[3631]: E1213 14:39:15.770290 3631 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.228.38:6443/api/v1/nodes\": dial tcp 147.28.228.38:6443: connect: connection refused" node="ci-4186.0.0-a-f374b16159" 
Dec 13 14:39:15.857047 containerd[2640]: time="2024-12-13T14:39:15.856642757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:15.857047 containerd[2640]: time="2024-12-13T14:39:15.857040253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:15.857119 containerd[2640]: time="2024-12-13T14:39:15.857054119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.857161 containerd[2640]: time="2024-12-13T14:39:15.857141101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.857842 containerd[2640]: time="2024-12-13T14:39:15.857788466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:15.857868 containerd[2640]: time="2024-12-13T14:39:15.857839189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:15.857949 containerd[2640]: time="2024-12-13T14:39:15.857856681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.858107 containerd[2640]: time="2024-12-13T14:39:15.858053158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.858453 containerd[2640]: time="2024-12-13T14:39:15.858396385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:15.858476 containerd[2640]: time="2024-12-13T14:39:15.858453403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:15.858476 containerd[2640]: time="2024-12-13T14:39:15.858466233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.858558 containerd[2640]: time="2024-12-13T14:39:15.858540505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:15.893836 systemd[1]: Started cri-containerd-73e9d6681df3a3eff509e9d44941aabd28f6f717fd426c8996620ca68c4bcd50.scope - libcontainer container 73e9d6681df3a3eff509e9d44941aabd28f6f717fd426c8996620ca68c4bcd50.
Dec 13 14:39:15.895139 systemd[1]: Started cri-containerd-c9add24f439e41cbc1640205d19e0bc2159623b184326011840d20352f248a20.scope - libcontainer container c9add24f439e41cbc1640205d19e0bc2159623b184326011840d20352f248a20.
Dec 13 14:39:15.896406 systemd[1]: Started cri-containerd-d64df0f67ddf5232b674d7bcd7c2dfc165b71c42c3a92992a66d1c8c61265404.scope - libcontainer container d64df0f67ddf5232b674d7bcd7c2dfc165b71c42c3a92992a66d1c8c61265404.
Dec 13 14:39:15.916887 containerd[2640]: time="2024-12-13T14:39:15.916850600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-f374b16159,Uid:2271c0a79b75a10cc171fd31d3388822,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e9d6681df3a3eff509e9d44941aabd28f6f717fd426c8996620ca68c4bcd50\""
Dec 13 14:39:15.918009 containerd[2640]: time="2024-12-13T14:39:15.917958257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-f374b16159,Uid:7e7a8ae1a4d9f1cb01f1f22b2397c4c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9add24f439e41cbc1640205d19e0bc2159623b184326011840d20352f248a20\""
Dec 13 14:39:15.918847 containerd[2640]: time="2024-12-13T14:39:15.918823098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-f374b16159,Uid:9630653fd3a33dea96184fe3e27b49f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64df0f67ddf5232b674d7bcd7c2dfc165b71c42c3a92992a66d1c8c61265404\""
Dec 13 14:39:15.919314 containerd[2640]: time="2024-12-13T14:39:15.919290682Z" level=info msg="CreateContainer within sandbox \"73e9d6681df3a3eff509e9d44941aabd28f6f717fd426c8996620ca68c4bcd50\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:39:15.919546 containerd[2640]: time="2024-12-13T14:39:15.919528518Z" level=info msg="CreateContainer within sandbox \"c9add24f439e41cbc1640205d19e0bc2159623b184326011840d20352f248a20\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:39:15.920572 containerd[2640]: time="2024-12-13T14:39:15.920551145Z" level=info msg="CreateContainer within sandbox \"d64df0f67ddf5232b674d7bcd7c2dfc165b71c42c3a92992a66d1c8c61265404\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:39:15.925695 containerd[2640]: time="2024-12-13T14:39:15.925661455Z" level=info msg="CreateContainer within sandbox \"73e9d6681df3a3eff509e9d44941aabd28f6f717fd426c8996620ca68c4bcd50\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3bc0da3c3eb56a35f09f4ffa606cc3fbf060893fbcd83884844350f86025187\""
Dec 13 14:39:15.926127 containerd[2640]: time="2024-12-13T14:39:15.926104613Z" level=info msg="CreateContainer within sandbox \"c9add24f439e41cbc1640205d19e0bc2159623b184326011840d20352f248a20\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"71283e1433299c0646115372b11f13781911de117fd37ab3471bc73a5483ab3b\""
Dec 13 14:39:15.926178 containerd[2640]: time="2024-12-13T14:39:15.926153941Z" level=info msg="StartContainer for \"e3bc0da3c3eb56a35f09f4ffa606cc3fbf060893fbcd83884844350f86025187\""
Dec 13 14:39:15.926275 containerd[2640]: time="2024-12-13T14:39:15.926255547Z" level=info msg="CreateContainer within sandbox \"d64df0f67ddf5232b674d7bcd7c2dfc165b71c42c3a92992a66d1c8c61265404\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4bc5b6d15411a3204ca58586b6451ffa0d9745e2b6c70a73373116d248f10839\""
Dec 13 14:39:15.926390 containerd[2640]: time="2024-12-13T14:39:15.926374246Z" level=info msg="StartContainer for \"71283e1433299c0646115372b11f13781911de117fd37ab3471bc73a5483ab3b\""
Dec 13 14:39:15.926531 containerd[2640]: time="2024-12-13T14:39:15.926515218Z" level=info msg="StartContainer for \"4bc5b6d15411a3204ca58586b6451ffa0d9745e2b6c70a73373116d248f10839\""
Dec 13 14:39:15.966819 systemd[1]: Started cri-containerd-4bc5b6d15411a3204ca58586b6451ffa0d9745e2b6c70a73373116d248f10839.scope - libcontainer container 4bc5b6d15411a3204ca58586b6451ffa0d9745e2b6c70a73373116d248f10839.
Dec 13 14:39:15.967881 systemd[1]: Started cri-containerd-71283e1433299c0646115372b11f13781911de117fd37ab3471bc73a5483ab3b.scope - libcontainer container 71283e1433299c0646115372b11f13781911de117fd37ab3471bc73a5483ab3b.
Dec 13 14:39:15.968960 systemd[1]: Started cri-containerd-e3bc0da3c3eb56a35f09f4ffa606cc3fbf060893fbcd83884844350f86025187.scope - libcontainer container e3bc0da3c3eb56a35f09f4ffa606cc3fbf060893fbcd83884844350f86025187.
Dec 13 14:39:15.969700 kubelet[3631]: W1213 14:39:15.969653 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.228.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-f374b16159&limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused
Dec 13 14:39:15.969747 kubelet[3631]: E1213 14:39:15.969720 3631 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.228.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-f374b16159&limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:39:15.992581 containerd[2640]: time="2024-12-13T14:39:15.992482131Z" level=info msg="StartContainer for \"71283e1433299c0646115372b11f13781911de117fd37ab3471bc73a5483ab3b\" returns successfully"
Dec 13 14:39:15.992581 containerd[2640]: time="2024-12-13T14:39:15.992486553Z" level=info msg="StartContainer for \"4bc5b6d15411a3204ca58586b6451ffa0d9745e2b6c70a73373116d248f10839\" returns successfully"
Dec 13 14:39:15.994654 containerd[2640]: time="2024-12-13T14:39:15.994625963Z" level=info msg="StartContainer for \"e3bc0da3c3eb56a35f09f4ffa606cc3fbf060893fbcd83884844350f86025187\" returns successfully"
Dec 13 14:39:16.016348 kubelet[3631]: W1213 14:39:16.016296 3631 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.228.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.228.38:6443: connect: connection refused
Dec 13 14:39:16.016397 kubelet[3631]: E1213 14:39:16.016360 3631 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.228.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.228.38:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:39:16.572767 kubelet[3631]: I1213 14:39:16.572743 3631 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:17.271905 kubelet[3631]: E1213 14:39:17.271866 3631 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.0.0-a-f374b16159\" not found" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:17.378699 kubelet[3631]: I1213 14:39:17.378664 3631 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:17.378842 kubelet[3631]: E1213 14:39:17.378695 3631 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186.0.0-a-f374b16159\": node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.386032 kubelet[3631]: E1213 14:39:17.386010 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.487105 kubelet[3631]: E1213 14:39:17.487081 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.587625 kubelet[3631]: E1213 14:39:17.587552 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.688260 kubelet[3631]: E1213 14:39:17.688230 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.788654 kubelet[3631]: E1213 14:39:17.788632 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.889162 kubelet[3631]: E1213 14:39:17.889091 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:17.990104 kubelet[3631]: E1213 14:39:17.990082 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.090557 kubelet[3631]: E1213 14:39:18.090534 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.191252 kubelet[3631]: E1213 14:39:18.191141 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.291549 kubelet[3631]: E1213 14:39:18.291534 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.392010 kubelet[3631]: E1213 14:39:18.391988 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.492941 kubelet[3631]: E1213 14:39:18.492888 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.593303 kubelet[3631]: E1213 14:39:18.593275 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.693583 kubelet[3631]: E1213 14:39:18.693559 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.794112 kubelet[3631]: E1213 14:39:18.794083 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.894305 kubelet[3631]: E1213 14:39:18.894283 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:18.995225 kubelet[3631]: E1213 14:39:18.995206 3631 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:19.008044 kubelet[3631]: W1213 14:39:19.008024 3631 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:39:19.264175 kubelet[3631]: W1213 14:39:19.264153 3631 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:39:19.421485 systemd[1]: Reloading requested from client PID 4070 ('systemctl') (unit session-11.scope)...
Dec 13 14:39:19.421495 systemd[1]: Reloading...
Dec 13 14:39:19.487722 zram_generator::config[4113]: No configuration found.
Dec 13 14:39:19.576561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:39:19.656437 systemd[1]: Reloading finished in 234 ms.
Dec 13 14:39:19.689473 kubelet[3631]: I1213 14:39:19.689449 3631 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:39:19.689516 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:39:19.709607 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:39:19.709875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:39:19.709924 systemd[1]: kubelet.service: Consumed 1.387s CPU time, 137.6M memory peak, 0B memory swap peak.
Dec 13 14:39:19.723104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:39:19.822919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:39:19.826691 (kubelet)[4171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 14:39:19.855968 kubelet[4171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:39:19.855968 kubelet[4171]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:39:19.855968 kubelet[4171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:39:19.856265 kubelet[4171]: I1213 14:39:19.856026 4171 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:39:19.860648 kubelet[4171]: I1213 14:39:19.860626 4171 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:39:19.860648 kubelet[4171]: I1213 14:39:19.860646 4171 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:39:19.860859 kubelet[4171]: I1213 14:39:19.860846 4171 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:39:19.863138 kubelet[4171]: I1213 14:39:19.863121 4171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:39:19.865152 kubelet[4171]: I1213 14:39:19.865132 4171 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:39:19.867428 kubelet[4171]: E1213 14:39:19.867405 4171 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:39:19.867428 kubelet[4171]: I1213 14:39:19.867426 4171 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:39:19.885752 kubelet[4171]: I1213 14:39:19.885717 4171 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:39:19.885913 kubelet[4171]: I1213 14:39:19.885894 4171 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:39:19.886031 kubelet[4171]: I1213 14:39:19.886003 4171 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:39:19.886174 kubelet[4171]: I1213 14:39:19.886027 4171 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.0.0-a-f374b16159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:39:19.886241 kubelet[4171]: I1213 14:39:19.886185 4171 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:39:19.886241 kubelet[4171]: I1213 14:39:19.886193 4171 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:39:19.886241 kubelet[4171]: I1213 14:39:19.886222 4171 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:39:19.886320 kubelet[4171]: I1213 14:39:19.886310 4171 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:39:19.886343 kubelet[4171]: I1213 14:39:19.886324 4171 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:39:19.886343 kubelet[4171]: I1213 14:39:19.886343 4171 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:39:19.886387 kubelet[4171]: I1213 14:39:19.886353 4171 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:39:19.886865 kubelet[4171]: I1213 14:39:19.886844 4171 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 14:39:19.887334 kubelet[4171]: I1213 14:39:19.887320 4171 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:39:19.887698 kubelet[4171]: I1213 14:39:19.887684 4171 server.go:1269] "Started kubelet"
Dec 13 14:39:19.887744 kubelet[4171]: I1213 14:39:19.887714 4171 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:39:19.887792 kubelet[4171]: I1213 14:39:19.887750 4171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:39:19.887962 kubelet[4171]: I1213 14:39:19.887949 4171 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:39:19.889485 kubelet[4171]: I1213 14:39:19.889470 4171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:39:19.889513 kubelet[4171]: I1213 14:39:19.889489 4171 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:39:19.889575 kubelet[4171]: I1213 14:39:19.889563 4171 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:39:19.889625 kubelet[4171]: E1213 14:39:19.889590 4171 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-f374b16159\" not found"
Dec 13 14:39:19.889688 kubelet[4171]: E1213 14:39:19.889668 4171 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:39:19.889738 kubelet[4171]: I1213 14:39:19.889730 4171 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:39:19.889876 kubelet[4171]: I1213 14:39:19.889868 4171 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:39:19.890568 kubelet[4171]: I1213 14:39:19.890554 4171 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:39:19.890660 kubelet[4171]: I1213 14:39:19.890644 4171 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:39:19.891280 kubelet[4171]: I1213 14:39:19.891263 4171 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:39:19.891359 kubelet[4171]: I1213 14:39:19.891343 4171 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:39:19.896558 kubelet[4171]: I1213 14:39:19.896518 4171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:39:19.898010 kubelet[4171]: I1213 14:39:19.897987 4171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:39:19.898039 kubelet[4171]: I1213 14:39:19.898014 4171 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:39:19.898039 kubelet[4171]: I1213 14:39:19.898035 4171 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:39:19.898098 kubelet[4171]: E1213 14:39:19.898079 4171 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:39:19.922286 kubelet[4171]: I1213 14:39:19.922262 4171 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:39:19.922286 kubelet[4171]: I1213 14:39:19.922280 4171 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:39:19.922382 kubelet[4171]: I1213 14:39:19.922299 4171 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:39:19.922456 kubelet[4171]: I1213 14:39:19.922441 4171 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:39:19.922482 kubelet[4171]: I1213 14:39:19.922453 4171 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:39:19.922482 kubelet[4171]: I1213 14:39:19.922472 4171 policy_none.go:49] "None policy: Start"
Dec 13 14:39:19.922981 kubelet[4171]: I1213 14:39:19.922962 4171 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:39:19.922981 kubelet[4171]: I1213 14:39:19.922985 4171 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:39:19.923167 kubelet[4171]: I1213 14:39:19.923157 4171 state_mem.go:75] "Updated machine memory state"
Dec 13 14:39:19.926051 kubelet[4171]: I1213 14:39:19.926034 4171 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:39:19.926200 kubelet[4171]: I1213 14:39:19.926187 4171 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:39:19.926224 kubelet[4171]: I1213 14:39:19.926200 4171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:39:19.926333 kubelet[4171]: I1213 14:39:19.926318 4171 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:39:20.002176 kubelet[4171]: W1213 14:39:20.002157 4171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:39:20.002176 kubelet[4171]: W1213 14:39:20.002158 4171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:39:20.002270 kubelet[4171]: E1213 14:39:20.002225 4171 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.0.0-a-f374b16159\" already exists" pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.002452 kubelet[4171]: W1213 14:39:20.002435 4171 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:39:20.002484 kubelet[4171]: E1213 14:39:20.002474 4171 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.0.0-a-f374b16159\" already exists" pod="kube-system/kube-scheduler-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.028976 kubelet[4171]: I1213 14:39:20.028959 4171 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.047144 kubelet[4171]: I1213 14:39:20.047113 4171 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.047213 kubelet[4171]: I1213 14:39:20.047183 4171 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091456 kubelet[4171]: I1213 14:39:20.091374 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091456 kubelet[4171]: I1213 14:39:20.091401 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091456 kubelet[4171]: I1213 14:39:20.091419 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091456 kubelet[4171]: I1213 14:39:20.091438 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091456 kubelet[4171]: I1213 14:39:20.091454 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091680 kubelet[4171]: I1213 14:39:20.091471 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091680 kubelet[4171]: I1213 14:39:20.091487 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9630653fd3a33dea96184fe3e27b49f6-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-f374b16159\" (UID: \"9630653fd3a33dea96184fe3e27b49f6\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091680 kubelet[4171]: I1213 14:39:20.091502 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2271c0a79b75a10cc171fd31d3388822-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-f374b16159\" (UID: \"2271c0a79b75a10cc171fd31d3388822\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.091680 kubelet[4171]: I1213 14:39:20.091516 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e7a8ae1a4d9f1cb01f1f22b2397c4c4-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-f374b16159\" (UID: \"7e7a8ae1a4d9f1cb01f1f22b2397c4c4\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159"
Dec 13 14:39:20.131580 update_engine[2635]: I20241213 14:39:20.131521 2635 update_attempter.cc:509] Updating boot flags...
Dec 13 14:39:20.163723 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4238)
Dec 13 14:39:20.191718 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4241)
Dec 13 14:39:20.211722 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4241)
Dec 13 14:39:20.887222 kubelet[4171]: I1213 14:39:20.887197 4171 apiserver.go:52] "Watching apiserver"
Dec 13 14:39:20.890171 kubelet[4171]: I1213 14:39:20.890157 4171 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:39:20.922928 kubelet[4171]: I1213 14:39:20.922309 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.0.0-a-f374b16159" podStartSLOduration=1.922295127 podStartE2EDuration="1.922295127s" podCreationTimestamp="2024-12-13 14:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:20.921995842 +0000 UTC m=+1.092477206" watchObservedRunningTime="2024-12-13 14:39:20.922295127 +0000 UTC m=+1.092776491"
Dec 13 14:39:20.935949 kubelet[4171]: I1213 14:39:20.935895 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.0.0-a-f374b16159" podStartSLOduration=1.93587596 podStartE2EDuration="1.93587596s" podCreationTimestamp="2024-12-13 14:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:20.930285597 +0000 UTC m=+1.100766961" watchObservedRunningTime="2024-12-13 14:39:20.93587596 +0000 UTC m=+1.106357363"
Dec 13 14:39:20.936064 kubelet[4171]: I1213 14:39:20.936014 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-f374b16159" podStartSLOduration=0.936006819 podStartE2EDuration="936.006819ms" podCreationTimestamp="2024-12-13 14:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:20.935978795 +0000 UTC m=+1.106460159" watchObservedRunningTime="2024-12-13 14:39:20.936006819 +0000 UTC m=+1.106488183"
Dec 13 14:39:24.238064 sudo[2930]: pam_unix(sudo:session): session closed for user root
Dec 13 14:39:24.302510 sshd[2929]: Connection closed by 147.75.109.163 port 44038
Dec 13 14:39:24.303330 sshd-session[2927]: pam_unix(sshd:session): session closed for user core
Dec 13 14:39:24.306136 systemd[1]: sshd@8-147.28.228.38:22-147.75.109.163:44038.service: Deactivated successfully.
Dec 13 14:39:24.307685 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:39:24.307853 systemd[1]: session-11.scope: Consumed 6.566s CPU time, 173.3M memory peak, 0B memory swap peak.
Dec 13 14:39:24.308179 systemd-logind[2623]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:39:24.308766 systemd-logind[2623]: Removed session 11.
Dec 13 14:39:25.176603 kubelet[4171]: I1213 14:39:25.176562 4171 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:39:25.177056 kubelet[4171]: I1213 14:39:25.177008 4171 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:39:25.177092 containerd[2640]: time="2024-12-13T14:39:25.176857697Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:39:25.936956 systemd[1]: Created slice kubepods-besteffort-pode6e5d959_8802_46a0_bdfb_048f72c56d17.slice - libcontainer container kubepods-besteffort-pode6e5d959_8802_46a0_bdfb_048f72c56d17.slice.
Dec 13 14:39:26.030624 kubelet[4171]: I1213 14:39:26.030587 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6e5d959-8802-46a0-bdfb-048f72c56d17-xtables-lock\") pod \"kube-proxy-6qt85\" (UID: \"e6e5d959-8802-46a0-bdfb-048f72c56d17\") " pod="kube-system/kube-proxy-6qt85" Dec 13 14:39:26.030624 kubelet[4171]: I1213 14:39:26.030617 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6e5d959-8802-46a0-bdfb-048f72c56d17-lib-modules\") pod \"kube-proxy-6qt85\" (UID: \"e6e5d959-8802-46a0-bdfb-048f72c56d17\") " pod="kube-system/kube-proxy-6qt85" Dec 13 14:39:26.031273 kubelet[4171]: I1213 14:39:26.030635 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k4pc\" (UniqueName: \"kubernetes.io/projected/e6e5d959-8802-46a0-bdfb-048f72c56d17-kube-api-access-5k4pc\") pod \"kube-proxy-6qt85\" (UID: \"e6e5d959-8802-46a0-bdfb-048f72c56d17\") " pod="kube-system/kube-proxy-6qt85" Dec 13 14:39:26.031273 kubelet[4171]: I1213 14:39:26.030657 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6e5d959-8802-46a0-bdfb-048f72c56d17-kube-proxy\") pod \"kube-proxy-6qt85\" (UID: \"e6e5d959-8802-46a0-bdfb-048f72c56d17\") " pod="kube-system/kube-proxy-6qt85" Dec 13 14:39:26.244143 systemd[1]: Created slice kubepods-besteffort-podb33f5741_05cc_4670_9f69_72824ee3bdde.slice - libcontainer container kubepods-besteffort-podb33f5741_05cc_4670_9f69_72824ee3bdde.slice. 
Dec 13 14:39:26.252745 containerd[2640]: time="2024-12-13T14:39:26.252689114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qt85,Uid:e6e5d959-8802-46a0-bdfb-048f72c56d17,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:26.264932 containerd[2640]: time="2024-12-13T14:39:26.264867257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:39:26.264932 containerd[2640]: time="2024-12-13T14:39:26.264919410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:39:26.264932 containerd[2640]: time="2024-12-13T14:39:26.264929362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:26.265037 containerd[2640]: time="2024-12-13T14:39:26.264999859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:26.286824 systemd[1]: Started cri-containerd-983b88cc7c960d3388638e3b3859f39714f89d6aeeaff3328e0d71948fb6308c.scope - libcontainer container 983b88cc7c960d3388638e3b3859f39714f89d6aeeaff3328e0d71948fb6308c. 
Dec 13 14:39:26.303186 containerd[2640]: time="2024-12-13T14:39:26.303155283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6qt85,Uid:e6e5d959-8802-46a0-bdfb-048f72c56d17,Namespace:kube-system,Attempt:0,} returns sandbox id \"983b88cc7c960d3388638e3b3859f39714f89d6aeeaff3328e0d71948fb6308c\""
Dec 13 14:39:26.305131 containerd[2640]: time="2024-12-13T14:39:26.305105344Z" level=info msg="CreateContainer within sandbox \"983b88cc7c960d3388638e3b3859f39714f89d6aeeaff3328e0d71948fb6308c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:39:26.311659 containerd[2640]: time="2024-12-13T14:39:26.311627450Z" level=info msg="CreateContainer within sandbox \"983b88cc7c960d3388638e3b3859f39714f89d6aeeaff3328e0d71948fb6308c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2910210bf01dd3db89f5d793e19678751b268c7e23133463a9cf9e82ed88d250\""
Dec 13 14:39:26.312073 containerd[2640]: time="2024-12-13T14:39:26.312049953Z" level=info msg="StartContainer for \"2910210bf01dd3db89f5d793e19678751b268c7e23133463a9cf9e82ed88d250\""
Dec 13 14:39:26.331795 kubelet[4171]: I1213 14:39:26.331762 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b33f5741-05cc-4670-9f69-72824ee3bdde-var-lib-calico\") pod \"tigera-operator-76c4976dd7-twgmx\" (UID: \"b33f5741-05cc-4670-9f69-72824ee3bdde\") " pod="tigera-operator/tigera-operator-76c4976dd7-twgmx"
Dec 13 14:39:26.341837 kubelet[4171]: I1213 14:39:26.331807 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmr4d\" (UniqueName: \"kubernetes.io/projected/b33f5741-05cc-4670-9f69-72824ee3bdde-kube-api-access-hmr4d\") pod \"tigera-operator-76c4976dd7-twgmx\" (UID: \"b33f5741-05cc-4670-9f69-72824ee3bdde\") " pod="tigera-operator/tigera-operator-76c4976dd7-twgmx"
Dec 13 14:39:26.341817 systemd[1]: Started cri-containerd-2910210bf01dd3db89f5d793e19678751b268c7e23133463a9cf9e82ed88d250.scope - libcontainer container 2910210bf01dd3db89f5d793e19678751b268c7e23133463a9cf9e82ed88d250.
Dec 13 14:39:26.364672 containerd[2640]: time="2024-12-13T14:39:26.364644625Z" level=info msg="StartContainer for \"2910210bf01dd3db89f5d793e19678751b268c7e23133463a9cf9e82ed88d250\" returns successfully"
Dec 13 14:39:26.546887 containerd[2640]: time="2024-12-13T14:39:26.546855142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-twgmx,Uid:b33f5741-05cc-4670-9f69-72824ee3bdde,Namespace:tigera-operator,Attempt:0,}"
Dec 13 14:39:26.559666 containerd[2640]: time="2024-12-13T14:39:26.559293893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:26.559705 containerd[2640]: time="2024-12-13T14:39:26.559668599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:26.559705 containerd[2640]: time="2024-12-13T14:39:26.559685744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:26.559799 containerd[2640]: time="2024-12-13T14:39:26.559780659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:26.580817 systemd[1]: Started cri-containerd-13c11b0529086734873b4c5f401aa58b674f3324bdc8db8e1a867b26a80dd135.scope - libcontainer container 13c11b0529086734873b4c5f401aa58b674f3324bdc8db8e1a867b26a80dd135.
Dec 13 14:39:26.603926 containerd[2640]: time="2024-12-13T14:39:26.603884221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-twgmx,Uid:b33f5741-05cc-4670-9f69-72824ee3bdde,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"13c11b0529086734873b4c5f401aa58b674f3324bdc8db8e1a867b26a80dd135\"" Dec 13 14:39:26.605040 containerd[2640]: time="2024-12-13T14:39:26.605019369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:39:26.925626 kubelet[4171]: I1213 14:39:26.925498 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6qt85" podStartSLOduration=1.9254826729999999 podStartE2EDuration="1.925482673s" podCreationTimestamp="2024-12-13 14:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:26.925433277 +0000 UTC m=+7.095914641" watchObservedRunningTime="2024-12-13 14:39:26.925482673 +0000 UTC m=+7.095964037" Dec 13 14:39:27.510799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881660134.mount: Deactivated successfully. 
Dec 13 14:39:28.091091 containerd[2640]: time="2024-12-13T14:39:28.091017726Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125980"
Dec 13 14:39:28.091091 containerd[2640]: time="2024-12-13T14:39:28.091019924Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:39:28.091995 containerd[2640]: time="2024-12-13T14:39:28.091969836Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:39:28.094190 containerd[2640]: time="2024-12-13T14:39:28.094168536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:39:28.094823 containerd[2640]: time="2024-12-13T14:39:28.094801224Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.489755158s"
Dec 13 14:39:28.094844 containerd[2640]: time="2024-12-13T14:39:28.094829605Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Dec 13 14:39:28.096428 containerd[2640]: time="2024-12-13T14:39:28.096408887Z" level=info msg="CreateContainer within sandbox \"13c11b0529086734873b4c5f401aa58b674f3324bdc8db8e1a867b26a80dd135\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 14:39:28.101015 containerd[2640]: time="2024-12-13T14:39:28.100990600Z" level=info msg="CreateContainer within sandbox \"13c11b0529086734873b4c5f401aa58b674f3324bdc8db8e1a867b26a80dd135\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf4cd60f6a6c727f6ae58f74dc34e565570bad4a64b420a0115cedac8ae81b78\""
Dec 13 14:39:28.101338 containerd[2640]: time="2024-12-13T14:39:28.101312021Z" level=info msg="StartContainer for \"bf4cd60f6a6c727f6ae58f74dc34e565570bad4a64b420a0115cedac8ae81b78\""
Dec 13 14:39:28.134813 systemd[1]: Started cri-containerd-bf4cd60f6a6c727f6ae58f74dc34e565570bad4a64b420a0115cedac8ae81b78.scope - libcontainer container bf4cd60f6a6c727f6ae58f74dc34e565570bad4a64b420a0115cedac8ae81b78.
Dec 13 14:39:28.152888 containerd[2640]: time="2024-12-13T14:39:28.152861003Z" level=info msg="StartContainer for \"bf4cd60f6a6c727f6ae58f74dc34e565570bad4a64b420a0115cedac8ae81b78\" returns successfully"
Dec 13 14:39:28.927042 kubelet[4171]: I1213 14:39:28.926998 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-twgmx" podStartSLOduration=1.436275732 podStartE2EDuration="2.926983128s" podCreationTimestamp="2024-12-13 14:39:26 +0000 UTC" firstStartedPulling="2024-12-13 14:39:26.604694378 +0000 UTC m=+6.775175702" lastFinishedPulling="2024-12-13 14:39:28.095401734 +0000 UTC m=+8.265883098" observedRunningTime="2024-12-13 14:39:28.926903182 +0000 UTC m=+9.097384546" watchObservedRunningTime="2024-12-13 14:39:28.926983128 +0000 UTC m=+9.097464452"
Dec 13 14:39:32.275633 systemd[1]: Created slice kubepods-besteffort-pod5f9a5479_67eb_434b_9594_4cfe711ec2ea.slice - libcontainer container kubepods-besteffort-pod5f9a5479_67eb_434b_9594_4cfe711ec2ea.slice.
Dec 13 14:39:32.366597 kubelet[4171]: I1213 14:39:32.366560 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f9a5479-67eb-434b-9594-4cfe711ec2ea-tigera-ca-bundle\") pod \"calico-typha-596476d696-5f4c6\" (UID: \"5f9a5479-67eb-434b-9594-4cfe711ec2ea\") " pod="calico-system/calico-typha-596476d696-5f4c6" Dec 13 14:39:32.366597 kubelet[4171]: I1213 14:39:32.366599 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8hv\" (UniqueName: \"kubernetes.io/projected/5f9a5479-67eb-434b-9594-4cfe711ec2ea-kube-api-access-6h8hv\") pod \"calico-typha-596476d696-5f4c6\" (UID: \"5f9a5479-67eb-434b-9594-4cfe711ec2ea\") " pod="calico-system/calico-typha-596476d696-5f4c6" Dec 13 14:39:32.366977 kubelet[4171]: I1213 14:39:32.366618 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f9a5479-67eb-434b-9594-4cfe711ec2ea-typha-certs\") pod \"calico-typha-596476d696-5f4c6\" (UID: \"5f9a5479-67eb-434b-9594-4cfe711ec2ea\") " pod="calico-system/calico-typha-596476d696-5f4c6" Dec 13 14:39:32.474090 systemd[1]: Created slice kubepods-besteffort-poddfd171e0_06f1_481a_8c3d_7481fbe0f055.slice - libcontainer container kubepods-besteffort-poddfd171e0_06f1_481a_8c3d_7481fbe0f055.slice. 
Dec 13 14:39:32.567305 kubelet[4171]: I1213 14:39:32.567221 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-policysync\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567305 kubelet[4171]: I1213 14:39:32.567255 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-cni-bin-dir\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567305 kubelet[4171]: I1213 14:39:32.567274 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dfd171e0-06f1-481a-8c3d-7481fbe0f055-node-certs\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567305 kubelet[4171]: I1213 14:39:32.567293 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-var-lib-calico\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567517 kubelet[4171]: I1213 14:39:32.567311 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-flexvol-driver-host\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567517 kubelet[4171]: I1213 14:39:32.567330 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-xtables-lock\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567517 kubelet[4171]: I1213 14:39:32.567351 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-lib-modules\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567517 kubelet[4171]: I1213 14:39:32.567382 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-cni-net-dir\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567517 kubelet[4171]: I1213 14:39:32.567474 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwz8r\" (UniqueName: \"kubernetes.io/projected/dfd171e0-06f1-481a-8c3d-7481fbe0f055-kube-api-access-gwz8r\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567632 kubelet[4171]: I1213 14:39:32.567556 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfd171e0-06f1-481a-8c3d-7481fbe0f055-tigera-ca-bundle\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567632 kubelet[4171]: I1213 14:39:32.567592 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-var-run-calico\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.567632 kubelet[4171]: I1213 14:39:32.567613 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dfd171e0-06f1-481a-8c3d-7481fbe0f055-cni-log-dir\") pod \"calico-node-gjhjh\" (UID: \"dfd171e0-06f1-481a-8c3d-7481fbe0f055\") " pod="calico-system/calico-node-gjhjh"
Dec 13 14:39:32.579839 containerd[2640]: time="2024-12-13T14:39:32.579781569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-596476d696-5f4c6,Uid:5f9a5479-67eb-434b-9594-4cfe711ec2ea,Namespace:calico-system,Attempt:0,}"
Dec 13 14:39:32.593645 containerd[2640]: time="2024-12-13T14:39:32.593591494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:32.593645 containerd[2640]: time="2024-12-13T14:39:32.593639789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:32.593697 containerd[2640]: time="2024-12-13T14:39:32.593650913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:32.593744 containerd[2640]: time="2024-12-13T14:39:32.593728297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:32.611887 systemd[1]: Started cri-containerd-a906b38082e5d7e5a4ce51fba2c09827d13f39f502c250afc33c8c284e0cd062.scope - libcontainer container a906b38082e5d7e5a4ce51fba2c09827d13f39f502c250afc33c8c284e0cd062.
Dec 13 14:39:32.634739 containerd[2640]: time="2024-12-13T14:39:32.634686641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-596476d696-5f4c6,Uid:5f9a5479-67eb-434b-9594-4cfe711ec2ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"a906b38082e5d7e5a4ce51fba2c09827d13f39f502c250afc33c8c284e0cd062\"" Dec 13 14:39:32.635793 containerd[2640]: time="2024-12-13T14:39:32.635772667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:39:32.665802 kubelet[4171]: E1213 14:39:32.665765 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90" Dec 13 14:39:32.669711 kubelet[4171]: E1213 14:39:32.669689 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.669760 kubelet[4171]: W1213 14:39:32.669706 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.669760 kubelet[4171]: E1213 14:39:32.669730 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.671078 kubelet[4171]: E1213 14:39:32.671061 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.671101 kubelet[4171]: W1213 14:39:32.671075 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.671101 kubelet[4171]: E1213 14:39:32.671089 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.676912 kubelet[4171]: E1213 14:39:32.676893 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.676912 kubelet[4171]: W1213 14:39:32.676910 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.676962 kubelet[4171]: E1213 14:39:32.676922 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.762388 kubelet[4171]: E1213 14:39:32.762367 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.762388 kubelet[4171]: W1213 14:39:32.762384 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.762461 kubelet[4171]: E1213 14:39:32.762403 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.762624 kubelet[4171]: E1213 14:39:32.762611 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.762624 kubelet[4171]: W1213 14:39:32.762619 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.762670 kubelet[4171]: E1213 14:39:32.762629 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.762840 kubelet[4171]: E1213 14:39:32.762829 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.762840 kubelet[4171]: W1213 14:39:32.762836 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.762885 kubelet[4171]: E1213 14:39:32.762844 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.763035 kubelet[4171]: E1213 14:39:32.763024 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.763035 kubelet[4171]: W1213 14:39:32.763032 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.763080 kubelet[4171]: E1213 14:39:32.763040 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.763257 kubelet[4171]: E1213 14:39:32.763245 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.763257 kubelet[4171]: W1213 14:39:32.763254 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.763302 kubelet[4171]: E1213 14:39:32.763261 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.763460 kubelet[4171]: E1213 14:39:32.763453 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.763482 kubelet[4171]: W1213 14:39:32.763460 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.763482 kubelet[4171]: E1213 14:39:32.763469 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.763667 kubelet[4171]: E1213 14:39:32.763656 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.763667 kubelet[4171]: W1213 14:39:32.763664 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.763724 kubelet[4171]: E1213 14:39:32.763671 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.763856 kubelet[4171]: E1213 14:39:32.763845 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.763856 kubelet[4171]: W1213 14:39:32.763853 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.763899 kubelet[4171]: E1213 14:39:32.763860 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.764045 kubelet[4171]: E1213 14:39:32.764037 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.764065 kubelet[4171]: W1213 14:39:32.764046 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.764065 kubelet[4171]: E1213 14:39:32.764053 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.764223 kubelet[4171]: E1213 14:39:32.764215 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.764245 kubelet[4171]: W1213 14:39:32.764223 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.764245 kubelet[4171]: E1213 14:39:32.764230 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.764421 kubelet[4171]: E1213 14:39:32.764413 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.764440 kubelet[4171]: W1213 14:39:32.764421 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.764440 kubelet[4171]: E1213 14:39:32.764428 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.764639 kubelet[4171]: E1213 14:39:32.764629 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.764639 kubelet[4171]: W1213 14:39:32.764636 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.764679 kubelet[4171]: E1213 14:39:32.764643 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.764859 kubelet[4171]: E1213 14:39:32.764848 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.764859 kubelet[4171]: W1213 14:39:32.764857 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.764905 kubelet[4171]: E1213 14:39:32.764864 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.765064 kubelet[4171]: E1213 14:39:32.765054 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.765064 kubelet[4171]: W1213 14:39:32.765062 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.765107 kubelet[4171]: E1213 14:39:32.765070 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.765244 kubelet[4171]: E1213 14:39:32.765236 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.765265 kubelet[4171]: W1213 14:39:32.765244 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.765265 kubelet[4171]: E1213 14:39:32.765251 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.765459 kubelet[4171]: E1213 14:39:32.765451 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.765478 kubelet[4171]: W1213 14:39:32.765458 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.765478 kubelet[4171]: E1213 14:39:32.765465 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.765671 kubelet[4171]: E1213 14:39:32.765662 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.765690 kubelet[4171]: W1213 14:39:32.765670 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.765690 kubelet[4171]: E1213 14:39:32.765677 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.765872 kubelet[4171]: E1213 14:39:32.765863 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.765896 kubelet[4171]: W1213 14:39:32.765872 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.765896 kubelet[4171]: E1213 14:39:32.765879 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.766078 kubelet[4171]: E1213 14:39:32.766068 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.766078 kubelet[4171]: W1213 14:39:32.766077 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.766129 kubelet[4171]: E1213 14:39:32.766084 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.766245 kubelet[4171]: E1213 14:39:32.766237 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.766245 kubelet[4171]: W1213 14:39:32.766244 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.766295 kubelet[4171]: E1213 14:39:32.766252 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.769564 kubelet[4171]: E1213 14:39:32.769547 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.769564 kubelet[4171]: W1213 14:39:32.769562 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.769654 kubelet[4171]: E1213 14:39:32.769575 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.769654 kubelet[4171]: I1213 14:39:32.769598 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd6da168-02f5-4bdf-b4a6-009d0a657e90-registration-dir\") pod \"csi-node-driver-qx5np\" (UID: \"fd6da168-02f5-4bdf-b4a6-009d0a657e90\") " pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:32.769857 kubelet[4171]: E1213 14:39:32.769843 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.769881 kubelet[4171]: W1213 14:39:32.769857 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.769881 kubelet[4171]: E1213 14:39:32.769869 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.769920 kubelet[4171]: I1213 14:39:32.769884 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd6da168-02f5-4bdf-b4a6-009d0a657e90-varrun\") pod \"csi-node-driver-qx5np\" (UID: \"fd6da168-02f5-4bdf-b4a6-009d0a657e90\") " pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:32.770103 kubelet[4171]: E1213 14:39:32.770093 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.770123 kubelet[4171]: W1213 14:39:32.770103 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.770123 kubelet[4171]: E1213 14:39:32.770114 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.770162 kubelet[4171]: I1213 14:39:32.770127 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrzv\" (UniqueName: \"kubernetes.io/projected/fd6da168-02f5-4bdf-b4a6-009d0a657e90-kube-api-access-vxrzv\") pod \"csi-node-driver-qx5np\" (UID: \"fd6da168-02f5-4bdf-b4a6-009d0a657e90\") " pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:32.770318 kubelet[4171]: E1213 14:39:32.770309 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.770341 kubelet[4171]: W1213 14:39:32.770319 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.770341 kubelet[4171]: E1213 14:39:32.770330 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.770376 kubelet[4171]: I1213 14:39:32.770342 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd6da168-02f5-4bdf-b4a6-009d0a657e90-kubelet-dir\") pod \"csi-node-driver-qx5np\" (UID: \"fd6da168-02f5-4bdf-b4a6-009d0a657e90\") " pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:32.770498 kubelet[4171]: E1213 14:39:32.770489 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.770520 kubelet[4171]: W1213 14:39:32.770498 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.770520 kubelet[4171]: E1213 14:39:32.770509 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.770560 kubelet[4171]: I1213 14:39:32.770522 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd6da168-02f5-4bdf-b4a6-009d0a657e90-socket-dir\") pod \"csi-node-driver-qx5np\" (UID: \"fd6da168-02f5-4bdf-b4a6-009d0a657e90\") " pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:32.770759 kubelet[4171]: E1213 14:39:32.770747 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.770759 kubelet[4171]: W1213 14:39:32.770755 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.770804 kubelet[4171]: E1213 14:39:32.770767 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.770971 kubelet[4171]: E1213 14:39:32.770961 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.770971 kubelet[4171]: W1213 14:39:32.770969 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771013 kubelet[4171]: E1213 14:39:32.770986 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.771170 kubelet[4171]: E1213 14:39:32.771160 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.771192 kubelet[4171]: W1213 14:39:32.771176 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771211 kubelet[4171]: E1213 14:39:32.771198 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.771361 kubelet[4171]: E1213 14:39:32.771354 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.771381 kubelet[4171]: W1213 14:39:32.771361 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771381 kubelet[4171]: E1213 14:39:32.771376 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.771518 kubelet[4171]: E1213 14:39:32.771510 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.771538 kubelet[4171]: W1213 14:39:32.771521 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771558 kubelet[4171]: E1213 14:39:32.771536 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.771732 kubelet[4171]: E1213 14:39:32.771725 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.771764 kubelet[4171]: W1213 14:39:32.771732 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771764 kubelet[4171]: E1213 14:39:32.771746 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.771912 kubelet[4171]: E1213 14:39:32.771904 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.771935 kubelet[4171]: W1213 14:39:32.771912 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.771935 kubelet[4171]: E1213 14:39:32.771920 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.772111 kubelet[4171]: E1213 14:39:32.772103 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.772136 kubelet[4171]: W1213 14:39:32.772111 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.772136 kubelet[4171]: E1213 14:39:32.772119 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.772344 kubelet[4171]: E1213 14:39:32.772336 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.772366 kubelet[4171]: W1213 14:39:32.772344 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.772366 kubelet[4171]: E1213 14:39:32.772351 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.772546 kubelet[4171]: E1213 14:39:32.772538 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.772569 kubelet[4171]: W1213 14:39:32.772546 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.772569 kubelet[4171]: E1213 14:39:32.772554 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.775981 containerd[2640]: time="2024-12-13T14:39:32.775947337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjhjh,Uid:dfd171e0-06f1-481a-8c3d-7481fbe0f055,Namespace:calico-system,Attempt:0,}" Dec 13 14:39:32.794378 containerd[2640]: time="2024-12-13T14:39:32.794132737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:39:32.794378 containerd[2640]: time="2024-12-13T14:39:32.794368772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:39:32.794443 containerd[2640]: time="2024-12-13T14:39:32.794393900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:32.794487 containerd[2640]: time="2024-12-13T14:39:32.794470285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:32.820820 systemd[1]: Started cri-containerd-0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803.scope - libcontainer container 0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803. Dec 13 14:39:32.836530 containerd[2640]: time="2024-12-13T14:39:32.836495729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gjhjh,Uid:dfd171e0-06f1-481a-8c3d-7481fbe0f055,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\"" Dec 13 14:39:32.871183 kubelet[4171]: E1213 14:39:32.871158 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.871183 kubelet[4171]: W1213 14:39:32.871176 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.871298 kubelet[4171]: E1213 14:39:32.871196 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.871442 kubelet[4171]: E1213 14:39:32.871430 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.871442 kubelet[4171]: W1213 14:39:32.871439 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.871484 kubelet[4171]: E1213 14:39:32.871450 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.871680 kubelet[4171]: E1213 14:39:32.871670 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.871680 kubelet[4171]: W1213 14:39:32.871677 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.871772 kubelet[4171]: E1213 14:39:32.871688 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.871998 kubelet[4171]: E1213 14:39:32.871981 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.872024 kubelet[4171]: W1213 14:39:32.871996 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.872024 kubelet[4171]: E1213 14:39:32.872012 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.872226 kubelet[4171]: E1213 14:39:32.872218 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.872249 kubelet[4171]: W1213 14:39:32.872227 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.872249 kubelet[4171]: E1213 14:39:32.872238 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.872405 kubelet[4171]: E1213 14:39:32.872395 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.872405 kubelet[4171]: W1213 14:39:32.872402 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.872447 kubelet[4171]: E1213 14:39:32.872412 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.872626 kubelet[4171]: E1213 14:39:32.872615 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.872626 kubelet[4171]: W1213 14:39:32.872623 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.872669 kubelet[4171]: E1213 14:39:32.872633 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.872880 kubelet[4171]: E1213 14:39:32.872864 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.872909 kubelet[4171]: W1213 14:39:32.872878 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.872909 kubelet[4171]: E1213 14:39:32.872894 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.873084 kubelet[4171]: E1213 14:39:32.873074 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.873084 kubelet[4171]: W1213 14:39:32.873082 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.873127 kubelet[4171]: E1213 14:39:32.873099 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.873252 kubelet[4171]: E1213 14:39:32.873245 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.873272 kubelet[4171]: W1213 14:39:32.873252 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.873294 kubelet[4171]: E1213 14:39:32.873267 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.873434 kubelet[4171]: E1213 14:39:32.873426 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.873456 kubelet[4171]: W1213 14:39:32.873433 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.873477 kubelet[4171]: E1213 14:39:32.873450 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.873613 kubelet[4171]: E1213 14:39:32.873606 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.873633 kubelet[4171]: W1213 14:39:32.873616 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.873633 kubelet[4171]: E1213 14:39:32.873627 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.873828 kubelet[4171]: E1213 14:39:32.873816 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.873853 kubelet[4171]: W1213 14:39:32.873828 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.873853 kubelet[4171]: E1213 14:39:32.873843 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.874073 kubelet[4171]: E1213 14:39:32.874063 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.874093 kubelet[4171]: W1213 14:39:32.874073 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.874093 kubelet[4171]: E1213 14:39:32.874085 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.874306 kubelet[4171]: E1213 14:39:32.874298 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.874328 kubelet[4171]: W1213 14:39:32.874306 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.874328 kubelet[4171]: E1213 14:39:32.874316 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.874487 kubelet[4171]: E1213 14:39:32.874480 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.874508 kubelet[4171]: W1213 14:39:32.874487 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.874508 kubelet[4171]: E1213 14:39:32.874497 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.874715 kubelet[4171]: E1213 14:39:32.874703 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.874740 kubelet[4171]: W1213 14:39:32.874716 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.874763 kubelet[4171]: E1213 14:39:32.874742 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.874883 kubelet[4171]: E1213 14:39:32.874875 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.874908 kubelet[4171]: W1213 14:39:32.874883 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.874908 kubelet[4171]: E1213 14:39:32.874902 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.875040 kubelet[4171]: E1213 14:39:32.875032 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.875065 kubelet[4171]: W1213 14:39:32.875041 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.875065 kubelet[4171]: E1213 14:39:32.875059 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.875182 kubelet[4171]: E1213 14:39:32.875174 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.875202 kubelet[4171]: W1213 14:39:32.875182 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.875221 kubelet[4171]: E1213 14:39:32.875199 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.875385 kubelet[4171]: E1213 14:39:32.875377 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.875443 kubelet[4171]: W1213 14:39:32.875385 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.875443 kubelet[4171]: E1213 14:39:32.875396 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.875631 kubelet[4171]: E1213 14:39:32.875623 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.875654 kubelet[4171]: W1213 14:39:32.875631 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.875654 kubelet[4171]: E1213 14:39:32.875642 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.875839 kubelet[4171]: E1213 14:39:32.875826 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.875839 kubelet[4171]: W1213 14:39:32.875835 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.875889 kubelet[4171]: E1213 14:39:32.875843 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.876050 kubelet[4171]: E1213 14:39:32.876041 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.876050 kubelet[4171]: W1213 14:39:32.876048 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.876091 kubelet[4171]: E1213 14:39:32.876055 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:32.876284 kubelet[4171]: E1213 14:39:32.876276 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.876307 kubelet[4171]: W1213 14:39:32.876284 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.876307 kubelet[4171]: E1213 14:39:32.876291 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:39:32.884893 kubelet[4171]: E1213 14:39:32.884880 4171 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:39:32.884918 kubelet[4171]: W1213 14:39:32.884894 4171 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:39:32.884918 kubelet[4171]: E1213 14:39:32.884906 4171 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:39:33.392036 containerd[2640]: time="2024-12-13T14:39:33.392003510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.392134 containerd[2640]: time="2024-12-13T14:39:33.392071531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 14:39:33.392758 containerd[2640]: time="2024-12-13T14:39:33.392729649Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.394448 containerd[2640]: time="2024-12-13T14:39:33.394427160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.395118 containerd[2640]: time="2024-12-13T14:39:33.395094842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 759.294486ms" Dec 13 14:39:33.395139 containerd[2640]: time="2024-12-13T14:39:33.395123130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 14:39:33.395859 containerd[2640]: time="2024-12-13T14:39:33.395842627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:39:33.400390 containerd[2640]: time="2024-12-13T14:39:33.400365710Z" level=info msg="CreateContainer within sandbox \"a906b38082e5d7e5a4ce51fba2c09827d13f39f502c250afc33c8c284e0cd062\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:39:33.407326 containerd[2640]: time="2024-12-13T14:39:33.407292598Z" level=info msg="CreateContainer within sandbox \"a906b38082e5d7e5a4ce51fba2c09827d13f39f502c250afc33c8c284e0cd062\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"60bbd73a9ee9c773c71c91ffe570fa1d4a519cd6f01e8a01ba28e79ab69db66f\"" Dec 13 14:39:33.407693 containerd[2640]: time="2024-12-13T14:39:33.407669191Z" level=info msg="StartContainer for \"60bbd73a9ee9c773c71c91ffe570fa1d4a519cd6f01e8a01ba28e79ab69db66f\"" Dec 13 14:39:33.431891 systemd[1]: Started cri-containerd-60bbd73a9ee9c773c71c91ffe570fa1d4a519cd6f01e8a01ba28e79ab69db66f.scope - libcontainer container 60bbd73a9ee9c773c71c91ffe570fa1d4a519cd6f01e8a01ba28e79ab69db66f. 
Dec 13 14:39:33.456044 containerd[2640]: time="2024-12-13T14:39:33.456014641Z" level=info msg="StartContainer for \"60bbd73a9ee9c773c71c91ffe570fa1d4a519cd6f01e8a01ba28e79ab69db66f\" returns successfully" Dec 13 14:39:33.751704 containerd[2640]: time="2024-12-13T14:39:33.751672184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.752004 containerd[2640]: time="2024-12-13T14:39:33.751698671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 14:39:33.752610 containerd[2640]: time="2024-12-13T14:39:33.752411766Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.754775 containerd[2640]: time="2024-12-13T14:39:33.754745790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:33.755415 containerd[2640]: time="2024-12-13T14:39:33.755055003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 359.187849ms" Dec 13 14:39:33.755415 containerd[2640]: time="2024-12-13T14:39:33.755078330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 14:39:33.762379 containerd[2640]: 
time="2024-12-13T14:39:33.762262055Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:39:33.768755 containerd[2640]: time="2024-12-13T14:39:33.768714800Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049\"" Dec 13 14:39:33.769207 containerd[2640]: time="2024-12-13T14:39:33.769190103Z" level=info msg="StartContainer for \"14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049\"" Dec 13 14:39:33.795906 systemd[1]: Started cri-containerd-14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049.scope - libcontainer container 14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049. Dec 13 14:39:33.815619 containerd[2640]: time="2024-12-13T14:39:33.815588486Z" level=info msg="StartContainer for \"14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049\" returns successfully" Dec 13 14:39:33.828207 systemd[1]: cri-containerd-14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049.scope: Deactivated successfully. 
Dec 13 14:39:33.941417 kubelet[4171]: I1213 14:39:33.941366 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-596476d696-5f4c6" podStartSLOduration=1.181186915 podStartE2EDuration="1.941348786s" podCreationTimestamp="2024-12-13 14:39:32 +0000 UTC" firstStartedPulling="2024-12-13 14:39:32.635550917 +0000 UTC m=+12.806032281" lastFinishedPulling="2024-12-13 14:39:33.395712788 +0000 UTC m=+13.566194152" observedRunningTime="2024-12-13 14:39:33.941073864 +0000 UTC m=+14.111555188" watchObservedRunningTime="2024-12-13 14:39:33.941348786 +0000 UTC m=+14.111830150" Dec 13 14:39:33.958240 containerd[2640]: time="2024-12-13T14:39:33.958150690Z" level=info msg="shim disconnected" id=14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049 namespace=k8s.io Dec 13 14:39:33.958240 containerd[2640]: time="2024-12-13T14:39:33.958199465Z" level=warning msg="cleaning up after shim disconnected" id=14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049 namespace=k8s.io Dec 13 14:39:33.958240 containerd[2640]: time="2024-12-13T14:39:33.958206867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:39:34.470918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14e06dc3fd1507d1dcf689b3b3801a0d45e91f7a23410de4ed0e632251562049-rootfs.mount: Deactivated successfully. 
Dec 13 14:39:34.899256 kubelet[4171]: E1213 14:39:34.899218 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90" Dec 13 14:39:34.926241 kubelet[4171]: I1213 14:39:34.926223 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:34.926953 containerd[2640]: time="2024-12-13T14:39:34.926928638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:39:36.369294 containerd[2640]: time="2024-12-13T14:39:36.369250977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:36.369677 containerd[2640]: time="2024-12-13T14:39:36.369290027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 14:39:36.370046 containerd[2640]: time="2024-12-13T14:39:36.370028135Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:36.371832 containerd[2640]: time="2024-12-13T14:39:36.371810309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:36.372512 containerd[2640]: time="2024-12-13T14:39:36.372487762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.445526195s" Dec 13 14:39:36.372536 containerd[2640]: time="2024-12-13T14:39:36.372517370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 14:39:36.374088 containerd[2640]: time="2024-12-13T14:39:36.374067485Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:39:36.380022 containerd[2640]: time="2024-12-13T14:39:36.379998357Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493\"" Dec 13 14:39:36.380338 containerd[2640]: time="2024-12-13T14:39:36.380317878Z" level=info msg="StartContainer for \"347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493\"" Dec 13 14:39:36.408830 systemd[1]: Started cri-containerd-347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493.scope - libcontainer container 347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493. Dec 13 14:39:36.430811 containerd[2640]: time="2024-12-13T14:39:36.430783104Z" level=info msg="StartContainer for \"347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493\" returns successfully" Dec 13 14:39:36.784800 systemd[1]: cri-containerd-347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493.scope: Deactivated successfully. 
Dec 13 14:39:36.846632 kubelet[4171]: I1213 14:39:36.846606 4171 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:39:36.863985 systemd[1]: Created slice kubepods-burstable-podd5852b00_0540_4fe3_89bd_044f6a8a7943.slice - libcontainer container kubepods-burstable-podd5852b00_0540_4fe3_89bd_044f6a8a7943.slice. Dec 13 14:39:36.867287 systemd[1]: Created slice kubepods-besteffort-pod0b0d5e48_cd22_404d_bff6_5a51346045ef.slice - libcontainer container kubepods-besteffort-pod0b0d5e48_cd22_404d_bff6_5a51346045ef.slice. Dec 13 14:39:36.871314 systemd[1]: Created slice kubepods-burstable-poddfd0925b_82a4_472a_b9e8_3168dbe42206.slice - libcontainer container kubepods-burstable-poddfd0925b_82a4_472a_b9e8_3168dbe42206.slice. Dec 13 14:39:36.874760 systemd[1]: Created slice kubepods-besteffort-poda6b33a15_ad08_44b6_8263_2f26e62973b7.slice - libcontainer container kubepods-besteffort-poda6b33a15_ad08_44b6_8263_2f26e62973b7.slice. Dec 13 14:39:36.878329 systemd[1]: Created slice kubepods-besteffort-podc2385ebc_0f6a_4f2a_89d0_c0db311721a5.slice - libcontainer container kubepods-besteffort-podc2385ebc_0f6a_4f2a_89d0_c0db311721a5.slice. 
Dec 13 14:39:36.898255 kubelet[4171]: I1213 14:39:36.898202 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b0d5e48-cd22-404d-bff6-5a51346045ef-calico-apiserver-certs\") pod \"calico-apiserver-d65999df6-fzfgz\" (UID: \"0b0d5e48-cd22-404d-bff6-5a51346045ef\") " pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:36.898385 kubelet[4171]: I1213 14:39:36.898262 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5852b00-0540-4fe3-89bd-044f6a8a7943-config-volume\") pod \"coredns-6f6b679f8f-6w4ss\" (UID: \"d5852b00-0540-4fe3-89bd-044f6a8a7943\") " pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:36.898385 kubelet[4171]: I1213 14:39:36.898296 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gbq\" (UniqueName: \"kubernetes.io/projected/c2385ebc-0f6a-4f2a-89d0-c0db311721a5-kube-api-access-v6gbq\") pod \"calico-apiserver-d65999df6-l48rl\" (UID: \"c2385ebc-0f6a-4f2a-89d0-c0db311721a5\") " pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:36.898385 kubelet[4171]: I1213 14:39:36.898328 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcnk\" (UniqueName: \"kubernetes.io/projected/0b0d5e48-cd22-404d-bff6-5a51346045ef-kube-api-access-tfcnk\") pod \"calico-apiserver-d65999df6-fzfgz\" (UID: \"0b0d5e48-cd22-404d-bff6-5a51346045ef\") " pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:36.898385 kubelet[4171]: I1213 14:39:36.898363 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zkft\" (UniqueName: \"kubernetes.io/projected/d5852b00-0540-4fe3-89bd-044f6a8a7943-kube-api-access-6zkft\") pod 
\"coredns-6f6b679f8f-6w4ss\" (UID: \"d5852b00-0540-4fe3-89bd-044f6a8a7943\") " pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:36.898548 kubelet[4171]: I1213 14:39:36.898447 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfd0925b-82a4-472a-b9e8-3168dbe42206-config-volume\") pod \"coredns-6f6b679f8f-2rmdm\" (UID: \"dfd0925b-82a4-472a-b9e8-3168dbe42206\") " pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:36.898548 kubelet[4171]: I1213 14:39:36.898481 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b33a15-ad08-44b6-8263-2f26e62973b7-tigera-ca-bundle\") pod \"calico-kube-controllers-79bc4ccb78-gx2lg\" (UID: \"a6b33a15-ad08-44b6-8263-2f26e62973b7\") " pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:36.898548 kubelet[4171]: I1213 14:39:36.898504 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2385ebc-0f6a-4f2a-89d0-c0db311721a5-calico-apiserver-certs\") pod \"calico-apiserver-d65999df6-l48rl\" (UID: \"c2385ebc-0f6a-4f2a-89d0-c0db311721a5\") " pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:36.898548 kubelet[4171]: I1213 14:39:36.898524 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn7t\" (UniqueName: \"kubernetes.io/projected/a6b33a15-ad08-44b6-8263-2f26e62973b7-kube-api-access-5wn7t\") pod \"calico-kube-controllers-79bc4ccb78-gx2lg\" (UID: \"a6b33a15-ad08-44b6-8263-2f26e62973b7\") " pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:36.898548 kubelet[4171]: I1213 14:39:36.898544 4171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cpzs7\" (UniqueName: \"kubernetes.io/projected/dfd0925b-82a4-472a-b9e8-3168dbe42206-kube-api-access-cpzs7\") pod \"coredns-6f6b679f8f-2rmdm\" (UID: \"dfd0925b-82a4-472a-b9e8-3168dbe42206\") " pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:36.902167 systemd[1]: Created slice kubepods-besteffort-podfd6da168_02f5_4bdf_b4a6_009d0a657e90.slice - libcontainer container kubepods-besteffort-podfd6da168_02f5_4bdf_b4a6_009d0a657e90.slice. Dec 13 14:39:36.903858 containerd[2640]: time="2024-12-13T14:39:36.903828501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:0,}" Dec 13 14:39:36.957355 containerd[2640]: time="2024-12-13T14:39:36.957302974Z" level=info msg="shim disconnected" id=347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493 namespace=k8s.io Dec 13 14:39:36.957355 containerd[2640]: time="2024-12-13T14:39:36.957349866Z" level=warning msg="cleaning up after shim disconnected" id=347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493 namespace=k8s.io Dec 13 14:39:36.957355 containerd[2640]: time="2024-12-13T14:39:36.957357748Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:39:37.015003 containerd[2640]: time="2024-12-13T14:39:37.014958679Z" level=error msg="Failed to destroy network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.015334 containerd[2640]: time="2024-12-13T14:39:37.015303282Z" level=error msg="encountered an error cleaning up failed sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.015393 containerd[2640]: time="2024-12-13T14:39:37.015376500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.015578 kubelet[4171]: E1213 14:39:37.015545 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.015616 kubelet[4171]: E1213 14:39:37.015601 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:37.015643 kubelet[4171]: E1213 14:39:37.015620 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:37.015685 kubelet[4171]: E1213 14:39:37.015659 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90" Dec 13 14:39:37.167082 containerd[2640]: time="2024-12-13T14:39:37.167009457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:37.170047 containerd[2640]: time="2024-12-13T14:39:37.170020344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:39:37.173626 containerd[2640]: time="2024-12-13T14:39:37.173600088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:0,}" Dec 13 14:39:37.177094 containerd[2640]: time="2024-12-13T14:39:37.177067645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:0,}" Dec 13 14:39:37.180612 containerd[2640]: time="2024-12-13T14:39:37.180585454Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:39:37.214272 containerd[2640]: time="2024-12-13T14:39:37.214224693Z" level=error msg="Failed to destroy network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.214568 containerd[2640]: time="2024-12-13T14:39:37.214531247Z" level=error msg="Failed to destroy network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.214624 containerd[2640]: time="2024-12-13T14:39:37.214563254Z" level=error msg="encountered an error cleaning up failed sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.214648 containerd[2640]: time="2024-12-13T14:39:37.214622189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.214886 kubelet[4171]: E1213 14:39:37.214846 4171 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.214943 kubelet[4171]: E1213 14:39:37.214907 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:37.214943 kubelet[4171]: E1213 14:39:37.214925 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:37.214991 kubelet[4171]: E1213 14:39:37.214962 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" podUID="0b0d5e48-cd22-404d-bff6-5a51346045ef" Dec 13 14:39:37.215038 containerd[2640]: time="2024-12-13T14:39:37.214977074Z" level=error msg="encountered an error cleaning up failed sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.215038 containerd[2640]: time="2024-12-13T14:39:37.215026886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.215170 kubelet[4171]: E1213 14:39:37.215134 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.215200 kubelet[4171]: E1213 14:39:37.215187 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:37.215225 kubelet[4171]: E1213 14:39:37.215206 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:37.215261 kubelet[4171]: E1213 14:39:37.215241 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6w4ss" podUID="d5852b00-0540-4fe3-89bd-044f6a8a7943" Dec 13 14:39:37.216955 containerd[2640]: time="2024-12-13T14:39:37.216923544Z" level=error msg="Failed to destroy network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.217383 containerd[2640]: time="2024-12-13T14:39:37.217357289Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.217424 containerd[2640]: time="2024-12-13T14:39:37.217409541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.217738 kubelet[4171]: E1213 14:39:37.217515 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.217738 kubelet[4171]: E1213 14:39:37.217543 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:37.217738 kubelet[4171]: E1213 14:39:37.217557 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:37.217830 kubelet[4171]: E1213 14:39:37.217585 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rmdm" podUID="dfd0925b-82a4-472a-b9e8-3168dbe42206" Dec 13 14:39:37.221441 containerd[2640]: time="2024-12-13T14:39:37.221412628Z" level=error msg="Failed to destroy network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.221740 containerd[2640]: time="2024-12-13T14:39:37.221720782Z" level=error msg="encountered an error cleaning up failed sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.221783 containerd[2640]: time="2024-12-13T14:39:37.221767873Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.221899 kubelet[4171]: E1213 14:39:37.221878 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.221928 kubelet[4171]: E1213 14:39:37.221916 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:37.221955 kubelet[4171]: E1213 14:39:37.221932 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:37.221989 kubelet[4171]: E1213 
14:39:37.221965 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" podUID="a6b33a15-ad08-44b6-8263-2f26e62973b7" Dec 13 14:39:37.225491 containerd[2640]: time="2024-12-13T14:39:37.225463525Z" level=error msg="Failed to destroy network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.225783 containerd[2640]: time="2024-12-13T14:39:37.225762437Z" level=error msg="encountered an error cleaning up failed sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.225821 containerd[2640]: time="2024-12-13T14:39:37.225805888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.225946 kubelet[4171]: E1213 14:39:37.225926 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.225972 kubelet[4171]: E1213 14:39:37.225958 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:37.225993 kubelet[4171]: E1213 14:39:37.225972 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:37.226016 kubelet[4171]: E1213 14:39:37.226003 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" podUID="c2385ebc-0f6a-4f2a-89d0-c0db311721a5" Dec 13 14:39:37.387757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347e06137759ef7610ff77639186b38ab01ec0b3fbc343baba8e3b49c2b2c493-rootfs.mount: Deactivated successfully. Dec 13 14:39:37.932315 kubelet[4171]: I1213 14:39:37.932289 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40" Dec 13 14:39:37.932780 containerd[2640]: time="2024-12-13T14:39:37.932760074Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:39:37.932929 containerd[2640]: time="2024-12-13T14:39:37.932910510Z" level=info msg="Ensure that sandbox c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40 in task-service has been cleanup successfully" Dec 13 14:39:37.933010 kubelet[4171]: I1213 14:39:37.932997 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1" Dec 13 14:39:37.933079 containerd[2640]: time="2024-12-13T14:39:37.933066468Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully" Dec 13 14:39:37.933102 containerd[2640]: time="2024-12-13T14:39:37.933079791Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully" Dec 13 
14:39:37.933355 containerd[2640]: time="2024-12-13T14:39:37.933338934Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:39:37.933471 containerd[2640]: time="2024-12-13T14:39:37.933455122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:39:37.933505 containerd[2640]: time="2024-12-13T14:39:37.933491170Z" level=info msg="Ensure that sandbox d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1 in task-service has been cleanup successfully" Dec 13 14:39:37.933656 kubelet[4171]: I1213 14:39:37.933641 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e" Dec 13 14:39:37.933682 containerd[2640]: time="2024-12-13T14:39:37.933661331Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully" Dec 13 14:39:37.933682 containerd[2640]: time="2024-12-13T14:39:37.933673654Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully" Dec 13 14:39:37.934005 containerd[2640]: time="2024-12-13T14:39:37.933983249Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:39:37.934039 containerd[2640]: time="2024-12-13T14:39:37.934022339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:1,}" Dec 13 14:39:37.934140 containerd[2640]: time="2024-12-13T14:39:37.934127244Z" level=info msg="Ensure that sandbox a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e in task-service has been cleanup successfully" Dec 13 14:39:37.934301 containerd[2640]: 
time="2024-12-13T14:39:37.934288443Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully" Dec 13 14:39:37.934321 containerd[2640]: time="2024-12-13T14:39:37.934301486Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully" Dec 13 14:39:37.934362 kubelet[4171]: I1213 14:39:37.934349 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2" Dec 13 14:39:37.934513 systemd[1]: run-netns-cni\x2dfb33676f\x2daa80\x2d3d54\x2d61ef\x2d5ad0c81c0bd0.mount: Deactivated successfully. Dec 13 14:39:37.934649 containerd[2640]: time="2024-12-13T14:39:37.934603319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:39:37.934729 containerd[2640]: time="2024-12-13T14:39:37.934715626Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:39:37.934860 containerd[2640]: time="2024-12-13T14:39:37.934847538Z" level=info msg="Ensure that sandbox 49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2 in task-service has been cleanup successfully" Dec 13 14:39:37.935029 containerd[2640]: time="2024-12-13T14:39:37.935014618Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully" Dec 13 14:39:37.935062 containerd[2640]: time="2024-12-13T14:39:37.935050067Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully" Dec 13 14:39:37.935350 containerd[2640]: time="2024-12-13T14:39:37.935333655Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:1,}" Dec 13 14:39:37.936198 kubelet[4171]: I1213 14:39:37.936182 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0" Dec 13 14:39:37.936228 containerd[2640]: time="2024-12-13T14:39:37.936197744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:39:37.936302 systemd[1]: run-netns-cni\x2db9f977e9\x2d4477\x2dfd67\x2d112a\x2d1ef605c055b2.mount: Deactivated successfully. Dec 13 14:39:37.936370 systemd[1]: run-netns-cni\x2d885bf449\x2d89e3\x2d975f\x2dcc0c\x2d4c6204fcfee7.mount: Deactivated successfully. Dec 13 14:39:37.936416 systemd[1]: run-netns-cni\x2dd2d8e24b\x2d3d02\x2dd75e\x2d160f\x2d12b430239c20.mount: Deactivated successfully. Dec 13 14:39:37.936524 containerd[2640]: time="2024-12-13T14:39:37.936508499Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:39:37.936756 containerd[2640]: time="2024-12-13T14:39:37.936739754Z" level=info msg="Ensure that sandbox e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0 in task-service has been cleanup successfully" Dec 13 14:39:37.936781 kubelet[4171]: I1213 14:39:37.936766 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf" Dec 13 14:39:37.936908 containerd[2640]: time="2024-12-13T14:39:37.936894592Z" level=info msg="TearDown network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully" Dec 13 14:39:37.936930 containerd[2640]: time="2024-12-13T14:39:37.936908235Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully" Dec 13 14:39:37.937137 containerd[2640]: 
time="2024-12-13T14:39:37.937121046Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:39:37.937268 containerd[2640]: time="2024-12-13T14:39:37.937254919Z" level=info msg="Ensure that sandbox f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf in task-service has been cleanup successfully" Dec 13 14:39:37.937309 containerd[2640]: time="2024-12-13T14:39:37.937290767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:1,}" Dec 13 14:39:37.937416 containerd[2640]: time="2024-12-13T14:39:37.937403635Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully" Dec 13 14:39:37.937442 containerd[2640]: time="2024-12-13T14:39:37.937417278Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully" Dec 13 14:39:37.937756 containerd[2640]: time="2024-12-13T14:39:37.937733554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:1,}" Dec 13 14:39:37.939278 systemd[1]: run-netns-cni\x2d2410a52c\x2d102c\x2daabf\x2d5361\x2d9c58a29dc8d6.mount: Deactivated successfully. Dec 13 14:39:37.939339 systemd[1]: run-netns-cni\x2d8de47b4c\x2dace4\x2d6221\x2d755f\x2d688428dc611c.mount: Deactivated successfully. 
Dec 13 14:39:37.981304 containerd[2640]: time="2024-12-13T14:39:37.981254698Z" level=error msg="Failed to destroy network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981560 containerd[2640]: time="2024-12-13T14:39:37.981531805Z" level=error msg="Failed to destroy network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981623 containerd[2640]: time="2024-12-13T14:39:37.981602222Z" level=error msg="encountered an error cleaning up failed sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981682 containerd[2640]: time="2024-12-13T14:39:37.981666838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981879 containerd[2640]: time="2024-12-13T14:39:37.981857804Z" level=error msg="encountered an error cleaning up failed sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981906 kubelet[4171]: E1213 14:39:37.981848 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.981948 kubelet[4171]: E1213 14:39:37.981916 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:37.981948 kubelet[4171]: E1213 14:39:37.981935 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:37.981991 containerd[2640]: time="2024-12-13T14:39:37.981909256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.982012 kubelet[4171]: E1213 14:39:37.981975 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" podUID="c2385ebc-0f6a-4f2a-89d0-c0db311721a5" Dec 13 14:39:37.982053 kubelet[4171]: E1213 14:39:37.982017 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.982079 kubelet[4171]: E1213 14:39:37.982060 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:37.982103 kubelet[4171]: E1213 14:39:37.982080 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:37.982284 kubelet[4171]: E1213 14:39:37.982116 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6w4ss" podUID="d5852b00-0540-4fe3-89bd-044f6a8a7943" Dec 13 14:39:37.983675 containerd[2640]: time="2024-12-13T14:39:37.983647756Z" level=error msg="Failed to destroy network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.983892 containerd[2640]: time="2024-12-13T14:39:37.983861848Z" level=error msg="Failed to destroy network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.983975 containerd[2640]: time="2024-12-13T14:39:37.983952149Z" level=error msg="encountered an error cleaning up failed sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.983996 containerd[2640]: time="2024-12-13T14:39:37.983974075Z" level=error msg="Failed to destroy network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.984022 containerd[2640]: time="2024-12-13T14:39:37.984004762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.984140 kubelet[4171]: E1213 14:39:37.984118 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.984174 
kubelet[4171]: E1213 14:39:37.984154 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:37.984197 kubelet[4171]: E1213 14:39:37.984177 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:37.984223 containerd[2640]: time="2024-12-13T14:39:37.984158479Z" level=error msg="encountered an error cleaning up failed sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.984223 containerd[2640]: time="2024-12-13T14:39:37.984201049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:37.984266 
kubelet[4171]: E1213 14:39:37.984205 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" podUID="0b0d5e48-cd22-404d-bff6-5a51346045ef"
Dec 13 14:39:37.984302 containerd[2640]: time="2024-12-13T14:39:37.984268586Z" level=error msg="encountered an error cleaning up failed sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.984322 kubelet[4171]: E1213 14:39:37.984302 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.984350 containerd[2640]: time="2024-12-13T14:39:37.984311836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.984383 kubelet[4171]: E1213 14:39:37.984346 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np"
Dec 13 14:39:37.984383 kubelet[4171]: E1213 14:39:37.984364 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np"
Dec 13 14:39:37.984422 kubelet[4171]: E1213 14:39:37.984395 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90"
Dec 13 14:39:37.984422 kubelet[4171]: E1213 14:39:37.984403 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.984481 kubelet[4171]: E1213 14:39:37.984428 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg"
Dec 13 14:39:37.984481 kubelet[4171]: E1213 14:39:37.984442 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg"
Dec 13 14:39:37.984481 kubelet[4171]: E1213 14:39:37.984465 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" podUID="a6b33a15-ad08-44b6-8263-2f26e62973b7"
Dec 13 14:39:37.985743 containerd[2640]: time="2024-12-13T14:39:37.985716055Z" level=error msg="Failed to destroy network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.986092 containerd[2640]: time="2024-12-13T14:39:37.986068580Z" level=error msg="encountered an error cleaning up failed sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.986127 containerd[2640]: time="2024-12-13T14:39:37.986111631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.986217 kubelet[4171]: E1213 14:39:37.986203 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:37.986244 kubelet[4171]: E1213 14:39:37.986224 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm"
Dec 13 14:39:37.986244 kubelet[4171]: E1213 14:39:37.986238 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm"
Dec 13 14:39:37.986286 kubelet[4171]: E1213 14:39:37.986263 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rmdm" podUID="dfd0925b-82a4-472a-b9e8-3168dbe42206"
Dec 13 14:39:38.381660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2-shm.mount: Deactivated successfully.
Dec 13 14:39:38.381745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49-shm.mount: Deactivated successfully.
Dec 13 14:39:38.939120 kubelet[4171]: I1213 14:39:38.939091 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2"
Dec 13 14:39:38.940107 containerd[2640]: time="2024-12-13T14:39:38.940077682Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\""
Dec 13 14:39:38.940298 containerd[2640]: time="2024-12-13T14:39:38.940278288Z" level=info msg="Ensure that sandbox a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2 in task-service has been cleanup successfully"
Dec 13 14:39:38.940929 containerd[2640]: time="2024-12-13T14:39:38.940910753Z" level=info msg="TearDown network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" successfully"
Dec 13 14:39:38.940951 containerd[2640]: time="2024-12-13T14:39:38.940929397Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" returns successfully"
Dec 13 14:39:38.941658 kubelet[4171]: I1213 14:39:38.941635 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c"
Dec 13 14:39:38.941783 containerd[2640]: time="2024-12-13T14:39:38.941762988Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\""
Dec 13 14:39:38.941856 containerd[2640]: time="2024-12-13T14:39:38.941845447Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully"
Dec 13 14:39:38.941877 containerd[2640]: time="2024-12-13T14:39:38.941856809Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully"
Dec 13 14:39:38.941967 systemd[1]: run-netns-cni\x2de69a2200\x2d34bc\x2dddd4\x2d4e9a\x2dbc09a1fd1f89.mount: Deactivated successfully.
Dec 13 14:39:38.942106 containerd[2640]: time="2024-12-13T14:39:38.942053054Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\""
Dec 13 14:39:38.942210 containerd[2640]: time="2024-12-13T14:39:38.942195327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:2,}"
Dec 13 14:39:38.942284 containerd[2640]: time="2024-12-13T14:39:38.942205329Z" level=info msg="Ensure that sandbox d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c in task-service has been cleanup successfully"
Dec 13 14:39:38.942453 containerd[2640]: time="2024-12-13T14:39:38.942439022Z" level=info msg="TearDown network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" successfully"
Dec 13 14:39:38.942473 containerd[2640]: time="2024-12-13T14:39:38.942453906Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" returns successfully"
Dec 13 14:39:38.942650 containerd[2640]: time="2024-12-13T14:39:38.942630266Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\""
Dec 13 14:39:38.942726 containerd[2640]: time="2024-12-13T14:39:38.942715325Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully"
Dec 13 14:39:38.942756 containerd[2640]: time="2024-12-13T14:39:38.942726648Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully"
Dec 13 14:39:38.942885 kubelet[4171]: I1213 14:39:38.942865 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c"
Dec 13 14:39:38.943066 containerd[2640]: time="2024-12-13T14:39:38.943052923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:2,}"
Dec 13 14:39:38.943328 containerd[2640]: time="2024-12-13T14:39:38.943310622Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\""
Dec 13 14:39:38.943469 containerd[2640]: time="2024-12-13T14:39:38.943457695Z" level=info msg="Ensure that sandbox f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c in task-service has been cleanup successfully"
Dec 13 14:39:38.943636 containerd[2640]: time="2024-12-13T14:39:38.943621253Z" level=info msg="TearDown network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" successfully"
Dec 13 14:39:38.943659 containerd[2640]: time="2024-12-13T14:39:38.943636896Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" returns successfully"
Dec 13 14:39:38.943831 systemd[1]: run-netns-cni\x2d192933ba\x2d989b\x2d1592\x2d40dd\x2d0fcf3d515c20.mount: Deactivated successfully.
Dec 13 14:39:38.943875 containerd[2640]: time="2024-12-13T14:39:38.943822218Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\""
Dec 13 14:39:38.943896 kubelet[4171]: I1213 14:39:38.943834 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341"
Dec 13 14:39:38.943917 containerd[2640]: time="2024-12-13T14:39:38.943893595Z" level=info msg="TearDown network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully"
Dec 13 14:39:38.943917 containerd[2640]: time="2024-12-13T14:39:38.943904877Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully"
Dec 13 14:39:38.944248 containerd[2640]: time="2024-12-13T14:39:38.944226311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:2,}"
Dec 13 14:39:38.944304 containerd[2640]: time="2024-12-13T14:39:38.944223110Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\""
Dec 13 14:39:38.944435 containerd[2640]: time="2024-12-13T14:39:38.944423316Z" level=info msg="Ensure that sandbox ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341 in task-service has been cleanup successfully"
Dec 13 14:39:38.944599 containerd[2640]: time="2024-12-13T14:39:38.944584393Z" level=info msg="TearDown network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" successfully"
Dec 13 14:39:38.944622 containerd[2640]: time="2024-12-13T14:39:38.944599556Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" returns successfully"
Dec 13 14:39:38.944799 containerd[2640]: time="2024-12-13T14:39:38.944779637Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\""
Dec 13 14:39:38.944835 kubelet[4171]: I1213 14:39:38.944818 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487"
Dec 13 14:39:38.944871 containerd[2640]: time="2024-12-13T14:39:38.944855455Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully"
Dec 13 14:39:38.944871 containerd[2640]: time="2024-12-13T14:39:38.944868098Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully"
Dec 13 14:39:38.945213 containerd[2640]: time="2024-12-13T14:39:38.945196213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:2,}"
Dec 13 14:39:38.945285 containerd[2640]: time="2024-12-13T14:39:38.945199413Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\""
Dec 13 14:39:38.945376 containerd[2640]: time="2024-12-13T14:39:38.945363411Z" level=info msg="Ensure that sandbox a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487 in task-service has been cleanup successfully"
Dec 13 14:39:38.945572 containerd[2640]: time="2024-12-13T14:39:38.945555455Z" level=info msg="TearDown network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" successfully"
Dec 13 14:39:38.945595 containerd[2640]: time="2024-12-13T14:39:38.945573019Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" returns successfully"
Dec 13 14:39:38.945647 systemd[1]: run-netns-cni\x2d8fc7c650\x2dfb91\x2de553\x2df1fa\x2d88ea014aaf48.mount: Deactivated successfully.
Dec 13 14:39:38.945720 kubelet[4171]: I1213 14:39:38.945702 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49"
Dec 13 14:39:38.945721 systemd[1]: run-netns-cni\x2deea996dd\x2d4234\x2d9780\x2d366d\x2d8179dd0133ee.mount: Deactivated successfully.
Dec 13 14:39:38.945781 containerd[2640]: time="2024-12-13T14:39:38.945759941Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\""
Dec 13 14:39:38.945846 containerd[2640]: time="2024-12-13T14:39:38.945835359Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully"
Dec 13 14:39:38.945879 containerd[2640]: time="2024-12-13T14:39:38.945846361Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully"
Dec 13 14:39:38.946077 containerd[2640]: time="2024-12-13T14:39:38.946060650Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\""
Dec 13 14:39:38.946174 containerd[2640]: time="2024-12-13T14:39:38.946157032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:2,}"
Dec 13 14:39:38.946353 containerd[2640]: time="2024-12-13T14:39:38.946333593Z" level=info msg="Ensure that sandbox 3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49 in task-service has been cleanup successfully"
Dec 13 14:39:38.946521 containerd[2640]: time="2024-12-13T14:39:38.946507792Z" level=info msg="TearDown network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" successfully"
Dec 13 14:39:38.946542 containerd[2640]: time="2024-12-13T14:39:38.946521476Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" returns successfully"
Dec 13 14:39:38.946779 containerd[2640]: time="2024-12-13T14:39:38.946764611Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\""
Dec 13 14:39:38.946849 containerd[2640]: time="2024-12-13T14:39:38.946838868Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully"
Dec 13 14:39:38.946870 containerd[2640]: time="2024-12-13T14:39:38.946849190Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully"
Dec 13 14:39:38.947190 containerd[2640]: time="2024-12-13T14:39:38.947167703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:2,}"
Dec 13 14:39:38.949138 systemd[1]: run-netns-cni\x2d26f93ee6\x2d4be5\x2d8457\x2d384d\x2dd762285e9e34.mount: Deactivated successfully.
Dec 13 14:39:38.949210 systemd[1]: run-netns-cni\x2d6b28241d\x2d3628\x2d2ffe\x2dcd97\x2d394b32f33f06.mount: Deactivated successfully.
Dec 13 14:39:38.989332 containerd[2640]: time="2024-12-13T14:39:38.989287013Z" level=error msg="Failed to destroy network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.989674 containerd[2640]: time="2024-12-13T14:39:38.989651536Z" level=error msg="encountered an error cleaning up failed sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.989734 containerd[2640]: time="2024-12-13T14:39:38.989716791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.989950 kubelet[4171]: E1213 14:39:38.989917 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990003 kubelet[4171]: E1213 14:39:38.989982 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg"
Dec 13 14:39:38.990030 kubelet[4171]: E1213 14:39:38.990003 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg"
Dec 13 14:39:38.990070 kubelet[4171]: E1213 14:39:38.990048 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" podUID="a6b33a15-ad08-44b6-8263-2f26e62973b7"
Dec 13 14:39:38.990424 containerd[2640]: time="2024-12-13T14:39:38.990395186Z" level=error msg="Failed to destroy network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990582 containerd[2640]: time="2024-12-13T14:39:38.990560864Z" level=error msg="Failed to destroy network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990730 containerd[2640]: time="2024-12-13T14:39:38.990704056Z" level=error msg="encountered an error cleaning up failed sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990772 containerd[2640]: time="2024-12-13T14:39:38.990757229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990895 kubelet[4171]: E1213 14:39:38.990875 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990919 kubelet[4171]: E1213 14:39:38.990910 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss"
Dec 13 14:39:38.990947 containerd[2640]: time="2024-12-13T14:39:38.990880657Z" level=error msg="encountered an error cleaning up failed sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.990968 kubelet[4171]: E1213 14:39:38.990926 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss"
Dec 13 14:39:38.990989 kubelet[4171]: E1213 14:39:38.990957 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6w4ss" podUID="d5852b00-0540-4fe3-89bd-044f6a8a7943"
Dec 13 14:39:38.991027 containerd[2640]: time="2024-12-13T14:39:38.990958795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.991121 kubelet[4171]: E1213 14:39:38.991097 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.991152 kubelet[4171]: E1213 14:39:38.991137 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np"
Dec 13 14:39:38.991174 kubelet[4171]: E1213 14:39:38.991154 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np"
Dec 13 14:39:38.991205 kubelet[4171]: E1213 14:39:38.991187 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90"
Dec 13 14:39:38.992653 containerd[2640]: time="2024-12-13T14:39:38.992622735Z" level=error msg="Failed to destroy network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.993008 containerd[2640]: time="2024-12-13T14:39:38.992982457Z" level=error msg="encountered an error cleaning up failed sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.993045 containerd[2640]: time="2024-12-13T14:39:38.993029628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.993162 kubelet[4171]: E1213 14:39:38.993141 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.993192 kubelet[4171]: E1213 14:39:38.993179 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm"
Dec 13 14:39:38.993224 kubelet[4171]: E1213 14:39:38.993197 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm"
Dec 13 14:39:38.993246 kubelet[4171]: E1213 14:39:38.993226 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rmdm" podUID="dfd0925b-82a4-472a-b9e8-3168dbe42206"
Dec 13 14:39:38.993898 containerd[2640]: time="2024-12-13T14:39:38.993863459Z" level=error msg="Failed to destroy network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.994207 containerd[2640]: time="2024-12-13T14:39:38.994186973Z" level=error msg="encountered an error cleaning up failed sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.994243 containerd[2640]: time="2024-12-13T14:39:38.994228262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:39:38.994364 kubelet[4171]: E1213 14:39:38.994341 4171 log.go:32]
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:38.994391 kubelet[4171]: E1213 14:39:38.994379 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:38.994413 kubelet[4171]: E1213 14:39:38.994396 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:38.994446 kubelet[4171]: E1213 14:39:38.994429 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" podUID="c2385ebc-0f6a-4f2a-89d0-c0db311721a5" Dec 13 14:39:38.995461 containerd[2640]: time="2024-12-13T14:39:38.995437699Z" level=error msg="Failed to destroy network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:38.995740 containerd[2640]: time="2024-12-13T14:39:38.995721444Z" level=error msg="encountered an error cleaning up failed sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:38.995778 containerd[2640]: time="2024-12-13T14:39:38.995761973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:38.995890 kubelet[4171]: E1213 14:39:38.995869 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 14:39:38.995914 kubelet[4171]: E1213 14:39:38.995902 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:38.995945 kubelet[4171]: E1213 14:39:38.995918 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:38.995969 kubelet[4171]: E1213 14:39:38.995944 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" podUID="0b0d5e48-cd22-404d-bff6-5a51346045ef" Dec 13 14:39:39.382783 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622-shm.mount: Deactivated successfully. Dec 13 14:39:39.382862 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba-shm.mount: Deactivated successfully. Dec 13 14:39:39.948727 kubelet[4171]: I1213 14:39:39.948683 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad" Dec 13 14:39:39.949158 containerd[2640]: time="2024-12-13T14:39:39.949129136Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" Dec 13 14:39:39.949314 containerd[2640]: time="2024-12-13T14:39:39.949293532Z" level=info msg="Ensure that sandbox 2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad in task-service has been cleanup successfully" Dec 13 14:39:39.949477 containerd[2640]: time="2024-12-13T14:39:39.949461808Z" level=info msg="TearDown network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" successfully" Dec 13 14:39:39.949499 containerd[2640]: time="2024-12-13T14:39:39.949476492Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" returns successfully" Dec 13 14:39:39.949761 containerd[2640]: time="2024-12-13T14:39:39.949744870Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" Dec 13 14:39:39.949829 containerd[2640]: time="2024-12-13T14:39:39.949818566Z" level=info msg="TearDown network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" successfully" Dec 13 14:39:39.949850 containerd[2640]: time="2024-12-13T14:39:39.949829728Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" returns successfully" Dec 13 14:39:39.949891 
kubelet[4171]: I1213 14:39:39.949879 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56" Dec 13 14:39:39.950089 containerd[2640]: time="2024-12-13T14:39:39.950068140Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:39:39.950183 containerd[2640]: time="2024-12-13T14:39:39.950171162Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully" Dec 13 14:39:39.950213 containerd[2640]: time="2024-12-13T14:39:39.950184165Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully" Dec 13 14:39:39.950290 containerd[2640]: time="2024-12-13T14:39:39.950268583Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" Dec 13 14:39:39.950428 containerd[2640]: time="2024-12-13T14:39:39.950415175Z" level=info msg="Ensure that sandbox 1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56 in task-service has been cleanup successfully" Dec 13 14:39:39.950567 containerd[2640]: time="2024-12-13T14:39:39.950547284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:3,}" Dec 13 14:39:39.950634 containerd[2640]: time="2024-12-13T14:39:39.950616779Z" level=info msg="TearDown network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" successfully" Dec 13 14:39:39.950657 containerd[2640]: time="2024-12-13T14:39:39.950635383Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" returns successfully" Dec 13 14:39:39.950942 containerd[2640]: time="2024-12-13T14:39:39.950927326Z" level=info msg="StopPodSandbox for 
\"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" Dec 13 14:39:39.951010 containerd[2640]: time="2024-12-13T14:39:39.950999582Z" level=info msg="TearDown network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" successfully" Dec 13 14:39:39.951031 containerd[2640]: time="2024-12-13T14:39:39.951009824Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" returns successfully" Dec 13 14:39:39.951118 kubelet[4171]: I1213 14:39:39.951103 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791" Dec 13 14:39:39.951125 systemd[1]: run-netns-cni\x2d4981290e\x2dcb2e\x2dc70f\x2d6afe\x2dcfe22b0bf02c.mount: Deactivated successfully. Dec 13 14:39:39.951276 containerd[2640]: time="2024-12-13T14:39:39.951210187Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:39:39.951308 containerd[2640]: time="2024-12-13T14:39:39.951294366Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully" Dec 13 14:39:39.951332 containerd[2640]: time="2024-12-13T14:39:39.951307728Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully" Dec 13 14:39:39.951469 containerd[2640]: time="2024-12-13T14:39:39.951449639Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" Dec 13 14:39:39.951599 containerd[2640]: time="2024-12-13T14:39:39.951585909Z" level=info msg="Ensure that sandbox 747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791 in task-service has been cleanup successfully" Dec 13 14:39:39.951620 containerd[2640]: time="2024-12-13T14:39:39.951597951Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:3,}" Dec 13 14:39:39.951767 containerd[2640]: time="2024-12-13T14:39:39.951750945Z" level=info msg="TearDown network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" successfully" Dec 13 14:39:39.951800 containerd[2640]: time="2024-12-13T14:39:39.951767348Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" returns successfully" Dec 13 14:39:39.952023 containerd[2640]: time="2024-12-13T14:39:39.952004319Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" Dec 13 14:39:39.952095 containerd[2640]: time="2024-12-13T14:39:39.952085457Z" level=info msg="TearDown network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" successfully" Dec 13 14:39:39.952119 containerd[2640]: time="2024-12-13T14:39:39.952096259Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" returns successfully" Dec 13 14:39:39.952309 containerd[2640]: time="2024-12-13T14:39:39.952294222Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:39:39.952397 containerd[2640]: time="2024-12-13T14:39:39.952384962Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully" Dec 13 14:39:39.952417 containerd[2640]: time="2024-12-13T14:39:39.952398045Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully" Dec 13 14:39:39.952450 kubelet[4171]: I1213 14:39:39.952434 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba" Dec 13 14:39:39.952754 containerd[2640]: 
time="2024-12-13T14:39:39.952736638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:3,}" Dec 13 14:39:39.952845 containerd[2640]: time="2024-12-13T14:39:39.952828458Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" Dec 13 14:39:39.952983 containerd[2640]: time="2024-12-13T14:39:39.952970209Z" level=info msg="Ensure that sandbox 4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba in task-service has been cleanup successfully" Dec 13 14:39:39.953120 systemd[1]: run-netns-cni\x2d888353bd\x2d695d\x2d23c4\x2dfb72\x2df49e1643a1bd.mount: Deactivated successfully. Dec 13 14:39:39.953191 systemd[1]: run-netns-cni\x2d6ecdd062\x2d1d2a\x2d814d\x2dc5da\x2dbf1a92bee6a8.mount: Deactivated successfully. Dec 13 14:39:39.953231 containerd[2640]: time="2024-12-13T14:39:39.953131084Z" level=info msg="TearDown network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" successfully" Dec 13 14:39:39.953231 containerd[2640]: time="2024-12-13T14:39:39.953145407Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" returns successfully" Dec 13 14:39:39.953353 containerd[2640]: time="2024-12-13T14:39:39.953337488Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" Dec 13 14:39:39.953416 containerd[2640]: time="2024-12-13T14:39:39.953405263Z" level=info msg="TearDown network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" successfully" Dec 13 14:39:39.953441 containerd[2640]: time="2024-12-13T14:39:39.953416785Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" returns successfully" Dec 13 14:39:39.953643 containerd[2640]: time="2024-12-13T14:39:39.953622190Z" level=info 
msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:39:39.953681 kubelet[4171]: I1213 14:39:39.953669 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622" Dec 13 14:39:39.953727 containerd[2640]: time="2024-12-13T14:39:39.953715490Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully" Dec 13 14:39:39.953748 containerd[2640]: time="2024-12-13T14:39:39.953727213Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully" Dec 13 14:39:39.954060 containerd[2640]: time="2024-12-13T14:39:39.954041041Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" Dec 13 14:39:39.954101 containerd[2640]: time="2024-12-13T14:39:39.954084730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:3,}" Dec 13 14:39:39.954191 containerd[2640]: time="2024-12-13T14:39:39.954179951Z" level=info msg="Ensure that sandbox 992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622 in task-service has been cleanup successfully" Dec 13 14:39:39.954406 containerd[2640]: time="2024-12-13T14:39:39.954392397Z" level=info msg="TearDown network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" successfully" Dec 13 14:39:39.954427 containerd[2640]: time="2024-12-13T14:39:39.954406720Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" returns successfully" Dec 13 14:39:39.954609 containerd[2640]: time="2024-12-13T14:39:39.954595361Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" Dec 13 14:39:39.954676 
containerd[2640]: time="2024-12-13T14:39:39.954666536Z" level=info msg="TearDown network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" successfully" Dec 13 14:39:39.954696 containerd[2640]: time="2024-12-13T14:39:39.954677139Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" returns successfully" Dec 13 14:39:39.954919 containerd[2640]: time="2024-12-13T14:39:39.954901067Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:39:39.954987 containerd[2640]: time="2024-12-13T14:39:39.954976483Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully" Dec 13 14:39:39.955006 containerd[2640]: time="2024-12-13T14:39:39.954988286Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully" Dec 13 14:39:39.955059 kubelet[4171]: I1213 14:39:39.955047 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7" Dec 13 14:39:39.955265 systemd[1]: run-netns-cni\x2dbacae5ad\x2dbd99\x2d479e\x2d0489\x2dd46fc99cf339.mount: Deactivated successfully. 
Dec 13 14:39:39.955380 containerd[2640]: time="2024-12-13T14:39:39.955361687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:3,}" Dec 13 14:39:39.955432 containerd[2640]: time="2024-12-13T14:39:39.955415699Z" level=info msg="StopPodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" Dec 13 14:39:39.955562 containerd[2640]: time="2024-12-13T14:39:39.955550208Z" level=info msg="Ensure that sandbox ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7 in task-service has been cleanup successfully" Dec 13 14:39:39.955740 containerd[2640]: time="2024-12-13T14:39:39.955726366Z" level=info msg="TearDown network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" successfully" Dec 13 14:39:39.955760 containerd[2640]: time="2024-12-13T14:39:39.955741249Z" level=info msg="StopPodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" returns successfully" Dec 13 14:39:39.956004 containerd[2640]: time="2024-12-13T14:39:39.955983982Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" Dec 13 14:39:39.956085 containerd[2640]: time="2024-12-13T14:39:39.956074441Z" level=info msg="TearDown network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" successfully" Dec 13 14:39:39.956106 containerd[2640]: time="2024-12-13T14:39:39.956085564Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" returns successfully" Dec 13 14:39:39.956325 containerd[2640]: time="2024-12-13T14:39:39.956305331Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:39:39.956420 containerd[2640]: time="2024-12-13T14:39:39.956409274Z" level=info msg="TearDown network for sandbox 
\"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully" Dec 13 14:39:39.956441 containerd[2640]: time="2024-12-13T14:39:39.956420916Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully" Dec 13 14:39:39.956792 containerd[2640]: time="2024-12-13T14:39:39.956771792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:3,}" Dec 13 14:39:39.957171 systemd[1]: run-netns-cni\x2d373472ac\x2d2675\x2d69aa\x2df65d\x2de682d7a1a07f.mount: Deactivated successfully. Dec 13 14:39:39.957250 systemd[1]: run-netns-cni\x2d5c91c7cb\x2db472\x2d9fc4\x2d3c2b\x2d2a38b6d9f00d.mount: Deactivated successfully. Dec 13 14:39:40.000582 containerd[2640]: time="2024-12-13T14:39:40.000382242Z" level=error msg="Failed to destroy network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.000582 containerd[2640]: time="2024-12-13T14:39:40.000537035Z" level=error msg="Failed to destroy network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001091 containerd[2640]: time="2024-12-13T14:39:40.000827977Z" level=error msg="encountered an error cleaning up failed sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 14:39:40.001303 containerd[2640]: time="2024-12-13T14:39:40.001258065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001415 containerd[2640]: time="2024-12-13T14:39:40.000842860Z" level=error msg="Failed to destroy network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001537 containerd[2640]: time="2024-12-13T14:39:40.001506756Z" level=error msg="encountered an error cleaning up failed sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001615 containerd[2640]: time="2024-12-13T14:39:40.001597855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001762 
containerd[2640]: time="2024-12-13T14:39:40.001722041Z" level=error msg="encountered an error cleaning up failed sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001845 containerd[2640]: time="2024-12-13T14:39:40.001813139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001888 kubelet[4171]: E1213 14:39:40.001848 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001928 kubelet[4171]: E1213 14:39:40.001903 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.001966 kubelet[4171]: E1213 14:39:40.001950 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:40.001992 kubelet[4171]: E1213 14:39:40.001970 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" Dec 13 14:39:40.001992 kubelet[4171]: E1213 14:39:40.001909 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:40.002031 kubelet[4171]: E1213 14:39:40.001965 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.002031 kubelet[4171]: E1213 14:39:40.001998 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rmdm" Dec 13 14:39:40.002031 kubelet[4171]: E1213 14:39:40.002008 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-fzfgz_calico-apiserver(0b0d5e48-cd22-404d-bff6-5a51346045ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" podUID="0b0d5e48-cd22-404d-bff6-5a51346045ef" Dec 13 14:39:40.002117 kubelet[4171]: E1213 14:39:40.002034 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:40.002117 kubelet[4171]: E1213 14:39:40.002036 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-2rmdm_kube-system(dfd0925b-82a4-472a-b9e8-3168dbe42206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rmdm" podUID="dfd0925b-82a4-472a-b9e8-3168dbe42206" Dec 13 14:39:40.002117 kubelet[4171]: E1213 14:39:40.002053 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" Dec 13 14:39:40.002193 kubelet[4171]: E1213 14:39:40.002083 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d65999df6-l48rl_calico-apiserver(c2385ebc-0f6a-4f2a-89d0-c0db311721a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" podUID="c2385ebc-0f6a-4f2a-89d0-c0db311721a5" Dec 13 14:39:40.003917 containerd[2640]: time="2024-12-13T14:39:40.003881685Z" level=error msg="Failed to destroy network for sandbox 
\"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004279 containerd[2640]: time="2024-12-13T14:39:40.004254161Z" level=error msg="encountered an error cleaning up failed sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004325 containerd[2640]: time="2024-12-13T14:39:40.004308932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004389 containerd[2640]: time="2024-12-13T14:39:40.004268164Z" level=error msg="Failed to destroy network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004485 kubelet[4171]: E1213 14:39:40.004455 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004517 kubelet[4171]: E1213 14:39:40.004503 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:40.004539 kubelet[4171]: E1213 14:39:40.004521 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-6w4ss" Dec 13 14:39:40.004576 kubelet[4171]: E1213 14:39:40.004558 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-6w4ss_kube-system(d5852b00-0540-4fe3-89bd-044f6a8a7943)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-6w4ss" podUID="d5852b00-0540-4fe3-89bd-044f6a8a7943" Dec 13 14:39:40.004692 containerd[2640]: time="2024-12-13T14:39:40.004670087Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004739 containerd[2640]: time="2024-12-13T14:39:40.004722937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004849 kubelet[4171]: E1213 14:39:40.004833 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004871 kubelet[4171]: E1213 14:39:40.004858 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:40.004901 kubelet[4171]: E1213 14:39:40.004872 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qx5np" Dec 13 14:39:40.004923 containerd[2640]: time="2024-12-13T14:39:40.004855005Z" level=error msg="Failed to destroy network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.004949 kubelet[4171]: E1213 14:39:40.004898 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qx5np_calico-system(fd6da168-02f5-4bdf-b4a6-009d0a657e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qx5np" podUID="fd6da168-02f5-4bdf-b4a6-009d0a657e90" Dec 13 14:39:40.005142 containerd[2640]: time="2024-12-13T14:39:40.005121379Z" level=error msg="encountered an error cleaning up failed sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.005180 containerd[2640]: time="2024-12-13T14:39:40.005164868Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.005297 kubelet[4171]: E1213 14:39:40.005276 4171 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:39:40.005325 kubelet[4171]: E1213 14:39:40.005313 4171 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:40.005347 kubelet[4171]: E1213 14:39:40.005328 4171 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" Dec 13 14:39:40.005375 kubelet[4171]: E1213 
14:39:40.005358 4171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bc4ccb78-gx2lg_calico-system(a6b33a15-ad08-44b6-8263-2f26e62973b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" podUID="a6b33a15-ad08-44b6-8263-2f26e62973b7" Dec 13 14:39:40.382171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755308554.mount: Deactivated successfully. Dec 13 14:39:40.398917 containerd[2640]: time="2024-12-13T14:39:40.398876049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:40.398984 containerd[2640]: time="2024-12-13T14:39:40.398923019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 14:39:40.399499 containerd[2640]: time="2024-12-13T14:39:40.399478773Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:40.401013 containerd[2640]: time="2024-12-13T14:39:40.400992804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:40.401699 containerd[2640]: time="2024-12-13T14:39:40.401680945Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 2.465459197s" Dec 13 14:39:40.401735 containerd[2640]: time="2024-12-13T14:39:40.401701910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 14:39:40.408131 containerd[2640]: time="2024-12-13T14:39:40.408098384Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:39:40.416686 containerd[2640]: time="2024-12-13T14:39:40.416654222Z" level=info msg="CreateContainer within sandbox \"0c12d097e91baa730578666a9c9fa6de8784563d16a5a83de4ef2d159d689803\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7605d8adf715bdcb311d8b0bb1f58fec6e8dde1e645ff4a9e5d66d3d41ba57d0\"" Dec 13 14:39:40.416984 containerd[2640]: time="2024-12-13T14:39:40.416962165Z" level=info msg="StartContainer for \"7605d8adf715bdcb311d8b0bb1f58fec6e8dde1e645ff4a9e5d66d3d41ba57d0\"" Dec 13 14:39:40.442822 systemd[1]: Started cri-containerd-7605d8adf715bdcb311d8b0bb1f58fec6e8dde1e645ff4a9e5d66d3d41ba57d0.scope - libcontainer container 7605d8adf715bdcb311d8b0bb1f58fec6e8dde1e645ff4a9e5d66d3d41ba57d0. Dec 13 14:39:40.464681 containerd[2640]: time="2024-12-13T14:39:40.464650764Z" level=info msg="StartContainer for \"7605d8adf715bdcb311d8b0bb1f58fec6e8dde1e645ff4a9e5d66d3d41ba57d0\" returns successfully" Dec 13 14:39:40.574653 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:39:40.574760 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 14:39:40.958621 kubelet[4171]: I1213 14:39:40.958565 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a" Dec 13 14:39:40.959094 containerd[2640]: time="2024-12-13T14:39:40.959066878Z" level=info msg="StopPodSandbox for \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\"" Dec 13 14:39:40.959268 containerd[2640]: time="2024-12-13T14:39:40.959253877Z" level=info msg="Ensure that sandbox 1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a in task-service has been cleanup successfully" Dec 13 14:39:40.959505 containerd[2640]: time="2024-12-13T14:39:40.959489565Z" level=info msg="TearDown network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" successfully" Dec 13 14:39:40.959527 containerd[2640]: time="2024-12-13T14:39:40.959506409Z" level=info msg="StopPodSandbox for \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" returns successfully" Dec 13 14:39:40.959731 containerd[2640]: time="2024-12-13T14:39:40.959696328Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" Dec 13 14:39:40.959801 containerd[2640]: time="2024-12-13T14:39:40.959789227Z" level=info msg="TearDown network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" successfully" Dec 13 14:39:40.959845 containerd[2640]: time="2024-12-13T14:39:40.959801109Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" returns successfully" Dec 13 14:39:40.959977 containerd[2640]: time="2024-12-13T14:39:40.959961822Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" Dec 13 14:39:40.960039 containerd[2640]: time="2024-12-13T14:39:40.960028236Z" level=info msg="TearDown network for sandbox 
\"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" successfully" Dec 13 14:39:40.960060 containerd[2640]: time="2024-12-13T14:39:40.960039078Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" returns successfully" Dec 13 14:39:40.960122 kubelet[4171]: I1213 14:39:40.960107 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0" Dec 13 14:39:40.960275 containerd[2640]: time="2024-12-13T14:39:40.960257843Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:39:40.960343 containerd[2640]: time="2024-12-13T14:39:40.960333298Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully" Dec 13 14:39:40.960364 containerd[2640]: time="2024-12-13T14:39:40.960344021Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully" Dec 13 14:39:40.960551 containerd[2640]: time="2024-12-13T14:39:40.960529819Z" level=info msg="StopPodSandbox for \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\"" Dec 13 14:39:40.960657 containerd[2640]: time="2024-12-13T14:39:40.960639321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:4,}" Dec 13 14:39:40.960694 containerd[2640]: time="2024-12-13T14:39:40.960680930Z" level=info msg="Ensure that sandbox c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0 in task-service has been cleanup successfully" Dec 13 14:39:40.960917 containerd[2640]: time="2024-12-13T14:39:40.960898255Z" level=info msg="TearDown network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" successfully" Dec 13 14:39:40.960946 
containerd[2640]: time="2024-12-13T14:39:40.960918019Z" level=info msg="StopPodSandbox for \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" returns successfully" Dec 13 14:39:40.961103 containerd[2640]: time="2024-12-13T14:39:40.961087573Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" Dec 13 14:39:40.961175 containerd[2640]: time="2024-12-13T14:39:40.961163109Z" level=info msg="TearDown network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" successfully" Dec 13 14:39:40.961195 containerd[2640]: time="2024-12-13T14:39:40.961175271Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" returns successfully" Dec 13 14:39:40.961373 containerd[2640]: time="2024-12-13T14:39:40.961357829Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" Dec 13 14:39:40.961444 containerd[2640]: time="2024-12-13T14:39:40.961433605Z" level=info msg="TearDown network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" successfully" Dec 13 14:39:40.961468 containerd[2640]: time="2024-12-13T14:39:40.961444567Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" returns successfully" Dec 13 14:39:40.962051 containerd[2640]: time="2024-12-13T14:39:40.962035208Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:39:40.962112 containerd[2640]: time="2024-12-13T14:39:40.962101102Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully" Dec 13 14:39:40.962131 containerd[2640]: time="2024-12-13T14:39:40.962111864Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully" Dec 13 14:39:40.962402 
containerd[2640]: time="2024-12-13T14:39:40.962385520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:4,}" Dec 13 14:39:40.963247 kubelet[4171]: I1213 14:39:40.963233 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d" Dec 13 14:39:40.963587 containerd[2640]: time="2024-12-13T14:39:40.963568443Z" level=info msg="StopPodSandbox for \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\"" Dec 13 14:39:40.963722 containerd[2640]: time="2024-12-13T14:39:40.963704671Z" level=info msg="Ensure that sandbox 220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d in task-service has been cleanup successfully" Dec 13 14:39:40.963870 containerd[2640]: time="2024-12-13T14:39:40.963857223Z" level=info msg="TearDown network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" successfully" Dec 13 14:39:40.963890 containerd[2640]: time="2024-12-13T14:39:40.963870865Z" level=info msg="StopPodSandbox for \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" returns successfully" Dec 13 14:39:40.964142 containerd[2640]: time="2024-12-13T14:39:40.964124998Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" Dec 13 14:39:40.964201 containerd[2640]: time="2024-12-13T14:39:40.964191451Z" level=info msg="TearDown network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" successfully" Dec 13 14:39:40.964222 containerd[2640]: time="2024-12-13T14:39:40.964201573Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" returns successfully" Dec 13 14:39:40.964373 containerd[2640]: time="2024-12-13T14:39:40.964359046Z" level=info msg="StopPodSandbox for 
\"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" Dec 13 14:39:40.964429 containerd[2640]: time="2024-12-13T14:39:40.964419258Z" level=info msg="TearDown network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" successfully" Dec 13 14:39:40.964453 containerd[2640]: time="2024-12-13T14:39:40.964429300Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" returns successfully" Dec 13 14:39:40.964691 kubelet[4171]: I1213 14:39:40.964676 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96" Dec 13 14:39:40.964774 containerd[2640]: time="2024-12-13T14:39:40.964753207Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:39:40.964872 containerd[2640]: time="2024-12-13T14:39:40.964860149Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully" Dec 13 14:39:40.964899 containerd[2640]: time="2024-12-13T14:39:40.964872871Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully" Dec 13 14:39:40.965113 containerd[2640]: time="2024-12-13T14:39:40.965090916Z" level=info msg="StopPodSandbox for \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\"" Dec 13 14:39:40.965174 containerd[2640]: time="2024-12-13T14:39:40.965156089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:4,}" Dec 13 14:39:40.965252 containerd[2640]: time="2024-12-13T14:39:40.965239307Z" level=info msg="Ensure that sandbox 555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96 in task-service has been cleanup successfully" Dec 13 14:39:40.965425 containerd[2640]: 
time="2024-12-13T14:39:40.965409822Z" level=info msg="TearDown network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" successfully" Dec 13 14:39:40.965445 containerd[2640]: time="2024-12-13T14:39:40.965425905Z" level=info msg="StopPodSandbox for \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" returns successfully" Dec 13 14:39:40.965603 containerd[2640]: time="2024-12-13T14:39:40.965586938Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" Dec 13 14:39:40.965669 containerd[2640]: time="2024-12-13T14:39:40.965658633Z" level=info msg="TearDown network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" successfully" Dec 13 14:39:40.965689 containerd[2640]: time="2024-12-13T14:39:40.965669595Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" returns successfully" Dec 13 14:39:40.965853 containerd[2640]: time="2024-12-13T14:39:40.965832788Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" Dec 13 14:39:40.965928 containerd[2640]: time="2024-12-13T14:39:40.965916166Z" level=info msg="TearDown network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" successfully" Dec 13 14:39:40.965948 containerd[2640]: time="2024-12-13T14:39:40.965929008Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" returns successfully" Dec 13 14:39:40.966121 kubelet[4171]: I1213 14:39:40.966107 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d" Dec 13 14:39:40.966146 containerd[2640]: time="2024-12-13T14:39:40.966120528Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:39:40.966207 
containerd[2640]: time="2024-12-13T14:39:40.966196903Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully" Dec 13 14:39:40.966227 containerd[2640]: time="2024-12-13T14:39:40.966207986Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully" Dec 13 14:39:40.966525 containerd[2640]: time="2024-12-13T14:39:40.966507407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:4,}" Dec 13 14:39:40.966609 containerd[2640]: time="2024-12-13T14:39:40.966511128Z" level=info msg="StopPodSandbox for \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\"" Dec 13 14:39:40.966744 containerd[2640]: time="2024-12-13T14:39:40.966731013Z" level=info msg="Ensure that sandbox f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d in task-service has been cleanup successfully" Dec 13 14:39:40.966909 containerd[2640]: time="2024-12-13T14:39:40.966895047Z" level=info msg="TearDown network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" successfully" Dec 13 14:39:40.966934 containerd[2640]: time="2024-12-13T14:39:40.966911250Z" level=info msg="StopPodSandbox for \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" returns successfully" Dec 13 14:39:40.967160 containerd[2640]: time="2024-12-13T14:39:40.967146418Z" level=info msg="StopPodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" Dec 13 14:39:40.967224 containerd[2640]: time="2024-12-13T14:39:40.967214552Z" level=info msg="TearDown network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" successfully" Dec 13 14:39:40.967244 containerd[2640]: time="2024-12-13T14:39:40.967226035Z" level=info msg="StopPodSandbox for 
\"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" returns successfully" Dec 13 14:39:40.967483 containerd[2640]: time="2024-12-13T14:39:40.967462443Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" Dec 13 14:39:40.967555 containerd[2640]: time="2024-12-13T14:39:40.967545140Z" level=info msg="TearDown network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" successfully" Dec 13 14:39:40.967578 kubelet[4171]: I1213 14:39:40.967544 4171 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65" Dec 13 14:39:40.967599 containerd[2640]: time="2024-12-13T14:39:40.967555342Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" returns successfully" Dec 13 14:39:40.967803 containerd[2640]: time="2024-12-13T14:39:40.967781389Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:39:40.967882 containerd[2640]: time="2024-12-13T14:39:40.967870007Z" level=info msg="TearDown network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully" Dec 13 14:39:40.967903 containerd[2640]: time="2024-12-13T14:39:40.967882570Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully" Dec 13 14:39:40.967932 containerd[2640]: time="2024-12-13T14:39:40.967905294Z" level=info msg="StopPodSandbox for \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\"" Dec 13 14:39:40.968037 containerd[2640]: time="2024-12-13T14:39:40.968025239Z" level=info msg="Ensure that sandbox 248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65 in task-service has been cleanup successfully" Dec 13 14:39:40.968190 containerd[2640]: time="2024-12-13T14:39:40.968177910Z" level=info 
msg="TearDown network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" successfully" Dec 13 14:39:40.968214 containerd[2640]: time="2024-12-13T14:39:40.968190393Z" level=info msg="StopPodSandbox for \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" returns successfully" Dec 13 14:39:40.968251 containerd[2640]: time="2024-12-13T14:39:40.968230281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:4,}" Dec 13 14:39:40.968589 containerd[2640]: time="2024-12-13T14:39:40.968569831Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" Dec 13 14:39:40.968665 containerd[2640]: time="2024-12-13T14:39:40.968654168Z" level=info msg="TearDown network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" successfully" Dec 13 14:39:40.968689 containerd[2640]: time="2024-12-13T14:39:40.968665771Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" returns successfully" Dec 13 14:39:40.968927 containerd[2640]: time="2024-12-13T14:39:40.968913261Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" Dec 13 14:39:40.968988 containerd[2640]: time="2024-12-13T14:39:40.968979115Z" level=info msg="TearDown network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" successfully" Dec 13 14:39:40.969009 containerd[2640]: time="2024-12-13T14:39:40.968988757Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" returns successfully" Dec 13 14:39:40.969250 containerd[2640]: time="2024-12-13T14:39:40.969227726Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:39:40.969330 containerd[2640]: 
time="2024-12-13T14:39:40.969319225Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully" Dec 13 14:39:40.969360 containerd[2640]: time="2024-12-13T14:39:40.969332308Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully" Dec 13 14:39:40.969661 containerd[2640]: time="2024-12-13T14:39:40.969640171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:4,}" Dec 13 14:39:40.972471 kubelet[4171]: I1213 14:39:40.972421 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gjhjh" podStartSLOduration=1.406484781 podStartE2EDuration="8.972407419s" podCreationTimestamp="2024-12-13 14:39:32 +0000 UTC" firstStartedPulling="2024-12-13 14:39:32.837293583 +0000 UTC m=+13.007774947" lastFinishedPulling="2024-12-13 14:39:40.403216261 +0000 UTC m=+20.573697585" observedRunningTime="2024-12-13 14:39:40.972023981 +0000 UTC m=+21.142505305" watchObservedRunningTime="2024-12-13 14:39:40.972407419 +0000 UTC m=+21.142888743" Dec 13 14:39:41.066446 systemd-networkd[2539]: cali4364b4dd6c6: Link UP Dec 13 14:39:41.066586 systemd-networkd[2539]: cali4364b4dd6c6: Gained carrier Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:40.989 [INFO][6798] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6798] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0 coredns-6f6b679f8f- kube-system d5852b00-0540-4fe3-89bd-044f6a8a7943 640 0 2024-12-13 14:39:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 coredns-6f6b679f8f-6w4ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4364b4dd6c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6798] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6935] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" HandleID="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6935] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" HandleID="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40008193c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-f374b16159", "pod":"coredns-6f6b679f8f-6w4ss", "timestamp":"2024-12-13 14:39:41.030046235 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6935] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159' Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.042 [INFO][6935] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.046 [INFO][6935] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.049 [INFO][6935] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.050 [INFO][6935] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.052 [INFO][6935] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.052 [INFO][6935] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.053 [INFO][6935] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a Dec 13 14:39:41.073428 
containerd[2640]: 2024-12-13 14:39:41.055 [INFO][6935] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6935] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 handle="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6935] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:39:41.073428 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6935] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" HandleID="k8s-pod-network.477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.061 [INFO][6798] cni-plugin/k8s.go 386: Populated endpoint ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", 
UID:"d5852b00-0540-4fe3-89bd-044f6a8a7943", ResourceVersion:"640", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"coredns-6f6b679f8f-6w4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4364b4dd6c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.061 [INFO][6798] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.65/32] ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.061 [INFO][6798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali4364b4dd6c6 ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.066 [INFO][6798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.066 [INFO][6798] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d5852b00-0540-4fe3-89bd-044f6a8a7943", ResourceVersion:"640", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", 
ContainerID:"477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a", Pod:"coredns-6f6b679f8f-6w4ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4364b4dd6c6", MAC:"5a:cb:0e:ae:80:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.073867 containerd[2640]: 2024-12-13 14:39:41.072 [INFO][6798] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a" Namespace="kube-system" Pod="coredns-6f6b679f8f-6w4ss" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--6w4ss-eth0" Dec 13 14:39:41.087580 containerd[2640]: time="2024-12-13T14:39:41.087356210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:39:41.087647 containerd[2640]: time="2024-12-13T14:39:41.087574893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:39:41.087647 containerd[2640]: time="2024-12-13T14:39:41.087587735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.087721 containerd[2640]: time="2024-12-13T14:39:41.087660629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.106833 systemd[1]: Started cri-containerd-477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a.scope - libcontainer container 477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a. Dec 13 14:39:41.129813 containerd[2640]: time="2024-12-13T14:39:41.129777122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6w4ss,Uid:d5852b00-0540-4fe3-89bd-044f6a8a7943,Namespace:kube-system,Attempt:4,} returns sandbox id \"477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a\"" Dec 13 14:39:41.131664 containerd[2640]: time="2024-12-13T14:39:41.131640205Z" level=info msg="CreateContainer within sandbox \"477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:39:41.137001 containerd[2640]: time="2024-12-13T14:39:41.136971204Z" level=info msg="CreateContainer within sandbox \"477bd048d04e764b07b9977e535a7d6f303e2dcdee79d563e450cb193f4a5c5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1fb740e4e1c43f18ffd6e56af1998c670ec4631ec819939c53b7423f4d62241\"" Dec 13 14:39:41.137313 containerd[2640]: time="2024-12-13T14:39:41.137285946Z" level=info msg="StartContainer for \"e1fb740e4e1c43f18ffd6e56af1998c670ec4631ec819939c53b7423f4d62241\"" Dec 13 14:39:41.162827 systemd[1]: Started cri-containerd-e1fb740e4e1c43f18ffd6e56af1998c670ec4631ec819939c53b7423f4d62241.scope - libcontainer container e1fb740e4e1c43f18ffd6e56af1998c670ec4631ec819939c53b7423f4d62241. 
Dec 13 14:39:41.167515 systemd-networkd[2539]: cali3b7da3bbdcb: Link UP Dec 13 14:39:41.167684 systemd-networkd[2539]: cali3b7da3bbdcb: Gained carrier Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:40.991 [INFO][6838] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.001 [INFO][6838] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0 coredns-6f6b679f8f- kube-system dfd0925b-82a4-472a-b9e8-3168dbe42206 646 0 2024-12-13 14:39:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 coredns-6f6b679f8f-2rmdm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b7da3bbdcb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.002 [INFO][6838] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6949] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" HandleID="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 
14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6949] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" HandleID="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004bab10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-f374b16159", "pod":"coredns-6f6b679f8f-2rmdm", "timestamp":"2024-12-13 14:39:41.030050716 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.059 [INFO][6949] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159' Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.145 [INFO][6949] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.151 [INFO][6949] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.154 [INFO][6949] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.156 [INFO][6949] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.157 [INFO][6949] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.158 [INFO][6949] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.158 [INFO][6949] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015 Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.161 [INFO][6949] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.164 [INFO][6949] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 handle="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.165 [INFO][6949] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.165 [INFO][6949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:39:41.182848 containerd[2640]: 2024-12-13 14:39:41.165 [INFO][6949] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" HandleID="k8s-pod-network.7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Workload="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.166 [INFO][6838] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dfd0925b-82a4-472a-b9e8-3168dbe42206", ResourceVersion:"646", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"coredns-6f6b679f8f-2rmdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b7da3bbdcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.166 [INFO][6838] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.66/32] ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.166 [INFO][6838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b7da3bbdcb ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.167 [INFO][6838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.168 [INFO][6838] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dfd0925b-82a4-472a-b9e8-3168dbe42206", ResourceVersion:"646", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015", Pod:"coredns-6f6b679f8f-2rmdm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b7da3bbdcb", MAC:"62:db:4b:d1:34:08", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:39:41.183362 containerd[2640]: 2024-12-13 14:39:41.175 [INFO][6838] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rmdm" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-coredns--6f6b679f8f--2rmdm-eth0"
Dec 13 14:39:41.201341 containerd[2640]: time="2024-12-13T14:39:41.201302148Z" level=info msg="StartContainer for \"e1fb740e4e1c43f18ffd6e56af1998c670ec4631ec819939c53b7423f4d62241\" returns successfully"
Dec 13 14:39:41.211242 containerd[2640]: time="2024-12-13T14:39:41.211142067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:41.211242 containerd[2640]: time="2024-12-13T14:39:41.211193197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:41.211242 containerd[2640]: time="2024-12-13T14:39:41.211204919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.211304 containerd[2640]: time="2024-12-13T14:39:41.211272932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.239898 systemd[1]: Started cri-containerd-7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015.scope - libcontainer container 7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015.
Dec 13 14:39:41.263281 containerd[2640]: time="2024-12-13T14:39:41.263250547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rmdm,Uid:dfd0925b-82a4-472a-b9e8-3168dbe42206,Namespace:kube-system,Attempt:4,} returns sandbox id \"7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015\""
Dec 13 14:39:41.265102 containerd[2640]: time="2024-12-13T14:39:41.265083865Z" level=info msg="CreateContainer within sandbox \"7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:39:41.269186 systemd-networkd[2539]: cali7af54411b74: Link UP
Dec 13 14:39:41.269362 systemd-networkd[2539]: cali7af54411b74: Gained carrier
Dec 13 14:39:41.270549 containerd[2640]: time="2024-12-13T14:39:41.270526166Z" level=info msg="CreateContainer within sandbox \"7d78e718027c9227fe4e4bba2acb6f92c578e67a994960c8217c202113874015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"437493f62174da045053613e0fbadca7146daf173b45e7cad3c6289699a11b4d\""
Dec 13 14:39:41.270862 containerd[2640]: time="2024-12-13T14:39:41.270840427Z" level=info msg="StartContainer for \"437493f62174da045053613e0fbadca7146daf173b45e7cad3c6289699a11b4d\""
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:40.989 [INFO][6807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6807] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0 calico-kube-controllers-79bc4ccb78- calico-system a6b33a15-ad08-44b6-8263-2f26e62973b7 647 0 2024-12-13 14:39:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79bc4ccb78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 calico-kube-controllers-79bc4ccb78-gx2lg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7af54411b74 [] []}} ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6807] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" HandleID="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" HandleID="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006bafa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-f374b16159", "pod":"calico-kube-controllers-79bc4ccb78-gx2lg", "timestamp":"2024-12-13 14:39:41.030054797 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.165 [INFO][6936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.165 [INFO][6936] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159'
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.244 [INFO][6936] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.253 [INFO][6936] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.256 [INFO][6936] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.257 [INFO][6936] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.259 [INFO][6936] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.259 [INFO][6936] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.260 [INFO][6936] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.262 [INFO][6936] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.266 [INFO][6936] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.266 [INFO][6936] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.266 [INFO][6936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:39:41.276133 containerd[2640]: 2024-12-13 14:39:41.266 [INFO][6936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" HandleID="k8s-pod-network.6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.267 [INFO][6807] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0", GenerateName:"calico-kube-controllers-79bc4ccb78-", Namespace:"calico-system", SelfLink:"", UID:"a6b33a15-ad08-44b6-8263-2f26e62973b7", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bc4ccb78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"calico-kube-controllers-79bc4ccb78-gx2lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7af54411b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.268 [INFO][6807] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.67/32] ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.268 [INFO][6807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7af54411b74 ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.269 [INFO][6807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.269 [INFO][6807] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0", GenerateName:"calico-kube-controllers-79bc4ccb78-", Namespace:"calico-system", SelfLink:"", UID:"a6b33a15-ad08-44b6-8263-2f26e62973b7", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bc4ccb78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736", Pod:"calico-kube-controllers-79bc4ccb78-gx2lg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7af54411b74", MAC:"92:20:10:96:84:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:39:41.276565 containerd[2640]: 2024-12-13 14:39:41.275 [INFO][6807] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736" Namespace="calico-system" Pod="calico-kube-controllers-79bc4ccb78-gx2lg" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--kube--controllers--79bc4ccb78--gx2lg-eth0"
Dec 13 14:39:41.290631 containerd[2640]: time="2024-12-13T14:39:41.290191880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:41.290631 containerd[2640]: time="2024-12-13T14:39:41.290562233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:41.290631 containerd[2640]: time="2024-12-13T14:39:41.290575675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.290748 containerd[2640]: time="2024-12-13T14:39:41.290653810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.302838 systemd[1]: Started cri-containerd-437493f62174da045053613e0fbadca7146daf173b45e7cad3c6289699a11b4d.scope - libcontainer container 437493f62174da045053613e0fbadca7146daf173b45e7cad3c6289699a11b4d.
Dec 13 14:39:41.305099 systemd[1]: Started cri-containerd-6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736.scope - libcontainer container 6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736.
Dec 13 14:39:41.320992 containerd[2640]: time="2024-12-13T14:39:41.320961880Z" level=info msg="StartContainer for \"437493f62174da045053613e0fbadca7146daf173b45e7cad3c6289699a11b4d\" returns successfully"
Dec 13 14:39:41.327918 containerd[2640]: time="2024-12-13T14:39:41.327894352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bc4ccb78-gx2lg,Uid:a6b33a15-ad08-44b6-8263-2f26e62973b7,Namespace:calico-system,Attempt:4,} returns sandbox id \"6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736\""
Dec 13 14:39:41.329013 containerd[2640]: time="2024-12-13T14:39:41.328990926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Dec 13 14:39:41.370226 systemd-networkd[2539]: calif051afe997d: Link UP
Dec 13 14:39:41.370570 systemd-networkd[2539]: calif051afe997d: Gained carrier
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:40.982 [INFO][6770] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:40.996 [INFO][6770] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0 csi-node-driver- calico-system fd6da168-02f5-4bdf-b4a6-009d0a657e90 584 0 2024-12-13 14:39:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 csi-node-driver-qx5np eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif051afe997d [] []}} ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:40.996 [INFO][6770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" HandleID="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Workload="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" HandleID="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Workload="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-f374b16159", "pod":"csi-node-driver-qx5np", "timestamp":"2024-12-13 14:39:41.030057638 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.040 [INFO][6921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.266 [INFO][6921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.267 [INFO][6921] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159'
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.351 [INFO][6921] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.355 [INFO][6921] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.358 [INFO][6921] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.359 [INFO][6921] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.360 [INFO][6921] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.360 [INFO][6921] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.361 [INFO][6921] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.364 [INFO][6921] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.367 [INFO][6921] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 handle="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.367 [INFO][6921] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.367 [INFO][6921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:39:41.389457 containerd[2640]: 2024-12-13 14:39:41.367 [INFO][6921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" HandleID="k8s-pod-network.0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Workload="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.369 [INFO][6770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd6da168-02f5-4bdf-b4a6-009d0a657e90", ResourceVersion:"584", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"csi-node-driver-qx5np", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif051afe997d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.369 [INFO][6770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.68/32] ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.369 [INFO][6770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif051afe997d ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.370 [INFO][6770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.370 [INFO][6770] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd6da168-02f5-4bdf-b4a6-009d0a657e90", ResourceVersion:"584", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df", Pod:"csi-node-driver-qx5np", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif051afe997d", MAC:"c6:0b:16:f0:94:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:39:41.389902 containerd[2640]: 2024-12-13 14:39:41.388 [INFO][6770] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df" Namespace="calico-system" Pod="csi-node-driver-qx5np" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-csi--node--driver--qx5np-eth0"
Dec 13 14:39:41.390279 systemd[1]: run-netns-cni\x2d8daff4df\x2d2b02\x2d5324\x2dc868\x2d447272934cf1.mount: Deactivated successfully.
Dec 13 14:39:41.390357 systemd[1]: run-netns-cni\x2dd07863c2\x2dbc3b\x2dd575\x2daaf5\x2dc645f08980e5.mount: Deactivated successfully.
Dec 13 14:39:41.390401 systemd[1]: run-netns-cni\x2d85104102\x2d9b40\x2d2e76\x2d1412\x2d1a13a74f8bcf.mount: Deactivated successfully.
Dec 13 14:39:41.390446 systemd[1]: run-netns-cni\x2d68103826\x2dc620\x2d3772\x2d7047\x2d3b355fc26f6c.mount: Deactivated successfully.
Dec 13 14:39:41.390489 systemd[1]: run-netns-cni\x2db2e8af89\x2db90f\x2d263d\x2d4872\x2d1892fbe85e5d.mount: Deactivated successfully.
Dec 13 14:39:41.390529 systemd[1]: run-netns-cni\x2d56f8dcd2\x2d55b1\x2dfd42\x2d37d1\x2d2af00fbe29e4.mount: Deactivated successfully.
Dec 13 14:39:41.404992 containerd[2640]: time="2024-12-13T14:39:41.402923262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:39:41.404992 containerd[2640]: time="2024-12-13T14:39:41.402982113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:39:41.404992 containerd[2640]: time="2024-12-13T14:39:41.402992555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.404992 containerd[2640]: time="2024-12-13T14:39:41.403067370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:39:41.424890 systemd[1]: Started cri-containerd-0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df.scope - libcontainer container 0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df.
Dec 13 14:39:41.442625 containerd[2640]: time="2024-12-13T14:39:41.442592437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qx5np,Uid:fd6da168-02f5-4bdf-b4a6-009d0a657e90,Namespace:calico-system,Attempt:4,} returns sandbox id \"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df\""
Dec 13 14:39:41.472034 systemd-networkd[2539]: cali474cff5b3aa: Link UP
Dec 13 14:39:41.472211 systemd-networkd[2539]: cali474cff5b3aa: Gained carrier
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:40.981 [INFO][6760] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:40.996 [INFO][6760] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0 calico-apiserver-d65999df6- calico-apiserver 0b0d5e48-cd22-404d-bff6-5a51346045ef 644 0 2024-12-13 14:39:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d65999df6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 calico-apiserver-d65999df6-fzfgz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali474cff5b3aa [] []}} ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:40.996 [INFO][6760] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" HandleID="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.041 [INFO][6920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" HandleID="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000583d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-f374b16159", "pod":"calico-apiserver-d65999df6-fzfgz", "timestamp":"2024-12-13 14:39:41.030060678 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.041 [INFO][6920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.367 [INFO][6920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.368 [INFO][6920] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159'
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.452 [INFO][6920] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.455 [INFO][6920] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.458 [INFO][6920] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.460 [INFO][6920] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.462 [INFO][6920] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.462 [INFO][6920] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.463 [INFO][6920] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.465 [INFO][6920] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6920] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.69/26] block=192.168.124.64/26 handle="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6920] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.69/26] handle="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" host="ci-4186.0.0-a-f374b16159"
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:39:41.478941 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.69/26] IPv6=[] ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" HandleID="k8s-pod-network.e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0"
Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.470 [INFO][6760] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0", GenerateName:"calico-apiserver-d65999df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b0d5e48-cd22-404d-bff6-5a51346045ef", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver",
"k8s-app":"calico-apiserver", "pod-template-hash":"d65999df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"calico-apiserver-d65999df6-fzfgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali474cff5b3aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.471 [INFO][6760] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.69/32] ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.471 [INFO][6760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali474cff5b3aa ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.472 [INFO][6760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" 
WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.472 [INFO][6760] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0", GenerateName:"calico-apiserver-d65999df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b0d5e48-cd22-404d-bff6-5a51346045ef", ResourceVersion:"644", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d65999df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b", Pod:"calico-apiserver-d65999df6-fzfgz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali474cff5b3aa", MAC:"d6:ed:47:cc:9f:67", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.479358 containerd[2640]: 2024-12-13 14:39:41.477 [INFO][6760] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-fzfgz" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--fzfgz-eth0" Dec 13 14:39:41.494566 containerd[2640]: time="2024-12-13T14:39:41.494490556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:39:41.494566 containerd[2640]: time="2024-12-13T14:39:41.494558729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:39:41.494611 containerd[2640]: time="2024-12-13T14:39:41.494570892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.494674 containerd[2640]: time="2024-12-13T14:39:41.494654988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.515895 systemd[1]: Started cri-containerd-e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b.scope - libcontainer container e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b. 
Dec 13 14:39:41.539242 containerd[2640]: time="2024-12-13T14:39:41.539210116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-fzfgz,Uid:0b0d5e48-cd22-404d-bff6-5a51346045ef,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b\"" Dec 13 14:39:41.572459 systemd-networkd[2539]: cali9ddbb8c32c1: Link UP Dec 13 14:39:41.572646 systemd-networkd[2539]: cali9ddbb8c32c1: Gained carrier Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:40.986 [INFO][6782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6782] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0 calico-apiserver-d65999df6- calico-apiserver c2385ebc-0f6a-4f2a-89d0-c0db311721a5 645 0 2024-12-13 14:39:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d65999df6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-f374b16159 calico-apiserver-d65999df6-l48rl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ddbb8c32c1 [] []}} ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.000 [INFO][6782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" 
WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.030 [INFO][6934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" HandleID="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.044 [INFO][6934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" HandleID="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-f374b16159", "pod":"calico-apiserver-d65999df6-l48rl", "timestamp":"2024-12-13 14:39:41.030053717 +0000 UTC"}, Hostname:"ci-4186.0.0-a-f374b16159", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.044 [INFO][6934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.469 [INFO][6934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-f374b16159' Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.553 [INFO][6934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.556 [INFO][6934] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.559 [INFO][6934] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.560 [INFO][6934] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.562 [INFO][6934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.562 [INFO][6934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.563 [INFO][6934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8 Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.565 [INFO][6934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.569 [INFO][6934] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.124.70/26] block=192.168.124.64/26 handle="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.569 [INFO][6934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.70/26] handle="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" host="ci-4186.0.0-a-f374b16159" Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.569 [INFO][6934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:39:41.579442 containerd[2640]: 2024-12-13 14:39:41.569 [INFO][6934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.70/26] IPv6=[] ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" HandleID="k8s-pod-network.e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Workload="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.571 [INFO][6782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0", GenerateName:"calico-apiserver-d65999df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2385ebc-0f6a-4f2a-89d0-c0db311721a5", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"d65999df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"", Pod:"calico-apiserver-d65999df6-l48rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddbb8c32c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.571 [INFO][6782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.70/32] ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.571 [INFO][6782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ddbb8c32c1 ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.572 [INFO][6782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" 
WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.572 [INFO][6782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0", GenerateName:"calico-apiserver-d65999df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c2385ebc-0f6a-4f2a-89d0-c0db311721a5", ResourceVersion:"645", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d65999df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-f374b16159", ContainerID:"e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8", Pod:"calico-apiserver-d65999df6-l48rl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ddbb8c32c1", MAC:"82:b9:e2:31:8b:4f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:39:41.579900 containerd[2640]: 2024-12-13 14:39:41.578 [INFO][6782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8" Namespace="calico-apiserver" Pod="calico-apiserver-d65999df6-l48rl" WorkloadEndpoint="ci--4186.0.0--a--f374b16159-k8s-calico--apiserver--d65999df6--l48rl-eth0" Dec 13 14:39:41.593520 containerd[2640]: time="2024-12-13T14:39:41.593455413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:39:41.593546 containerd[2640]: time="2024-12-13T14:39:41.593519185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:39:41.593546 containerd[2640]: time="2024-12-13T14:39:41.593532268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.593635 containerd[2640]: time="2024-12-13T14:39:41.593614324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:39:41.623826 systemd[1]: Started cri-containerd-e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8.scope - libcontainer container e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8. 
Dec 13 14:39:41.647016 containerd[2640]: time="2024-12-13T14:39:41.646986451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65999df6-l48rl,Uid:c2385ebc-0f6a-4f2a-89d0-c0db311721a5,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8\"" Dec 13 14:39:41.980740 kubelet[4171]: I1213 14:39:41.980700 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:41.985698 kubelet[4171]: I1213 14:39:41.985661 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6w4ss" podStartSLOduration=15.985650006 podStartE2EDuration="15.985650006s" podCreationTimestamp="2024-12-13 14:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:41.98525993 +0000 UTC m=+22.155741294" watchObservedRunningTime="2024-12-13 14:39:41.985650006 +0000 UTC m=+22.156131370" Dec 13 14:39:41.992300 kubelet[4171]: I1213 14:39:41.992257 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2rmdm" podStartSLOduration=15.992244292 podStartE2EDuration="15.992244292s" podCreationTimestamp="2024-12-13 14:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:39:41.992045813 +0000 UTC m=+22.162527177" watchObservedRunningTime="2024-12-13 14:39:41.992244292 +0000 UTC m=+22.162725656" Dec 13 14:39:42.181076 containerd[2640]: time="2024-12-13T14:39:42.181034049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.181390 containerd[2640]: time="2024-12-13T14:39:42.181044091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: 
active requests=0, bytes read=31953828" Dec 13 14:39:42.181804 containerd[2640]: time="2024-12-13T14:39:42.181787268Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.183438 containerd[2640]: time="2024-12-13T14:39:42.183421131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.184095 containerd[2640]: time="2024-12-13T14:39:42.184073492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 855.049399ms" Dec 13 14:39:42.184125 containerd[2640]: time="2024-12-13T14:39:42.184101177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 14:39:42.184978 containerd[2640]: time="2024-12-13T14:39:42.184963857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:39:42.189440 containerd[2640]: time="2024-12-13T14:39:42.189417281Z" level=info msg="CreateContainer within sandbox \"6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:39:42.194424 containerd[2640]: time="2024-12-13T14:39:42.194394843Z" level=info msg="CreateContainer within sandbox \"6dbf85b9a59299b5b05de0929bb98098a098bdd7cba59f1ac069a5ddba9dc736\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} 
returns container id \"b40467c87b0c4dcf8c07b17de096038af5f8407390bc33759022b85d94a6443d\"" Dec 13 14:39:42.194726 containerd[2640]: time="2024-12-13T14:39:42.194695298Z" level=info msg="StartContainer for \"b40467c87b0c4dcf8c07b17de096038af5f8407390bc33759022b85d94a6443d\"" Dec 13 14:39:42.226905 systemd[1]: Started cri-containerd-b40467c87b0c4dcf8c07b17de096038af5f8407390bc33759022b85d94a6443d.scope - libcontainer container b40467c87b0c4dcf8c07b17de096038af5f8407390bc33759022b85d94a6443d. Dec 13 14:39:42.263209 containerd[2640]: time="2024-12-13T14:39:42.263179738Z" level=info msg="StartContainer for \"b40467c87b0c4dcf8c07b17de096038af5f8407390bc33759022b85d94a6443d\" returns successfully" Dec 13 14:39:42.569860 systemd-networkd[2539]: cali474cff5b3aa: Gained IPv6LL Dec 13 14:39:42.693209 containerd[2640]: time="2024-12-13T14:39:42.693161748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.693295 containerd[2640]: time="2024-12-13T14:39:42.693180912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 14:39:42.693905 containerd[2640]: time="2024-12-13T14:39:42.693885442Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.695578 containerd[2640]: time="2024-12-13T14:39:42.695558552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:42.696242 containerd[2640]: time="2024-12-13T14:39:42.696217194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 511.230134ms" Dec 13 14:39:42.696269 containerd[2640]: time="2024-12-13T14:39:42.696245599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 14:39:42.697046 containerd[2640]: time="2024-12-13T14:39:42.697023183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:39:42.697841 containerd[2640]: time="2024-12-13T14:39:42.697817370Z" level=info msg="CreateContainer within sandbox \"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:39:42.704007 containerd[2640]: time="2024-12-13T14:39:42.703976591Z" level=info msg="CreateContainer within sandbox \"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"11fec7a3c677a3e51602c90f2ef1d0ff8d02330199fbf3c7603d5ebb0cca8ad6\"" Dec 13 14:39:42.704365 containerd[2640]: time="2024-12-13T14:39:42.704335697Z" level=info msg="StartContainer for \"11fec7a3c677a3e51602c90f2ef1d0ff8d02330199fbf3c7603d5ebb0cca8ad6\"" Dec 13 14:39:42.736905 systemd[1]: Started cri-containerd-11fec7a3c677a3e51602c90f2ef1d0ff8d02330199fbf3c7603d5ebb0cca8ad6.scope - libcontainer container 11fec7a3c677a3e51602c90f2ef1d0ff8d02330199fbf3c7603d5ebb0cca8ad6. 
Dec 13 14:39:42.757377 containerd[2640]: time="2024-12-13T14:39:42.757347152Z" level=info msg="StartContainer for \"11fec7a3c677a3e51602c90f2ef1d0ff8d02330199fbf3c7603d5ebb0cca8ad6\" returns successfully" Dec 13 14:39:42.761763 systemd-networkd[2539]: cali4364b4dd6c6: Gained IPv6LL Dec 13 14:39:42.762129 systemd-networkd[2539]: cali3b7da3bbdcb: Gained IPv6LL Dec 13 14:39:42.825747 systemd-networkd[2539]: cali9ddbb8c32c1: Gained IPv6LL Dec 13 14:39:42.886739 kubelet[4171]: I1213 14:39:42.886693 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:42.992055 kubelet[4171]: I1213 14:39:42.992014 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79bc4ccb78-gx2lg" podStartSLOduration=10.135965374 podStartE2EDuration="10.992000638s" podCreationTimestamp="2024-12-13 14:39:32 +0000 UTC" firstStartedPulling="2024-12-13 14:39:41.328781085 +0000 UTC m=+21.499262449" lastFinishedPulling="2024-12-13 14:39:42.184816349 +0000 UTC m=+22.355297713" observedRunningTime="2024-12-13 14:39:42.991876255 +0000 UTC m=+23.162357579" watchObservedRunningTime="2024-12-13 14:39:42.992000638 +0000 UTC m=+23.162481962" Dec 13 14:39:43.081852 systemd-networkd[2539]: cali7af54411b74: Gained IPv6LL Dec 13 14:39:43.145794 systemd-networkd[2539]: calif051afe997d: Gained IPv6LL Dec 13 14:39:43.584783 containerd[2640]: time="2024-12-13T14:39:43.584745525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:43.585081 containerd[2640]: time="2024-12-13T14:39:43.584787212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 14:39:43.585424 containerd[2640]: time="2024-12-13T14:39:43.585404401Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:43.587197 containerd[2640]: time="2024-12-13T14:39:43.587169672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:43.587880 containerd[2640]: time="2024-12-13T14:39:43.587855192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 890.801083ms" Dec 13 14:39:43.587915 containerd[2640]: time="2024-12-13T14:39:43.587886638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:39:43.588674 containerd[2640]: time="2024-12-13T14:39:43.588657533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:39:43.589632 containerd[2640]: time="2024-12-13T14:39:43.589611741Z" level=info msg="CreateContainer within sandbox \"e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:39:43.594460 containerd[2640]: time="2024-12-13T14:39:43.594434590Z" level=info msg="CreateContainer within sandbox \"e48725db7a95ae0ab0f004d39b156ebc5cf58a7efcd5fa5eb4580e94b816a26b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccaf2d178a0b3d5d5cfff82d1da0fec6a9510a1919a7ab913ea4335fd44cbcff\"" Dec 13 14:39:43.594794 containerd[2640]: time="2024-12-13T14:39:43.594773289Z" level=info msg="StartContainer for \"ccaf2d178a0b3d5d5cfff82d1da0fec6a9510a1919a7ab913ea4335fd44cbcff\"" Dec 
13 14:39:43.623877 systemd[1]: Started cri-containerd-ccaf2d178a0b3d5d5cfff82d1da0fec6a9510a1919a7ab913ea4335fd44cbcff.scope - libcontainer container ccaf2d178a0b3d5d5cfff82d1da0fec6a9510a1919a7ab913ea4335fd44cbcff. Dec 13 14:39:43.648013 containerd[2640]: time="2024-12-13T14:39:43.647983770Z" level=info msg="StartContainer for \"ccaf2d178a0b3d5d5cfff82d1da0fec6a9510a1919a7ab913ea4335fd44cbcff\" returns successfully" Dec 13 14:39:43.684739 containerd[2640]: time="2024-12-13T14:39:43.684696109Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:43.684816 containerd[2640]: time="2024-12-13T14:39:43.684782524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 14:39:43.687300 containerd[2640]: time="2024-12-13T14:39:43.687270402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 98.583263ms" Dec 13 14:39:43.687334 containerd[2640]: time="2024-12-13T14:39:43.687302727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:39:43.688008 containerd[2640]: time="2024-12-13T14:39:43.687988968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:39:43.688934 containerd[2640]: time="2024-12-13T14:39:43.688906929Z" level=info msg="CreateContainer within sandbox \"e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:39:43.694050 
containerd[2640]: time="2024-12-13T14:39:43.694021389Z" level=info msg="CreateContainer within sandbox \"e5bb7da82a7c57236909354d3fbf3272e2c96a3ef2387d24498b78f78f4db7d8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"04b154d9cc51f7108b89d286e34c0e350bbd6e3e7bfe608ea60be75542c1f2ab\"" Dec 13 14:39:43.694389 containerd[2640]: time="2024-12-13T14:39:43.694362809Z" level=info msg="StartContainer for \"04b154d9cc51f7108b89d286e34c0e350bbd6e3e7bfe608ea60be75542c1f2ab\"" Dec 13 14:39:43.723812 systemd[1]: Started cri-containerd-04b154d9cc51f7108b89d286e34c0e350bbd6e3e7bfe608ea60be75542c1f2ab.scope - libcontainer container 04b154d9cc51f7108b89d286e34c0e350bbd6e3e7bfe608ea60be75542c1f2ab. Dec 13 14:39:43.748755 containerd[2640]: time="2024-12-13T14:39:43.748722612Z" level=info msg="StartContainer for \"04b154d9cc51f7108b89d286e34c0e350bbd6e3e7bfe608ea60be75542c1f2ab\" returns successfully" Dec 13 14:39:43.941733 kernel: bpftool[7963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 14:39:43.991104 kubelet[4171]: I1213 14:39:43.991072 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:43.997236 kubelet[4171]: I1213 14:39:43.997192 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d65999df6-fzfgz" podStartSLOduration=10.948749739 podStartE2EDuration="12.997180722s" podCreationTimestamp="2024-12-13 14:39:31 +0000 UTC" firstStartedPulling="2024-12-13 14:39:41.540048719 +0000 UTC m=+21.710530043" lastFinishedPulling="2024-12-13 14:39:43.588479662 +0000 UTC m=+23.758961026" observedRunningTime="2024-12-13 14:39:43.997020094 +0000 UTC m=+24.167501458" watchObservedRunningTime="2024-12-13 14:39:43.997180722 +0000 UTC m=+24.167662086" Dec 13 14:39:44.004063 kubelet[4171]: I1213 14:39:44.004027 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d65999df6-l48rl" 
podStartSLOduration=10.964015221 podStartE2EDuration="13.004015825s" podCreationTimestamp="2024-12-13 14:39:31 +0000 UTC" firstStartedPulling="2024-12-13 14:39:41.647836457 +0000 UTC m=+21.818317781" lastFinishedPulling="2024-12-13 14:39:43.687837021 +0000 UTC m=+23.858318385" observedRunningTime="2024-12-13 14:39:44.003723136 +0000 UTC m=+24.174204500" watchObservedRunningTime="2024-12-13 14:39:44.004015825 +0000 UTC m=+24.174497189" Dec 13 14:39:44.104026 systemd-networkd[2539]: vxlan.calico: Link UP Dec 13 14:39:44.104030 systemd-networkd[2539]: vxlan.calico: Gained carrier Dec 13 14:39:44.156145 containerd[2640]: time="2024-12-13T14:39:44.156108866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:44.156249 containerd[2640]: time="2024-12-13T14:39:44.156130230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 14:39:44.156950 containerd[2640]: time="2024-12-13T14:39:44.156929523Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:44.158961 containerd[2640]: time="2024-12-13T14:39:44.158936979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:39:44.159495 containerd[2640]: time="2024-12-13T14:39:44.159466068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 471.450455ms" Dec 13 14:39:44.159523 containerd[2640]: time="2024-12-13T14:39:44.159501954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 14:39:44.161072 containerd[2640]: time="2024-12-13T14:39:44.161051693Z" level=info msg="CreateContainer within sandbox \"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:39:44.166676 containerd[2640]: time="2024-12-13T14:39:44.166650189Z" level=info msg="CreateContainer within sandbox \"0acecf0faee545805daddae7e4231ee2a834728118914d1c2429063fb3d410df\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65cffb2dfedf0fdf88a218253730d8c5448a63209e013f134f44140e77add138\"" Dec 13 14:39:44.167003 containerd[2640]: time="2024-12-13T14:39:44.166977964Z" level=info msg="StartContainer for \"65cffb2dfedf0fdf88a218253730d8c5448a63209e013f134f44140e77add138\"" Dec 13 14:39:44.202925 systemd[1]: Started cri-containerd-65cffb2dfedf0fdf88a218253730d8c5448a63209e013f134f44140e77add138.scope - libcontainer container 65cffb2dfedf0fdf88a218253730d8c5448a63209e013f134f44140e77add138. 
Dec 13 14:39:44.223621 containerd[2640]: time="2024-12-13T14:39:44.223590034Z" level=info msg="StartContainer for \"65cffb2dfedf0fdf88a218253730d8c5448a63209e013f134f44140e77add138\" returns successfully" Dec 13 14:39:44.942077 kubelet[4171]: I1213 14:39:44.942049 4171 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:39:44.942077 kubelet[4171]: I1213 14:39:44.942076 4171 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:39:44.994367 kubelet[4171]: I1213 14:39:44.994343 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:44.994367 kubelet[4171]: I1213 14:39:44.994359 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:39:45.003908 kubelet[4171]: I1213 14:39:45.003867 4171 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qx5np" podStartSLOduration=10.287204676 podStartE2EDuration="13.003854407s" podCreationTimestamp="2024-12-13 14:39:32 +0000 UTC" firstStartedPulling="2024-12-13 14:39:41.443411476 +0000 UTC m=+21.613892840" lastFinishedPulling="2024-12-13 14:39:44.160061207 +0000 UTC m=+24.330542571" observedRunningTime="2024-12-13 14:39:45.003579443 +0000 UTC m=+25.174060807" watchObservedRunningTime="2024-12-13 14:39:45.003854407 +0000 UTC m=+25.174335771" Dec 13 14:39:45.961911 systemd-networkd[2539]: vxlan.calico: Gained IPv6LL Dec 13 14:39:53.406112 kubelet[4171]: I1213 14:39:53.406034 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:40:05.019087 kubelet[4171]: I1213 14:40:05.019007 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:40:05.892871 kubelet[4171]: I1213 14:40:05.892832 4171 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Dec 13 14:40:10.876978 kubelet[4171]: I1213 14:40:10.876935 4171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:40:19.891362 containerd[2640]: time="2024-12-13T14:40:19.891317342Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:40:19.891751 containerd[2640]: time="2024-12-13T14:40:19.891421188Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully" Dec 13 14:40:19.891751 containerd[2640]: time="2024-12-13T14:40:19.891432868Z" level=info msg="StopPodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully" Dec 13 14:40:19.891803 containerd[2640]: time="2024-12-13T14:40:19.891753285Z" level=info msg="RemovePodSandbox for \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:40:19.891803 containerd[2640]: time="2024-12-13T14:40:19.891784406Z" level=info msg="Forcibly stopping sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\"" Dec 13 14:40:19.891873 containerd[2640]: time="2024-12-13T14:40:19.891860450Z" level=info msg="TearDown network for sandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" successfully" Dec 13 14:40:19.893341 containerd[2640]: time="2024-12-13T14:40:19.893319165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.893375 containerd[2640]: time="2024-12-13T14:40:19.893366607Z" level=info msg="RemovePodSandbox \"c53b88bf774dec38ab5d7f0b3592c1b44adee9b8de63cdd3552802310b347e40\" returns successfully" Dec 13 14:40:19.893679 containerd[2640]: time="2024-12-13T14:40:19.893661702Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" Dec 13 14:40:19.893747 containerd[2640]: time="2024-12-13T14:40:19.893735706Z" level=info msg="TearDown network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" successfully" Dec 13 14:40:19.893771 containerd[2640]: time="2024-12-13T14:40:19.893747427Z" level=info msg="StopPodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" returns successfully" Dec 13 14:40:19.893954 containerd[2640]: time="2024-12-13T14:40:19.893938156Z" level=info msg="RemovePodSandbox for \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" Dec 13 14:40:19.893979 containerd[2640]: time="2024-12-13T14:40:19.893960037Z" level=info msg="Forcibly stopping sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\"" Dec 13 14:40:19.894032 containerd[2640]: time="2024-12-13T14:40:19.894022281Z" level=info msg="TearDown network for sandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" successfully" Dec 13 14:40:19.895221 containerd[2640]: time="2024-12-13T14:40:19.895200061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.895252 containerd[2640]: time="2024-12-13T14:40:19.895242143Z" level=info msg="RemovePodSandbox \"3a3cb5bec44ea8d2dfb38eb1773ab7ab1d149817b10a0bd33afb2a74f3e0ce49\" returns successfully" Dec 13 14:40:19.895479 containerd[2640]: time="2024-12-13T14:40:19.895459354Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" Dec 13 14:40:19.895557 containerd[2640]: time="2024-12-13T14:40:19.895545639Z" level=info msg="TearDown network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" successfully" Dec 13 14:40:19.895584 containerd[2640]: time="2024-12-13T14:40:19.895558119Z" level=info msg="StopPodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" returns successfully" Dec 13 14:40:19.895764 containerd[2640]: time="2024-12-13T14:40:19.895748769Z" level=info msg="RemovePodSandbox for \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" Dec 13 14:40:19.895787 containerd[2640]: time="2024-12-13T14:40:19.895768850Z" level=info msg="Forcibly stopping sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\"" Dec 13 14:40:19.895843 containerd[2640]: time="2024-12-13T14:40:19.895833653Z" level=info msg="TearDown network for sandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" successfully" Dec 13 14:40:19.897040 containerd[2640]: time="2024-12-13T14:40:19.897021754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.897073 containerd[2640]: time="2024-12-13T14:40:19.897063156Z" level=info msg="RemovePodSandbox \"747e1b3e9a7279905aa02aee685e12869b33625045b262428f34fb50d1071791\" returns successfully" Dec 13 14:40:19.897274 containerd[2640]: time="2024-12-13T14:40:19.897259006Z" level=info msg="StopPodSandbox for \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\"" Dec 13 14:40:19.897340 containerd[2640]: time="2024-12-13T14:40:19.897328130Z" level=info msg="TearDown network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" successfully" Dec 13 14:40:19.897382 containerd[2640]: time="2024-12-13T14:40:19.897339490Z" level=info msg="StopPodSandbox for \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" returns successfully" Dec 13 14:40:19.897546 containerd[2640]: time="2024-12-13T14:40:19.897528700Z" level=info msg="RemovePodSandbox for \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\"" Dec 13 14:40:19.897568 containerd[2640]: time="2024-12-13T14:40:19.897552941Z" level=info msg="Forcibly stopping sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\"" Dec 13 14:40:19.897629 containerd[2640]: time="2024-12-13T14:40:19.897618424Z" level=info msg="TearDown network for sandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" successfully" Dec 13 14:40:19.898885 containerd[2640]: time="2024-12-13T14:40:19.898865528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.898923 containerd[2640]: time="2024-12-13T14:40:19.898912771Z" level=info msg="RemovePodSandbox \"220777d6648342a92520bc1dfe96b8137dcc22525e23c9ee7bc48254d770c14d\" returns successfully" Dec 13 14:40:19.899125 containerd[2640]: time="2024-12-13T14:40:19.899110781Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:40:19.899189 containerd[2640]: time="2024-12-13T14:40:19.899179344Z" level=info msg="TearDown network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully" Dec 13 14:40:19.899210 containerd[2640]: time="2024-12-13T14:40:19.899189945Z" level=info msg="StopPodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully" Dec 13 14:40:19.899413 containerd[2640]: time="2024-12-13T14:40:19.899399195Z" level=info msg="RemovePodSandbox for \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:40:19.899435 containerd[2640]: time="2024-12-13T14:40:19.899419756Z" level=info msg="Forcibly stopping sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\"" Dec 13 14:40:19.899492 containerd[2640]: time="2024-12-13T14:40:19.899482160Z" level=info msg="TearDown network for sandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" successfully" Dec 13 14:40:19.900710 containerd[2640]: time="2024-12-13T14:40:19.900683261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.900748 containerd[2640]: time="2024-12-13T14:40:19.900735064Z" level=info msg="RemovePodSandbox \"e371a2c997910fc00b5d3c1292391d6f91b2fae86c785b76f69a65132f67fce0\" returns successfully" Dec 13 14:40:19.900971 containerd[2640]: time="2024-12-13T14:40:19.900953795Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" Dec 13 14:40:19.901054 containerd[2640]: time="2024-12-13T14:40:19.901029319Z" level=info msg="TearDown network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" successfully" Dec 13 14:40:19.901077 containerd[2640]: time="2024-12-13T14:40:19.901052040Z" level=info msg="StopPodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" returns successfully" Dec 13 14:40:19.901252 containerd[2640]: time="2024-12-13T14:40:19.901232969Z" level=info msg="RemovePodSandbox for \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" Dec 13 14:40:19.901276 containerd[2640]: time="2024-12-13T14:40:19.901256890Z" level=info msg="Forcibly stopping sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\"" Dec 13 14:40:19.901331 containerd[2640]: time="2024-12-13T14:40:19.901318773Z" level=info msg="TearDown network for sandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" successfully" Dec 13 14:40:19.902546 containerd[2640]: time="2024-12-13T14:40:19.902519075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.902599 containerd[2640]: time="2024-12-13T14:40:19.902562117Z" level=info msg="RemovePodSandbox \"f64c80b4d869eada055b4c68d331ff04b7f4bcc5a1f8997e9ff1c664312a154c\" returns successfully" Dec 13 14:40:19.902816 containerd[2640]: time="2024-12-13T14:40:19.902797889Z" level=info msg="StopPodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" Dec 13 14:40:19.902887 containerd[2640]: time="2024-12-13T14:40:19.902874093Z" level=info msg="TearDown network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" successfully" Dec 13 14:40:19.902912 containerd[2640]: time="2024-12-13T14:40:19.902884733Z" level=info msg="StopPodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" returns successfully" Dec 13 14:40:19.903103 containerd[2640]: time="2024-12-13T14:40:19.903083664Z" level=info msg="RemovePodSandbox for \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" Dec 13 14:40:19.903127 containerd[2640]: time="2024-12-13T14:40:19.903107345Z" level=info msg="Forcibly stopping sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\"" Dec 13 14:40:19.903179 containerd[2640]: time="2024-12-13T14:40:19.903166308Z" level=info msg="TearDown network for sandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" successfully" Dec 13 14:40:19.904406 containerd[2640]: time="2024-12-13T14:40:19.904382690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.904435 containerd[2640]: time="2024-12-13T14:40:19.904428292Z" level=info msg="RemovePodSandbox \"ecee77201928d6bd9336d0e347f45518defd4a8557a9785a00e842af9dda8cf7\" returns successfully" Dec 13 14:40:19.904689 containerd[2640]: time="2024-12-13T14:40:19.904671425Z" level=info msg="StopPodSandbox for \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\"" Dec 13 14:40:19.904761 containerd[2640]: time="2024-12-13T14:40:19.904748109Z" level=info msg="TearDown network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" successfully" Dec 13 14:40:19.904761 containerd[2640]: time="2024-12-13T14:40:19.904758549Z" level=info msg="StopPodSandbox for \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" returns successfully" Dec 13 14:40:19.904954 containerd[2640]: time="2024-12-13T14:40:19.904937598Z" level=info msg="RemovePodSandbox for \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\"" Dec 13 14:40:19.904978 containerd[2640]: time="2024-12-13T14:40:19.904957479Z" level=info msg="Forcibly stopping sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\"" Dec 13 14:40:19.905024 containerd[2640]: time="2024-12-13T14:40:19.905011722Z" level=info msg="TearDown network for sandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" successfully" Dec 13 14:40:19.906258 containerd[2640]: time="2024-12-13T14:40:19.906231544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.906308 containerd[2640]: time="2024-12-13T14:40:19.906277307Z" level=info msg="RemovePodSandbox \"f8b8f01197e7462357cb7a9371a114914749e814ef37d852100aa930ffedae9d\" returns successfully" Dec 13 14:40:19.906499 containerd[2640]: time="2024-12-13T14:40:19.906477597Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:40:19.906568 containerd[2640]: time="2024-12-13T14:40:19.906551841Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully" Dec 13 14:40:19.906568 containerd[2640]: time="2024-12-13T14:40:19.906562241Z" level=info msg="StopPodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully" Dec 13 14:40:19.906771 containerd[2640]: time="2024-12-13T14:40:19.906753291Z" level=info msg="RemovePodSandbox for \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:40:19.906799 containerd[2640]: time="2024-12-13T14:40:19.906774572Z" level=info msg="Forcibly stopping sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\"" Dec 13 14:40:19.906844 containerd[2640]: time="2024-12-13T14:40:19.906831775Z" level=info msg="TearDown network for sandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" successfully" Dec 13 14:40:19.908040 containerd[2640]: time="2024-12-13T14:40:19.908015676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.908072 containerd[2640]: time="2024-12-13T14:40:19.908060158Z" level=info msg="RemovePodSandbox \"d1fee4e11f831fa38ac11af0a55ac146b18b4cade8c867d3c75624cc70d0cfc1\" returns successfully" Dec 13 14:40:19.908289 containerd[2640]: time="2024-12-13T14:40:19.908272929Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" Dec 13 14:40:19.908347 containerd[2640]: time="2024-12-13T14:40:19.908335292Z" level=info msg="TearDown network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" successfully" Dec 13 14:40:19.908373 containerd[2640]: time="2024-12-13T14:40:19.908345452Z" level=info msg="StopPodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" returns successfully" Dec 13 14:40:19.908541 containerd[2640]: time="2024-12-13T14:40:19.908522902Z" level=info msg="RemovePodSandbox for \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" Dec 13 14:40:19.908565 containerd[2640]: time="2024-12-13T14:40:19.908545223Z" level=info msg="Forcibly stopping sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\"" Dec 13 14:40:19.908622 containerd[2640]: time="2024-12-13T14:40:19.908609306Z" level=info msg="TearDown network for sandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" successfully" Dec 13 14:40:19.910015 containerd[2640]: time="2024-12-13T14:40:19.909989416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.910048 containerd[2640]: time="2024-12-13T14:40:19.910036779Z" level=info msg="RemovePodSandbox \"a9886faa0a5777842e93a980958d31d19d3368d741d237e3886b1cd65a6f1ff2\" returns successfully" Dec 13 14:40:19.910259 containerd[2640]: time="2024-12-13T14:40:19.910241309Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" Dec 13 14:40:19.910330 containerd[2640]: time="2024-12-13T14:40:19.910317393Z" level=info msg="TearDown network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" successfully" Dec 13 14:40:19.910330 containerd[2640]: time="2024-12-13T14:40:19.910328114Z" level=info msg="StopPodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" returns successfully" Dec 13 14:40:19.910531 containerd[2640]: time="2024-12-13T14:40:19.910514083Z" level=info msg="RemovePodSandbox for \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" Dec 13 14:40:19.910552 containerd[2640]: time="2024-12-13T14:40:19.910534084Z" level=info msg="Forcibly stopping sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\"" Dec 13 14:40:19.910609 containerd[2640]: time="2024-12-13T14:40:19.910596407Z" level=info msg="TearDown network for sandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" successfully" Dec 13 14:40:19.911803 containerd[2640]: time="2024-12-13T14:40:19.911777548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.911834 containerd[2640]: time="2024-12-13T14:40:19.911824510Z" level=info msg="RemovePodSandbox \"4df102d62d66ebbdef8c5fdd78097bd9e081c86e0d80aaceff60409a39ca06ba\" returns successfully" Dec 13 14:40:19.912028 containerd[2640]: time="2024-12-13T14:40:19.912013040Z" level=info msg="StopPodSandbox for \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\"" Dec 13 14:40:19.912087 containerd[2640]: time="2024-12-13T14:40:19.912076763Z" level=info msg="TearDown network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" successfully" Dec 13 14:40:19.912111 containerd[2640]: time="2024-12-13T14:40:19.912086844Z" level=info msg="StopPodSandbox for \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" returns successfully" Dec 13 14:40:19.912268 containerd[2640]: time="2024-12-13T14:40:19.912255212Z" level=info msg="RemovePodSandbox for \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\"" Dec 13 14:40:19.912293 containerd[2640]: time="2024-12-13T14:40:19.912273173Z" level=info msg="Forcibly stopping sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\"" Dec 13 14:40:19.912336 containerd[2640]: time="2024-12-13T14:40:19.912325656Z" level=info msg="TearDown network for sandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" successfully" Dec 13 14:40:19.913539 containerd[2640]: time="2024-12-13T14:40:19.913516397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.913593 containerd[2640]: time="2024-12-13T14:40:19.913563039Z" level=info msg="RemovePodSandbox \"555520c450da49cdc364f5dd3edbb3eee13d2666519f8e9ef45dac011e000a96\" returns successfully" Dec 13 14:40:19.913824 containerd[2640]: time="2024-12-13T14:40:19.913810732Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:40:19.913893 containerd[2640]: time="2024-12-13T14:40:19.913882135Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully" Dec 13 14:40:19.913918 containerd[2640]: time="2024-12-13T14:40:19.913892816Z" level=info msg="StopPodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully" Dec 13 14:40:19.914088 containerd[2640]: time="2024-12-13T14:40:19.914073305Z" level=info msg="RemovePodSandbox for \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:40:19.914110 containerd[2640]: time="2024-12-13T14:40:19.914092426Z" level=info msg="Forcibly stopping sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\"" Dec 13 14:40:19.914155 containerd[2640]: time="2024-12-13T14:40:19.914144989Z" level=info msg="TearDown network for sandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" successfully" Dec 13 14:40:19.915357 containerd[2640]: time="2024-12-13T14:40:19.915335330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.915390 containerd[2640]: time="2024-12-13T14:40:19.915379652Z" level=info msg="RemovePodSandbox \"f2a3f616cb01b87ed12ead9671672108ddd3edb245ef8089ae6ef1087a520bdf\" returns successfully" Dec 13 14:40:19.915575 containerd[2640]: time="2024-12-13T14:40:19.915562141Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" Dec 13 14:40:19.915633 containerd[2640]: time="2024-12-13T14:40:19.915624064Z" level=info msg="TearDown network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" successfully" Dec 13 14:40:19.915656 containerd[2640]: time="2024-12-13T14:40:19.915633585Z" level=info msg="StopPodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" returns successfully" Dec 13 14:40:19.915837 containerd[2640]: time="2024-12-13T14:40:19.915819154Z" level=info msg="RemovePodSandbox for \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" Dec 13 14:40:19.915859 containerd[2640]: time="2024-12-13T14:40:19.915839955Z" level=info msg="Forcibly stopping sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\"" Dec 13 14:40:19.915915 containerd[2640]: time="2024-12-13T14:40:19.915903319Z" level=info msg="TearDown network for sandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" successfully" Dec 13 14:40:19.918356 containerd[2640]: time="2024-12-13T14:40:19.918316802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.918435 containerd[2640]: time="2024-12-13T14:40:19.918418567Z" level=info msg="RemovePodSandbox \"ddbf3d421471a8114f7f4698194f9ff19a3bfe8b2fb7bb6358e4a73aa1a24341\" returns successfully" Dec 13 14:40:19.919224 containerd[2640]: time="2024-12-13T14:40:19.919196247Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" Dec 13 14:40:19.919306 containerd[2640]: time="2024-12-13T14:40:19.919292092Z" level=info msg="TearDown network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" successfully" Dec 13 14:40:19.919306 containerd[2640]: time="2024-12-13T14:40:19.919303772Z" level=info msg="StopPodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" returns successfully" Dec 13 14:40:19.919581 containerd[2640]: time="2024-12-13T14:40:19.919555425Z" level=info msg="RemovePodSandbox for \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" Dec 13 14:40:19.919604 containerd[2640]: time="2024-12-13T14:40:19.919587707Z" level=info msg="Forcibly stopping sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\"" Dec 13 14:40:19.919670 containerd[2640]: time="2024-12-13T14:40:19.919658590Z" level=info msg="TearDown network for sandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" successfully" Dec 13 14:40:19.920882 containerd[2640]: time="2024-12-13T14:40:19.920857452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.920922 containerd[2640]: time="2024-12-13T14:40:19.920902094Z" level=info msg="RemovePodSandbox \"2a69157a18cf61126dc973725a1c71c230695489e7928122433e292fd4561aad\" returns successfully" Dec 13 14:40:19.921134 containerd[2640]: time="2024-12-13T14:40:19.921110305Z" level=info msg="StopPodSandbox for \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\"" Dec 13 14:40:19.921207 containerd[2640]: time="2024-12-13T14:40:19.921194309Z" level=info msg="TearDown network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" successfully" Dec 13 14:40:19.921229 containerd[2640]: time="2024-12-13T14:40:19.921205029Z" level=info msg="StopPodSandbox for \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" returns successfully" Dec 13 14:40:19.921471 containerd[2640]: time="2024-12-13T14:40:19.921453082Z" level=info msg="RemovePodSandbox for \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\"" Dec 13 14:40:19.921492 containerd[2640]: time="2024-12-13T14:40:19.921476723Z" level=info msg="Forcibly stopping sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\"" Dec 13 14:40:19.921547 containerd[2640]: time="2024-12-13T14:40:19.921536846Z" level=info msg="TearDown network for sandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" successfully" Dec 13 14:40:19.922798 containerd[2640]: time="2024-12-13T14:40:19.922775750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.922829 containerd[2640]: time="2024-12-13T14:40:19.922819352Z" level=info msg="RemovePodSandbox \"248d91735e5c1a87a8225a623ef09127e2dad4d1f395cd923d7fa3c3862e6f65\" returns successfully" Dec 13 14:40:19.923064 containerd[2640]: time="2024-12-13T14:40:19.923044243Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:40:19.923137 containerd[2640]: time="2024-12-13T14:40:19.923126608Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully" Dec 13 14:40:19.923160 containerd[2640]: time="2024-12-13T14:40:19.923137928Z" level=info msg="StopPodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully" Dec 13 14:40:19.923319 containerd[2640]: time="2024-12-13T14:40:19.923304577Z" level=info msg="RemovePodSandbox for \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:40:19.923339 containerd[2640]: time="2024-12-13T14:40:19.923326938Z" level=info msg="Forcibly stopping sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\"" Dec 13 14:40:19.923392 containerd[2640]: time="2024-12-13T14:40:19.923382221Z" level=info msg="TearDown network for sandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" successfully" Dec 13 14:40:19.924628 containerd[2640]: time="2024-12-13T14:40:19.924599403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.924684 containerd[2640]: time="2024-12-13T14:40:19.924646445Z" level=info msg="RemovePodSandbox \"49d51100a22861fa53a47e1cf07bc75f33b49daff9d3652bb5dec27991fc5ce2\" returns successfully" Dec 13 14:40:19.924888 containerd[2640]: time="2024-12-13T14:40:19.924867857Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" Dec 13 14:40:19.924959 containerd[2640]: time="2024-12-13T14:40:19.924944861Z" level=info msg="TearDown network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" successfully" Dec 13 14:40:19.924959 containerd[2640]: time="2024-12-13T14:40:19.924956661Z" level=info msg="StopPodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" returns successfully" Dec 13 14:40:19.925204 containerd[2640]: time="2024-12-13T14:40:19.925185593Z" level=info msg="RemovePodSandbox for \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" Dec 13 14:40:19.925228 containerd[2640]: time="2024-12-13T14:40:19.925206634Z" level=info msg="Forcibly stopping sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\"" Dec 13 14:40:19.925285 containerd[2640]: time="2024-12-13T14:40:19.925272917Z" level=info msg="TearDown network for sandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" successfully" Dec 13 14:40:19.926471 containerd[2640]: time="2024-12-13T14:40:19.926446977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.926506 containerd[2640]: time="2024-12-13T14:40:19.926493580Z" level=info msg="RemovePodSandbox \"d039f3efd03d5c4352d9a3e54efade9932b44b9c0dbd0a7598fc7baeabef567c\" returns successfully" Dec 13 14:40:19.926777 containerd[2640]: time="2024-12-13T14:40:19.926760033Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" Dec 13 14:40:19.926842 containerd[2640]: time="2024-12-13T14:40:19.926830117Z" level=info msg="TearDown network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" successfully" Dec 13 14:40:19.926865 containerd[2640]: time="2024-12-13T14:40:19.926841917Z" level=info msg="StopPodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" returns successfully" Dec 13 14:40:19.927034 containerd[2640]: time="2024-12-13T14:40:19.927019327Z" level=info msg="RemovePodSandbox for \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" Dec 13 14:40:19.927059 containerd[2640]: time="2024-12-13T14:40:19.927041248Z" level=info msg="Forcibly stopping sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\"" Dec 13 14:40:19.927114 containerd[2640]: time="2024-12-13T14:40:19.927103611Z" level=info msg="TearDown network for sandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" successfully" Dec 13 14:40:19.928331 containerd[2640]: time="2024-12-13T14:40:19.928305632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.928380 containerd[2640]: time="2024-12-13T14:40:19.928351755Z" level=info msg="RemovePodSandbox \"992e4cf44e17ffd351f7908922855de3ccb01406c5b4d089ba891f135f81b622\" returns successfully" Dec 13 14:40:19.928566 containerd[2640]: time="2024-12-13T14:40:19.928549685Z" level=info msg="StopPodSandbox for \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\"" Dec 13 14:40:19.928623 containerd[2640]: time="2024-12-13T14:40:19.928612888Z" level=info msg="TearDown network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" successfully" Dec 13 14:40:19.928646 containerd[2640]: time="2024-12-13T14:40:19.928622808Z" level=info msg="StopPodSandbox for \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" returns successfully" Dec 13 14:40:19.928901 containerd[2640]: time="2024-12-13T14:40:19.928882182Z" level=info msg="RemovePodSandbox for \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\"" Dec 13 14:40:19.928925 containerd[2640]: time="2024-12-13T14:40:19.928907303Z" level=info msg="Forcibly stopping sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\"" Dec 13 14:40:19.928985 containerd[2640]: time="2024-12-13T14:40:19.928974786Z" level=info msg="TearDown network for sandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" successfully" Dec 13 14:40:19.943442 containerd[2640]: time="2024-12-13T14:40:19.943413724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.943469 containerd[2640]: time="2024-12-13T14:40:19.943462807Z" level=info msg="RemovePodSandbox \"c81fa7b487d5f7535698254dc55ae70ffa7cae437a19d445310f917242d793e0\" returns successfully" Dec 13 14:40:19.943788 containerd[2640]: time="2024-12-13T14:40:19.943766062Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:40:19.943865 containerd[2640]: time="2024-12-13T14:40:19.943852347Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully" Dec 13 14:40:19.943890 containerd[2640]: time="2024-12-13T14:40:19.943863227Z" level=info msg="StopPodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully" Dec 13 14:40:19.944095 containerd[2640]: time="2024-12-13T14:40:19.944076678Z" level=info msg="RemovePodSandbox for \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:40:19.944118 containerd[2640]: time="2024-12-13T14:40:19.944098799Z" level=info msg="Forcibly stopping sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\"" Dec 13 14:40:19.944177 containerd[2640]: time="2024-12-13T14:40:19.944164722Z" level=info msg="TearDown network for sandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" successfully" Dec 13 14:40:19.945378 containerd[2640]: time="2024-12-13T14:40:19.945350223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.945429 containerd[2640]: time="2024-12-13T14:40:19.945400786Z" level=info msg="RemovePodSandbox \"a7b76401cf0cf5d72a30fabf3b2d036ca4e4bbaeb908d39c1596e1d61c22681e\" returns successfully" Dec 13 14:40:19.945621 containerd[2640]: time="2024-12-13T14:40:19.945601876Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" Dec 13 14:40:19.945685 containerd[2640]: time="2024-12-13T14:40:19.945673280Z" level=info msg="TearDown network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" successfully" Dec 13 14:40:19.945709 containerd[2640]: time="2024-12-13T14:40:19.945683480Z" level=info msg="StopPodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" returns successfully" Dec 13 14:40:19.945959 containerd[2640]: time="2024-12-13T14:40:19.945942853Z" level=info msg="RemovePodSandbox for \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" Dec 13 14:40:19.945984 containerd[2640]: time="2024-12-13T14:40:19.945963254Z" level=info msg="Forcibly stopping sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\"" Dec 13 14:40:19.946037 containerd[2640]: time="2024-12-13T14:40:19.946027618Z" level=info msg="TearDown network for sandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" successfully" Dec 13 14:40:19.947209 containerd[2640]: time="2024-12-13T14:40:19.947186597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.947244 containerd[2640]: time="2024-12-13T14:40:19.947233319Z" level=info msg="RemovePodSandbox \"a274db60ca6b0592129e1f89e0e75eaa9c5d087f63b5b071c4c623d0d15b8487\" returns successfully" Dec 13 14:40:19.947476 containerd[2640]: time="2024-12-13T14:40:19.947457851Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" Dec 13 14:40:19.947545 containerd[2640]: time="2024-12-13T14:40:19.947534455Z" level=info msg="TearDown network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" successfully" Dec 13 14:40:19.947568 containerd[2640]: time="2024-12-13T14:40:19.947544935Z" level=info msg="StopPodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" returns successfully" Dec 13 14:40:19.947766 containerd[2640]: time="2024-12-13T14:40:19.947750826Z" level=info msg="RemovePodSandbox for \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" Dec 13 14:40:19.947787 containerd[2640]: time="2024-12-13T14:40:19.947772267Z" level=info msg="Forcibly stopping sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\"" Dec 13 14:40:19.947849 containerd[2640]: time="2024-12-13T14:40:19.947837510Z" level=info msg="TearDown network for sandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" successfully" Dec 13 14:40:19.949020 containerd[2640]: time="2024-12-13T14:40:19.948995969Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.949061 containerd[2640]: time="2024-12-13T14:40:19.949043892Z" level=info msg="RemovePodSandbox \"1a0d46e6654d7bd5b12eaf368a086a69947b18c04dc675ad000727d7a964ba56\" returns successfully" Dec 13 14:40:19.949280 containerd[2640]: time="2024-12-13T14:40:19.949264823Z" level=info msg="StopPodSandbox for \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\"" Dec 13 14:40:19.949352 containerd[2640]: time="2024-12-13T14:40:19.949341467Z" level=info msg="TearDown network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" successfully" Dec 13 14:40:19.949374 containerd[2640]: time="2024-12-13T14:40:19.949352788Z" level=info msg="StopPodSandbox for \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" returns successfully" Dec 13 14:40:19.949559 containerd[2640]: time="2024-12-13T14:40:19.949544117Z" level=info msg="RemovePodSandbox for \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\"" Dec 13 14:40:19.949583 containerd[2640]: time="2024-12-13T14:40:19.949565918Z" level=info msg="Forcibly stopping sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\"" Dec 13 14:40:19.949632 containerd[2640]: time="2024-12-13T14:40:19.949622121Z" level=info msg="TearDown network for sandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" successfully" Dec 13 14:40:19.950806 containerd[2640]: time="2024-12-13T14:40:19.950785421Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 14:40:19.950834 containerd[2640]: time="2024-12-13T14:40:19.950825143Z" level=info msg="RemovePodSandbox \"1ed0b4378dac60065ca3c5fe1e3ecbe354d7ef3c0a947f5e0ed5c99cac36133a\" returns successfully" Dec 13 14:40:27.097050 systemd[1]: Started sshd@9-147.28.228.38:22-218.92.0.158:38264.service - OpenSSH per-connection server daemon (218.92.0.158:38264). Dec 13 14:40:29.835995 sshd-session[8513]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:40:31.726835 sshd[8511]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:40:32.177054 sshd-session[8514]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:40:34.007764 sshd[8511]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:40:34.461784 sshd-session[8515]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:40:37.039809 sshd[8511]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:40:37.264456 sshd[8511]: Received disconnect from 218.92.0.158 port 38264:11: [preauth] Dec 13 14:40:37.264456 sshd[8511]: Disconnected from authenticating user root 218.92.0.158 port 38264 [preauth] Dec 13 14:40:37.266642 systemd[1]: sshd@9-147.28.228.38:22-218.92.0.158:38264.service: Deactivated successfully. Dec 13 14:42:28.610161 systemd[1]: Started sshd@10-147.28.228.38:22-218.92.0.158:50755.service - OpenSSH per-connection server daemon (218.92.0.158:50755). 
Dec 13 14:42:31.080436 sshd-session[8841]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:42:32.916423 sshd[8839]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:42:33.325462 sshd-session[8842]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:42:35.574391 sshd[8839]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:42:35.980245 sshd-session[8863]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:42:37.835733 sshd[8839]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:42:38.038350 sshd[8839]: Received disconnect from 218.92.0.158 port 50755:11: [preauth] Dec 13 14:42:38.038350 sshd[8839]: Disconnected from authenticating user root 218.92.0.158 port 50755 [preauth] Dec 13 14:42:38.040537 systemd[1]: sshd@10-147.28.228.38:22-218.92.0.158:50755.service: Deactivated successfully. Dec 13 14:44:32.760131 systemd[1]: Started sshd@11-147.28.228.38:22-218.92.0.158:32309.service - OpenSSH per-connection server daemon (218.92.0.158:32309). 
Dec 13 14:44:35.232692 sshd-session[9169]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:44:36.892877 sshd[9160]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:44:37.298661 sshd-session[9204]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:44:38.903226 sshd[9160]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:44:39.311139 sshd-session[9205]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root Dec 13 14:44:41.522912 sshd[9160]: PAM: Permission denied for root from 218.92.0.158 Dec 13 14:44:41.726808 sshd[9160]: Received disconnect from 218.92.0.158 port 32309:11: [preauth] Dec 13 14:44:41.726808 sshd[9160]: Disconnected from authenticating user root 218.92.0.158 port 32309 [preauth] Dec 13 14:44:41.729009 systemd[1]: sshd@11-147.28.228.38:22-218.92.0.158:32309.service: Deactivated successfully. Dec 13 14:45:04.231727 update_engine[2635]: I20241213 14:45:04.231665 2635 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 14:45:04.232204 update_engine[2635]: I20241213 14:45:04.231760 2635 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 14:45:04.232204 update_engine[2635]: I20241213 14:45:04.231976 2635 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 14:45:04.232305 update_engine[2635]: I20241213 14:45:04.232288 2635 omaha_request_params.cc:62] Current group set to alpha Dec 13 14:45:04.232381 update_engine[2635]: I20241213 14:45:04.232369 2635 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 14:45:04.232402 update_engine[2635]: I20241213 14:45:04.232378 2635 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 14:45:04.232402 update_engine[2635]: I20241213 14:45:04.232393 2635 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:45:04.232444 update_engine[2635]: I20241213 14:45:04.232419 2635 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 14:45:04.232476 update_engine[2635]: I20241213 14:45:04.232465 2635 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:45:04.232496 update_engine[2635]: I20241213 14:45:04.232473 2635 omaha_request_action.cc:272] Request: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: Dec 13 14:45:04.232496 update_engine[2635]: I20241213 14:45:04.232479 2635 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:45:04.232668 locksmithd[2661]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 14:45:04.233379 update_engine[2635]: I20241213 14:45:04.233362 2635 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:45:04.233667 update_engine[2635]: I20241213 14:45:04.233646 2635 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 14:45:04.234255 update_engine[2635]: E20241213 14:45:04.234239 2635 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:45:04.234298 update_engine[2635]: I20241213 14:45:04.234287 2635 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 14:45:14.141313 update_engine[2635]: I20241213 14:45:14.141242 2635 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:45:14.141756 update_engine[2635]: I20241213 14:45:14.141506 2635 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:45:14.141756 update_engine[2635]: I20241213 14:45:14.141701 2635 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:45:14.142247 update_engine[2635]: E20241213 14:45:14.142231 2635 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:45:14.142274 update_engine[2635]: I20241213 14:45:14.142263 2635 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 14:45:24.141421 update_engine[2635]: I20241213 14:45:24.141350 2635 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:45:24.141883 update_engine[2635]: I20241213 14:45:24.141639 2635 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:45:24.141911 update_engine[2635]: I20241213 14:45:24.141875 2635 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 14:45:24.142287 update_engine[2635]: E20241213 14:45:24.142270 2635 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:45:24.142316 update_engine[2635]: I20241213 14:45:24.142305 2635 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 14:45:34.142078 update_engine[2635]: I20241213 14:45:34.141997 2635 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:45:34.142616 update_engine[2635]: I20241213 14:45:34.142263 2635 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:45:34.142616 update_engine[2635]: I20241213 14:45:34.142516 2635 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 14:45:34.143001 update_engine[2635]: E20241213 14:45:34.142980 2635 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:45:34.143040 update_engine[2635]: I20241213 14:45:34.143023 2635 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:45:34.143040 update_engine[2635]: I20241213 14:45:34.143033 2635 omaha_request_action.cc:617] Omaha request response: Dec 13 14:45:34.143128 update_engine[2635]: E20241213 14:45:34.143112 2635 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 14:45:34.143159 update_engine[2635]: I20241213 14:45:34.143133 2635 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 14:45:34.143159 update_engine[2635]: I20241213 14:45:34.143141 2635 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:45:34.143159 update_engine[2635]: I20241213 14:45:34.143146 2635 update_attempter.cc:306] Processing Done. Dec 13 14:45:34.143232 update_engine[2635]: E20241213 14:45:34.143162 2635 update_attempter.cc:619] Update failed. 
Dec 13 14:45:34.143232 update_engine[2635]: I20241213 14:45:34.143168 2635 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 14:45:34.143232 update_engine[2635]: I20241213 14:45:34.143174 2635 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 14:45:34.143232 update_engine[2635]: I20241213 14:45:34.143180 2635 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 14:45:34.143334 update_engine[2635]: I20241213 14:45:34.143249 2635 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 14:45:34.143334 update_engine[2635]: I20241213 14:45:34.143272 2635 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 14:45:34.143334 update_engine[2635]: I20241213 14:45:34.143278 2635 omaha_request_action.cc:272] Request: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: Dec 13 14:45:34.143334 update_engine[2635]: I20241213 14:45:34.143285 2635 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 14:45:34.143546 update_engine[2635]: I20241213 14:45:34.143409 2635 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 14:45:34.143569 locksmithd[2661]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 14:45:34.143783 update_engine[2635]: I20241213 14:45:34.143575 2635 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 14:45:34.144139 update_engine[2635]: E20241213 14:45:34.144122 2635 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 14:45:34.144163 update_engine[2635]: I20241213 14:45:34.144155 2635 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 14:45:34.144188 update_engine[2635]: I20241213 14:45:34.144162 2635 omaha_request_action.cc:617] Omaha request response: Dec 13 14:45:34.144188 update_engine[2635]: I20241213 14:45:34.144168 2635 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:45:34.144188 update_engine[2635]: I20241213 14:45:34.144173 2635 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 14:45:34.144188 update_engine[2635]: I20241213 14:45:34.144178 2635 update_attempter.cc:306] Processing Done. Dec 13 14:45:34.144188 update_engine[2635]: I20241213 14:45:34.144183 2635 update_attempter.cc:310] Error event sent. Dec 13 14:45:34.144289 update_engine[2635]: I20241213 14:45:34.144191 2635 update_check_scheduler.cc:74] Next update check in 42m54s Dec 13 14:45:34.144340 locksmithd[2661]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 14:46:42.622213 systemd[1]: Started sshd@12-147.28.228.38:22-218.92.0.158:63479.service - OpenSSH per-connection server daemon (218.92.0.158:63479). 
Dec 13 14:46:45.535150 sshd-session[9507]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Dec 13 14:46:47.375856 sshd[9505]: PAM: Permission denied for root from 218.92.0.158
Dec 13 14:46:47.854974 sshd-session[9508]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Dec 13 14:46:49.971942 sshd[9505]: PAM: Permission denied for root from 218.92.0.158
Dec 13 14:46:51.424051 sshd-session[9509]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.158 user=root
Dec 13 14:46:53.424861 sshd[9505]: PAM: Permission denied for root from 218.92.0.158
Dec 13 14:46:53.664189 sshd[9505]: Received disconnect from 218.92.0.158 port 63479:11:  [preauth]
Dec 13 14:46:53.664189 sshd[9505]: Disconnected from authenticating user root 218.92.0.158 port 63479 [preauth]
Dec 13 14:46:53.665942 systemd[1]: sshd@12-147.28.228.38:22-218.92.0.158:63479.service: Deactivated successfully.
Dec 13 14:48:06.147303 systemd[1]: Started sshd@13-147.28.228.38:22-147.75.109.163:57972.service - OpenSSH per-connection server daemon (147.75.109.163:57972).
Dec 13 14:48:06.553430 sshd[9723]: Accepted publickey for core from 147.75.109.163 port 57972 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:06.554472 sshd-session[9723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:06.557695 systemd-logind[2623]: New session 12 of user core.
Dec 13 14:48:06.569872 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 14:48:06.908567 sshd[9725]: Connection closed by 147.75.109.163 port 57972
Dec 13 14:48:06.908960 sshd-session[9723]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:06.912374 systemd[1]: sshd@13-147.28.228.38:22-147.75.109.163:57972.service: Deactivated successfully.
Dec 13 14:48:06.914338 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:48:06.914863 systemd-logind[2623]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:48:06.915472 systemd-logind[2623]: Removed session 12.
Dec 13 14:48:11.985173 systemd[1]: Started sshd@14-147.28.228.38:22-147.75.109.163:57986.service - OpenSSH per-connection server daemon (147.75.109.163:57986).
Dec 13 14:48:12.395346 sshd[9771]: Accepted publickey for core from 147.75.109.163 port 57986 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:12.396345 sshd-session[9771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:12.399267 systemd-logind[2623]: New session 13 of user core.
Dec 13 14:48:12.407863 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 14:48:12.749789 sshd[9774]: Connection closed by 147.75.109.163 port 57986
Dec 13 14:48:12.750175 sshd-session[9771]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:12.752988 systemd[1]: sshd@14-147.28.228.38:22-147.75.109.163:57986.service: Deactivated successfully.
Dec 13 14:48:12.755230 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:48:12.755747 systemd-logind[2623]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:48:12.756318 systemd-logind[2623]: Removed session 13.
Dec 13 14:48:12.831102 systemd[1]: Started sshd@15-147.28.228.38:22-147.75.109.163:57998.service - OpenSSH per-connection server daemon (147.75.109.163:57998).
Dec 13 14:48:13.246647 sshd[9808]: Accepted publickey for core from 147.75.109.163 port 57998 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:13.247687 sshd-session[9808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:13.250651 systemd-logind[2623]: New session 14 of user core.
Dec 13 14:48:13.260877 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 14:48:13.630580 sshd[9810]: Connection closed by 147.75.109.163 port 57998
Dec 13 14:48:13.630910 sshd-session[9808]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:13.633762 systemd[1]: sshd@15-147.28.228.38:22-147.75.109.163:57998.service: Deactivated successfully.
Dec 13 14:48:13.635434 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:48:13.635958 systemd-logind[2623]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:48:13.636535 systemd-logind[2623]: Removed session 14.
Dec 13 14:48:13.707083 systemd[1]: Started sshd@16-147.28.228.38:22-147.75.109.163:58010.service - OpenSSH per-connection server daemon (147.75.109.163:58010).
Dec 13 14:48:14.128506 sshd[9849]: Accepted publickey for core from 147.75.109.163 port 58010 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:14.129537 sshd-session[9849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:14.132431 systemd-logind[2623]: New session 15 of user core.
Dec 13 14:48:14.141810 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 14:48:14.492438 sshd[9851]: Connection closed by 147.75.109.163 port 58010
Dec 13 14:48:14.492767 sshd-session[9849]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:14.495536 systemd[1]: sshd@16-147.28.228.38:22-147.75.109.163:58010.service: Deactivated successfully.
Dec 13 14:48:14.497212 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:48:14.497707 systemd-logind[2623]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:48:14.498278 systemd-logind[2623]: Removed session 15.
Dec 13 14:48:19.570099 systemd[1]: Started sshd@17-147.28.228.38:22-147.75.109.163:58400.service - OpenSSH per-connection server daemon (147.75.109.163:58400).
Dec 13 14:48:19.991662 sshd[9889]: Accepted publickey for core from 147.75.109.163 port 58400 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:19.992681 sshd-session[9889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:19.995543 systemd-logind[2623]: New session 16 of user core.
Dec 13 14:48:20.006810 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 14:48:20.038909 systemd[1]: Started sshd@18-147.28.228.38:22-92.255.85.189:46312.service - OpenSSH per-connection server daemon (92.255.85.189:46312).
Dec 13 14:48:20.354319 sshd[9893]: Connection closed by 147.75.109.163 port 58400
Dec 13 14:48:20.354749 sshd-session[9889]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:20.357536 systemd[1]: sshd@17-147.28.228.38:22-147.75.109.163:58400.service: Deactivated successfully.
Dec 13 14:48:20.359190 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:48:20.359706 systemd-logind[2623]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:48:20.360296 systemd-logind[2623]: Removed session 16.
Dec 13 14:48:20.429968 systemd[1]: Started sshd@19-147.28.228.38:22-147.75.109.163:58402.service - OpenSSH per-connection server daemon (147.75.109.163:58402).
Dec 13 14:48:20.831995 sshd[9933]: Accepted publickey for core from 147.75.109.163 port 58402 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:20.833084 sshd-session[9933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:20.836003 systemd-logind[2623]: New session 17 of user core.
Dec 13 14:48:20.848820 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 14:48:20.858742 sshd[9895]: Invalid user anonymous from 92.255.85.189 port 46312
Dec 13 14:48:21.007232 sshd[9895]: Connection closed by invalid user anonymous 92.255.85.189 port 46312 [preauth]
Dec 13 14:48:21.009918 systemd[1]: sshd@18-147.28.228.38:22-92.255.85.189:46312.service: Deactivated successfully.
Dec 13 14:48:21.282102 sshd[9935]: Connection closed by 147.75.109.163 port 58402
Dec 13 14:48:21.282482 sshd-session[9933]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:21.285163 systemd[1]: sshd@19-147.28.228.38:22-147.75.109.163:58402.service: Deactivated successfully.
Dec 13 14:48:21.286806 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:48:21.287311 systemd-logind[2623]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:48:21.287895 systemd-logind[2623]: Removed session 17.
Dec 13 14:48:21.367126 systemd[1]: Started sshd@20-147.28.228.38:22-147.75.109.163:58412.service - OpenSSH per-connection server daemon (147.75.109.163:58412).
Dec 13 14:48:21.788263 sshd[9961]: Accepted publickey for core from 147.75.109.163 port 58412 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:21.789350 sshd-session[9961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:21.792554 systemd-logind[2623]: New session 18 of user core.
Dec 13 14:48:21.805821 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 14:48:23.170396 sshd[9963]: Connection closed by 147.75.109.163 port 58412
Dec 13 14:48:23.170792 sshd-session[9961]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:23.173556 systemd[1]: sshd@20-147.28.228.38:22-147.75.109.163:58412.service: Deactivated successfully.
Dec 13 14:48:23.175197 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:48:23.175341 systemd[1]: session-18.scope: Consumed 3.782s CPU time.
Dec 13 14:48:23.175672 systemd-logind[2623]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:48:23.176253 systemd-logind[2623]: Removed session 18.
Dec 13 14:48:23.247916 systemd[1]: Started sshd@21-147.28.228.38:22-147.75.109.163:58424.service - OpenSSH per-connection server daemon (147.75.109.163:58424).
Dec 13 14:48:23.679737 sshd[10059]: Accepted publickey for core from 147.75.109.163 port 58424 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:23.680982 sshd-session[10059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:23.684118 systemd-logind[2623]: New session 19 of user core.
Dec 13 14:48:23.696879 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 14:48:24.140345 sshd[10089]: Connection closed by 147.75.109.163 port 58424
Dec 13 14:48:24.140749 sshd-session[10059]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:24.143648 systemd[1]: sshd@21-147.28.228.38:22-147.75.109.163:58424.service: Deactivated successfully.
Dec 13 14:48:24.145263 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:48:24.145780 systemd-logind[2623]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:48:24.146365 systemd-logind[2623]: Removed session 19.
Dec 13 14:48:24.208036 systemd[1]: Started sshd@22-147.28.228.38:22-147.75.109.163:58436.service - OpenSSH per-connection server daemon (147.75.109.163:58436).
Dec 13 14:48:24.606534 sshd[10133]: Accepted publickey for core from 147.75.109.163 port 58436 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:24.607662 sshd-session[10133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:24.610684 systemd-logind[2623]: New session 20 of user core.
Dec 13 14:48:24.624816 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 14:48:24.951305 sshd[10135]: Connection closed by 147.75.109.163 port 58436
Dec 13 14:48:24.951602 sshd-session[10133]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:24.954329 systemd[1]: sshd@22-147.28.228.38:22-147.75.109.163:58436.service: Deactivated successfully.
Dec 13 14:48:24.955912 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:48:24.956373 systemd-logind[2623]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:48:24.956953 systemd-logind[2623]: Removed session 20.
Dec 13 14:48:30.035040 systemd[1]: Started sshd@23-147.28.228.38:22-147.75.109.163:37562.service - OpenSSH per-connection server daemon (147.75.109.163:37562).
Dec 13 14:48:30.454599 sshd[10171]: Accepted publickey for core from 147.75.109.163 port 37562 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:30.455813 sshd-session[10171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:30.458732 systemd-logind[2623]: New session 21 of user core.
Dec 13 14:48:30.472871 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 14:48:30.811086 sshd[10173]: Connection closed by 147.75.109.163 port 37562
Dec 13 14:48:30.811517 sshd-session[10171]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:30.814334 systemd[1]: sshd@23-147.28.228.38:22-147.75.109.163:37562.service: Deactivated successfully.
Dec 13 14:48:30.815974 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:48:30.816966 systemd-logind[2623]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:48:30.817553 systemd-logind[2623]: Removed session 21.
Dec 13 14:48:35.888996 systemd[1]: Started sshd@24-147.28.228.38:22-147.75.109.163:37568.service - OpenSSH per-connection server daemon (147.75.109.163:37568).
Dec 13 14:48:36.301762 sshd[10226]: Accepted publickey for core from 147.75.109.163 port 37568 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:36.302802 sshd-session[10226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:36.305734 systemd-logind[2623]: New session 22 of user core.
Dec 13 14:48:36.317849 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 14:48:36.654943 sshd[10228]: Connection closed by 147.75.109.163 port 37568
Dec 13 14:48:36.655249 sshd-session[10226]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:36.658031 systemd[1]: sshd@24-147.28.228.38:22-147.75.109.163:37568.service: Deactivated successfully.
Dec 13 14:48:36.660249 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:48:36.660751 systemd-logind[2623]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:48:36.661275 systemd-logind[2623]: Removed session 22.
Dec 13 14:48:41.735022 systemd[1]: Started sshd@25-147.28.228.38:22-147.75.109.163:39802.service - OpenSSH per-connection server daemon (147.75.109.163:39802).
Dec 13 14:48:42.167970 sshd[10279]: Accepted publickey for core from 147.75.109.163 port 39802 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A
Dec 13 14:48:42.168964 sshd-session[10279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:48:42.171945 systemd-logind[2623]: New session 23 of user core.
Dec 13 14:48:42.187812 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 14:48:42.534184 sshd[10281]: Connection closed by 147.75.109.163 port 39802
Dec 13 14:48:42.534565 sshd-session[10279]: pam_unix(sshd:session): session closed for user core
Dec 13 14:48:42.537509 systemd[1]: sshd@25-147.28.228.38:22-147.75.109.163:39802.service: Deactivated successfully.
Dec 13 14:48:42.539808 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:48:42.540366 systemd-logind[2623]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:48:42.540917 systemd-logind[2623]: Removed session 23.