Dec 13 15:03:44.162374 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Dec 13 15:03:44.162397 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 15:03:44.162405 kernel: KASLR enabled Dec 13 15:03:44.162410 kernel: efi: EFI v2.7 by American Megatrends Dec 13 15:03:44.162416 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47c818 RNG=0xebf10018 MEMRESERVE=0xe465af98 Dec 13 15:03:44.162421 kernel: random: crng init done Dec 13 15:03:44.162428 kernel: secureboot: Secure boot disabled Dec 13 15:03:44.162433 kernel: esrt: Reserving ESRT space from 0x00000000ea47c818 to 0x00000000ea47c878. Dec 13 15:03:44.162441 kernel: ACPI: Early table checksum verification disabled Dec 13 15:03:44.162446 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Dec 13 15:03:44.162452 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Dec 13 15:03:44.162458 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Dec 13 15:03:44.162464 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Dec 13 15:03:44.162469 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Dec 13 15:03:44.162478 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Dec 13 15:03:44.162484 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Dec 13 15:03:44.162490 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 15:03:44.162496 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Dec 13 15:03:44.162502 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 15:03:44.162508 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Dec 13 15:03:44.162514 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Dec 13 15:03:44.162520 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Dec 13 15:03:44.162526 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Dec 13 15:03:44.162532 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Dec 13 15:03:44.162540 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Dec 13 15:03:44.162546 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Dec 13 15:03:44.162552 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Dec 13 15:03:44.162558 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Dec 13 15:03:44.162564 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Dec 13 15:03:44.162570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Dec 13 15:03:44.162576 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Dec 13 15:03:44.162582 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Dec 13 15:03:44.162588 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Dec 13 15:03:44.162594 kernel: NUMA: NODE_DATA [mem 0x83fdffca800-0x83fdffcffff] Dec 13 15:03:44.162600 kernel: Zone ranges: Dec 13 15:03:44.162607 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Dec 13 15:03:44.162613 kernel: DMA32 empty Dec 13 15:03:44.162619 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Dec 13 15:03:44.162625 kernel: Movable zone start for each node Dec 13 15:03:44.162631 kernel: Early memory node ranges Dec 13 15:03:44.162640 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Dec 13 15:03:44.162647 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Dec 13 15:03:44.162654 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Dec 13 15:03:44.162661 kernel: node 0: [mem 0x0000000094000000-0x00000000eba32fff] Dec 13 15:03:44.162667 kernel: node 0: [mem 0x00000000eba33000-0x00000000ebeb4fff] Dec 13 15:03:44.162674 kernel: node 0: [mem 0x00000000ebeb5000-0x00000000ebeb9fff] Dec 13 15:03:44.162680 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff] Dec 13 15:03:44.162686 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Dec 13 15:03:44.162693 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Dec 13 15:03:44.162699 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Dec 13 15:03:44.162705 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] Dec 13 15:03:44.162712 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] Dec 13 15:03:44.162720 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] Dec 13 15:03:44.162726 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Dec 13 15:03:44.162732 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Dec 13 15:03:44.162739 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Dec 13 15:03:44.162745 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Dec 13 15:03:44.162752 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Dec 13 15:03:44.162758 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Dec 13 15:03:44.162764 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Dec 13 15:03:44.162771 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Dec 13 15:03:44.162777 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Dec 13 15:03:44.162783 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Dec 13 15:03:44.162794 kernel: psci: probing for conduit method from ACPI. Dec 13 15:03:44.162801 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 15:03:44.162807 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 15:03:44.162814 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 13 15:03:44.162820 kernel: psci: SMC Calling Convention v1.2 Dec 13 15:03:44.162826 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 15:03:44.162833 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Dec 13 15:03:44.162839 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Dec 13 15:03:44.162845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Dec 13 15:03:44.162852 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Dec 13 15:03:44.162858 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Dec 13 15:03:44.162865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Dec 13 15:03:44.162873 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Dec 13 15:03:44.162879 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Dec 13 15:03:44.162885 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Dec 13 15:03:44.162892 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Dec 13 15:03:44.162898 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Dec 13 15:03:44.162904 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Dec 13 15:03:44.162911 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Dec 13 15:03:44.162917 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Dec 13 15:03:44.162923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Dec 13 15:03:44.162930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Dec 13 15:03:44.162936 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Dec 13 15:03:44.162942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Dec 13 15:03:44.162950 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Dec 13 15:03:44.162957 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Dec 13 15:03:44.162963 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Dec 13 15:03:44.162969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Dec 13 15:03:44.162975 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Dec 13 15:03:44.162982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Dec 13 15:03:44.162988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Dec 13 15:03:44.162994 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Dec 13 15:03:44.163000 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Dec 13 15:03:44.163007 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Dec 13 15:03:44.163013 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Dec 13 15:03:44.163021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Dec 13 15:03:44.163027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Dec 13 15:03:44.163033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Dec 13 15:03:44.163040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Dec 13 15:03:44.163046 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Dec 13 15:03:44.163052 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Dec 13 15:03:44.163059 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Dec 13 15:03:44.163065 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Dec 13 15:03:44.163071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Dec 13 15:03:44.163078 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Dec 13 15:03:44.163084 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Dec 13 15:03:44.163090 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Dec 13 15:03:44.163098 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Dec 13 15:03:44.163104 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Dec 13 15:03:44.163111 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Dec 13 15:03:44.163117 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Dec 13 15:03:44.163124 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Dec 13 15:03:44.163130 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Dec 13 15:03:44.163136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Dec 13 15:03:44.163143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Dec 13 15:03:44.163155 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Dec 13 15:03:44.163162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Dec 13 15:03:44.163171 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Dec 13 15:03:44.163177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Dec 13 15:03:44.163184 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Dec 13 15:03:44.163191 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Dec 13 15:03:44.163198 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Dec 13 15:03:44.163204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Dec 13 15:03:44.163212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Dec 13 15:03:44.163219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Dec 13 15:03:44.163226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Dec 13 15:03:44.163232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Dec 13 15:03:44.163239 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Dec 13 15:03:44.163246 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Dec 13 15:03:44.163253 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Dec 13 15:03:44.163260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Dec 13 15:03:44.163266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Dec 13 15:03:44.163273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Dec 13 15:03:44.163280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Dec 13 15:03:44.163286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Dec 13 15:03:44.163294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Dec 13 15:03:44.163301 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Dec 13 15:03:44.163308 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Dec 13 15:03:44.163315 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Dec 13 15:03:44.163321 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Dec 13 15:03:44.163328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Dec 13 15:03:44.163335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Dec 13 15:03:44.163341 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Dec 13 15:03:44.163348 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Dec 13 15:03:44.163355 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Dec 13 15:03:44.163362 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 15:03:44.163370 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 15:03:44.163377 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Dec 13 15:03:44.163384 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Dec 13 15:03:44.163390 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Dec 13 15:03:44.163397 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Dec 13 15:03:44.163404 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Dec 13 15:03:44.163411 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Dec 13 15:03:44.163417 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Dec 13 15:03:44.163424 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Dec 13 15:03:44.163431 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Dec 13 15:03:44.163438 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Dec 13 15:03:44.163446 kernel: Detected PIPT I-cache on CPU0 Dec 13 15:03:44.163453 kernel: CPU features: detected: GIC system register CPU interface Dec 13 15:03:44.163460 kernel: CPU features: detected: Virtualization Host Extensions Dec 13 15:03:44.163466 kernel: CPU features: detected: Hardware dirty bit management Dec 13 15:03:44.163473 kernel: CPU features: detected: Spectre-v4 Dec 13 15:03:44.163480 kernel: CPU features: detected: Spectre-BHB Dec 13 15:03:44.163487 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 15:03:44.163494 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 15:03:44.163500 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 15:03:44.163507 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 15:03:44.163514 kernel: alternatives: applying boot alternatives Dec 13 15:03:44.163522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 15:03:44.163530 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 15:03:44.163537 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 13 15:03:44.163544 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Dec 13 15:03:44.163550 kernel: printk: log_buf_len min size: 262144 bytes Dec 13 15:03:44.163557 kernel: printk: log_buf_len: 1048576 bytes Dec 13 15:03:44.163564 kernel: printk: early log buf free: 249864(95%) Dec 13 15:03:44.163571 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Dec 13 15:03:44.163578 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Dec 13 15:03:44.163584 kernel: Fallback order for Node 0: 0 Dec 13 15:03:44.163591 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 Dec 13 15:03:44.163599 kernel: Policy zone: Normal Dec 13 15:03:44.163606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 15:03:44.163613 kernel: software IO TLB: area num 128. 
Dec 13 15:03:44.163620 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Dec 13 15:03:44.163627 kernel: Memory: 262921876K/268174336K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 5252460K reserved, 0K cma-reserved) Dec 13 15:03:44.163634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Dec 13 15:03:44.163640 kernel: trace event string verifier disabled Dec 13 15:03:44.163647 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 15:03:44.163655 kernel: rcu: RCU event tracing is enabled. Dec 13 15:03:44.163662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Dec 13 15:03:44.163669 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 15:03:44.163675 kernel: Tracing variant of Tasks RCU enabled. Dec 13 15:03:44.163684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 15:03:44.163691 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Dec 13 15:03:44.163698 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 15:03:44.163704 kernel: GICv3: GIC: Using split EOI/Deactivate mode Dec 13 15:03:44.163711 kernel: GICv3: 672 SPIs implemented Dec 13 15:03:44.163718 kernel: GICv3: 0 Extended SPIs implemented Dec 13 15:03:44.163725 kernel: Root IRQ handler: gic_handle_irq Dec 13 15:03:44.163731 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 15:03:44.163738 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Dec 13 15:03:44.163745 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Dec 13 15:03:44.163751 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Dec 13 15:03:44.163758 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Dec 13 15:03:44.163766 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Dec 13 15:03:44.163772 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Dec 13 15:03:44.163779 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Dec 13 15:03:44.163786 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Dec 13 15:03:44.163798 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Dec 13 15:03:44.163804 kernel: ITS [mem 0x100100040000-0x10010005ffff] Dec 13 15:03:44.163811 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163818 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163825 kernel: ITS [mem 0x100100060000-0x10010007ffff] Dec 13 15:03:44.163832 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163839 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163847 kernel: ITS [mem 0x100100080000-0x10010009ffff] Dec 13 15:03:44.163854 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163861 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163868 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Dec 13 15:03:44.163875 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163882 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163888 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Dec 13 15:03:44.163895 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) Dec 13 
15:03:44.163902 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163909 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Dec 13 15:03:44.163915 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163924 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163930 kernel: ITS [mem 0x100100100000-0x10010011ffff] Dec 13 15:03:44.163937 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163944 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163951 kernel: ITS [mem 0x100100120000-0x10010013ffff] Dec 13 15:03:44.163958 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163965 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163971 kernel: GICv3: using LPI property table @0x00000800003e0000 Dec 13 15:03:44.163978 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 Dec 13 15:03:44.163985 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 15:03:44.163992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164000 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Dec 13 15:03:44.164007 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Dec 13 15:03:44.164014 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 15:03:44.164020 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 15:03:44.164027 kernel: Console: colour dummy device 80x25 Dec 13 15:03:44.164035 kernel: printk: console [tty0] enabled Dec 13 15:03:44.164042 kernel: ACPI: Core revision 20230628 Dec 13 15:03:44.164049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 15:03:44.164056 kernel: pid_max: default: 81920 minimum: 640 Dec 13 15:03:44.164063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 15:03:44.164071 kernel: landlock: Up and running. Dec 13 15:03:44.164078 kernel: SELinux: Initializing. Dec 13 15:03:44.164085 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.164092 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.164099 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 15:03:44.164106 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 15:03:44.164113 kernel: rcu: Hierarchical SRCU implementation. Dec 13 15:03:44.164120 kernel: rcu: Max phase no-delay instances is 400. 
Dec 13 15:03:44.164127 kernel: Platform MSI: ITS@0x100100040000 domain created Dec 13 15:03:44.164136 kernel: Platform MSI: ITS@0x100100060000 domain created Dec 13 15:03:44.164143 kernel: Platform MSI: ITS@0x100100080000 domain created Dec 13 15:03:44.164149 kernel: Platform MSI: ITS@0x1001000a0000 domain created Dec 13 15:03:44.164156 kernel: Platform MSI: ITS@0x1001000c0000 domain created Dec 13 15:03:44.164163 kernel: Platform MSI: ITS@0x1001000e0000 domain created Dec 13 15:03:44.164170 kernel: Platform MSI: ITS@0x100100100000 domain created Dec 13 15:03:44.164177 kernel: Platform MSI: ITS@0x100100120000 domain created Dec 13 15:03:44.164184 kernel: PCI/MSI: ITS@0x100100040000 domain created Dec 13 15:03:44.164191 kernel: PCI/MSI: ITS@0x100100060000 domain created Dec 13 15:03:44.164199 kernel: PCI/MSI: ITS@0x100100080000 domain created Dec 13 15:03:44.164206 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Dec 13 15:03:44.164212 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Dec 13 15:03:44.164219 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Dec 13 15:03:44.164226 kernel: PCI/MSI: ITS@0x100100100000 domain created Dec 13 15:03:44.164233 kernel: PCI/MSI: ITS@0x100100120000 domain created Dec 13 15:03:44.164240 kernel: Remapping and enabling EFI services. Dec 13 15:03:44.164247 kernel: smp: Bringing up secondary CPUs ... Dec 13 15:03:44.164254 kernel: Detected PIPT I-cache on CPU1 Dec 13 15:03:44.164261 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Dec 13 15:03:44.164269 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Dec 13 15:03:44.164276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164283 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Dec 13 15:03:44.164290 kernel: Detected PIPT I-cache on CPU2 Dec 13 15:03:44.164297 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Dec 13 15:03:44.164304 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Dec 13 15:03:44.164311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164318 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Dec 13 15:03:44.164325 kernel: Detected PIPT I-cache on CPU3 Dec 13 15:03:44.164333 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Dec 13 15:03:44.164341 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Dec 13 15:03:44.164348 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164354 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Dec 13 15:03:44.164361 kernel: Detected PIPT I-cache on CPU4 Dec 13 15:03:44.164368 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Dec 13 15:03:44.164375 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Dec 13 15:03:44.164382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164389 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Dec 13 15:03:44.164396 kernel: Detected PIPT I-cache on CPU5 Dec 13 15:03:44.164404 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Dec 13 15:03:44.164411 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 Dec 13 15:03:44.164419 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164425 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] Dec 13 15:03:44.164432 kernel: Detected PIPT I-cache on CPU6 Dec 13 15:03:44.164439 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Dec 13 15:03:44.164446 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Dec 13 15:03:44.164453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164460 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Dec 13 15:03:44.164468 kernel: Detected PIPT I-cache on CPU7 Dec 13 15:03:44.164476 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Dec 13 15:03:44.164482 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Dec 13 15:03:44.164489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164496 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Dec 13 15:03:44.164503 kernel: Detected PIPT I-cache on CPU8 Dec 13 15:03:44.164510 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Dec 13 15:03:44.164517 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Dec 13 15:03:44.164524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164531 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Dec 13 15:03:44.164539 kernel: Detected PIPT I-cache on CPU9 Dec 13 15:03:44.164546 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Dec 13 15:03:44.164553 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Dec 13 15:03:44.164560 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164567 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Dec 13 15:03:44.164574 kernel: Detected PIPT I-cache on CPU10 Dec 13 15:03:44.164581 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Dec 13 15:03:44.164588 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Dec 13 15:03:44.164595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164601 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Dec 13 15:03:44.164610 kernel: Detected PIPT I-cache on CPU11 Dec 13 15:03:44.164617 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Dec 13 15:03:44.164624 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Dec 13 15:03:44.164631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164638 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Dec 13 15:03:44.164644 kernel: Detected PIPT I-cache on CPU12 Dec 13 15:03:44.164651 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Dec 13 15:03:44.164658 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Dec 13 15:03:44.164665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164673 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Dec 13 15:03:44.164680 kernel: Detected PIPT I-cache on CPU13 Dec 13 15:03:44.164688 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Dec 13 15:03:44.164695 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Dec 13 15:03:44.164702 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164709 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
Dec 13 15:03:44.164716 kernel: Detected PIPT I-cache on CPU14 Dec 13 15:03:44.164723 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Dec 13 15:03:44.164730 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 Dec 13 15:03:44.164738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164745 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Dec 13 15:03:44.164752 kernel: Detected PIPT I-cache on CPU15 Dec 13 15:03:44.164759 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Dec 13 15:03:44.164766 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Dec 13 15:03:44.164773 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164779 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Dec 13 15:03:44.164786 kernel: Detected PIPT I-cache on CPU16 Dec 13 15:03:44.164796 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Dec 13 15:03:44.164813 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Dec 13 15:03:44.164822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164829 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Dec 13 15:03:44.164836 kernel: Detected PIPT I-cache on CPU17 Dec 13 15:03:44.164844 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Dec 13 15:03:44.164851 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Dec 13 15:03:44.164858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164865 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Dec 13 15:03:44.164872 kernel: Detected PIPT I-cache on CPU18 Dec 13 15:03:44.164880 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Dec 13 15:03:44.164889 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Dec 13 15:03:44.164896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164903 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Dec 13 15:03:44.164910 kernel: Detected PIPT I-cache on CPU19 Dec 13 15:03:44.164918 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Dec 13 15:03:44.164925 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Dec 13 15:03:44.164935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164942 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Dec 13 15:03:44.164949 kernel: Detected PIPT I-cache on CPU20 Dec 13 15:03:44.164956 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Dec 13 15:03:44.164964 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Dec 13 15:03:44.164971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164978 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Dec 13 15:03:44.164985 kernel: Detected PIPT I-cache on CPU21 Dec 13 15:03:44.164993 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Dec 13 15:03:44.165001 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Dec 13 15:03:44.165008 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165016 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Dec 
13 15:03:44.165023 kernel: Detected PIPT I-cache on CPU22 Dec 13 15:03:44.165030 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Dec 13 15:03:44.165037 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Dec 13 15:03:44.165045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165052 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Dec 13 15:03:44.165059 kernel: Detected PIPT I-cache on CPU23 Dec 13 15:03:44.165066 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Dec 13 15:03:44.165075 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 Dec 13 15:03:44.165082 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165090 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Dec 13 15:03:44.165097 kernel: Detected PIPT I-cache on CPU24 Dec 13 15:03:44.165104 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Dec 13 15:03:44.165111 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Dec 13 15:03:44.165119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165126 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Dec 13 15:03:44.165133 kernel: Detected PIPT I-cache on CPU25 Dec 13 15:03:44.165142 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Dec 13 15:03:44.165149 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Dec 13 15:03:44.165157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165164 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Dec 13 15:03:44.165171 kernel: Detected PIPT I-cache on CPU26 Dec 13 15:03:44.165178 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Dec 13 15:03:44.165186 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Dec 13 15:03:44.165193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165200 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Dec 13 15:03:44.165207 kernel: Detected PIPT I-cache on CPU27 Dec 13 15:03:44.165216 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Dec 13 15:03:44.165223 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Dec 13 15:03:44.165231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165239 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Dec 13 15:03:44.165248 kernel: Detected PIPT I-cache on CPU28 Dec 13 15:03:44.165255 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Dec 13 15:03:44.165263 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Dec 13 15:03:44.165270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165277 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Dec 13 15:03:44.165286 kernel: Detected PIPT I-cache on CPU29 Dec 13 15:03:44.165293 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Dec 13 15:03:44.165301 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Dec 13 15:03:44.165308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165315 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
Dec 13 15:03:44.165323 kernel: Detected PIPT I-cache on CPU30 Dec 13 15:03:44.165330 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Dec 13 15:03:44.165337 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Dec 13 15:03:44.165345 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165352 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Dec 13 15:03:44.165361 kernel: Detected PIPT I-cache on CPU31 Dec 13 15:03:44.165368 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Dec 13 15:03:44.165375 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Dec 13 15:03:44.165382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165390 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Dec 13 15:03:44.165397 kernel: Detected PIPT I-cache on CPU32 Dec 13 15:03:44.165404 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Dec 13 15:03:44.165412 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 Dec 13 15:03:44.165419 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165427 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Dec 13 15:03:44.165435 kernel: Detected PIPT I-cache on CPU33 Dec 13 15:03:44.165442 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Dec 13 15:03:44.165449 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Dec 13 15:03:44.165457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165464 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Dec 13 15:03:44.165471 kernel: Detected PIPT I-cache on CPU34 Dec 13 15:03:44.165478 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Dec 13 15:03:44.165486 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Dec 13 15:03:44.165494 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165501 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Dec 13 15:03:44.165509 kernel: Detected PIPT I-cache on CPU35 Dec 13 15:03:44.165516 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Dec 13 15:03:44.165523 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Dec 13 15:03:44.165531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165538 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Dec 13 15:03:44.165545 kernel: Detected PIPT I-cache on CPU36 Dec 13 15:03:44.165552 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Dec 13 15:03:44.165560 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Dec 13 15:03:44.165568 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165575 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Dec 13 15:03:44.165583 kernel: Detected PIPT I-cache on CPU37 Dec 13 15:03:44.165590 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Dec 13 15:03:44.165597 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Dec 13 15:03:44.165605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165612 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
Dec 13 15:03:44.165619 kernel: Detected PIPT I-cache on CPU38 Dec 13 15:03:44.165626 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Dec 13 15:03:44.165635 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Dec 13 15:03:44.165642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165650 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Dec 13 15:03:44.165657 kernel: Detected PIPT I-cache on CPU39 Dec 13 15:03:44.165664 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Dec 13 15:03:44.165671 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Dec 13 15:03:44.165679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165686 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Dec 13 15:03:44.165695 kernel: Detected PIPT I-cache on CPU40 Dec 13 15:03:44.165702 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Dec 13 15:03:44.165709 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Dec 13 15:03:44.165717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165724 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Dec 13 15:03:44.165731 kernel: Detected PIPT I-cache on CPU41 Dec 13 15:03:44.165738 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Dec 13 15:03:44.165747 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 Dec 13 15:03:44.165754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165761 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Dec 13 15:03:44.165770 kernel: Detected PIPT I-cache on CPU42 Dec 13 15:03:44.165778 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Dec 13 15:03:44.165785 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Dec 13 15:03:44.165795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165802 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Dec 13 15:03:44.165809 kernel: Detected PIPT I-cache on CPU43 Dec 13 15:03:44.165817 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Dec 13 15:03:44.165824 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Dec 13 15:03:44.165831 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165840 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Dec 13 15:03:44.165847 kernel: Detected PIPT I-cache on CPU44 Dec 13 15:03:44.165855 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Dec 13 15:03:44.165862 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Dec 13 15:03:44.165869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165876 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Dec 13 15:03:44.165884 kernel: Detected PIPT I-cache on CPU45 Dec 13 15:03:44.165891 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Dec 13 15:03:44.165898 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Dec 13 15:03:44.165907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165914 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] Dec 13 15:03:44.165921 kernel: Detected PIPT I-cache on CPU46 Dec 13 15:03:44.165929 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Dec 13 15:03:44.165936 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Dec 13 15:03:44.165944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165951 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Dec 13 15:03:44.165958 kernel: Detected PIPT I-cache on CPU47 Dec 13 15:03:44.165965 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Dec 13 15:03:44.165973 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Dec 13 15:03:44.165981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165989 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Dec 13 15:03:44.165996 kernel: Detected PIPT I-cache on CPU48 Dec 13 15:03:44.166003 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Dec 13 15:03:44.166010 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Dec 13 15:03:44.166018 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166025 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Dec 13 15:03:44.166032 kernel: Detected PIPT I-cache on CPU49 Dec 13 15:03:44.166039 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Dec 13 15:03:44.166048 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Dec 13 15:03:44.166055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166063 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Dec 13 15:03:44.166070 kernel: Detected PIPT I-cache on CPU50 Dec 13 15:03:44.166077 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Dec 13 15:03:44.166084 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 Dec 13 15:03:44.166092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166099 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Dec 13 15:03:44.166106 kernel: Detected PIPT I-cache on CPU51 Dec 13 15:03:44.166113 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Dec 13 15:03:44.166122 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Dec 13 15:03:44.166129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166136 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Dec 13 15:03:44.166144 kernel: Detected PIPT I-cache on CPU52 Dec 13 15:03:44.166151 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Dec 13 15:03:44.166158 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Dec 13 15:03:44.166166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166173 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Dec 13 15:03:44.166180 kernel: Detected PIPT I-cache on CPU53 Dec 13 15:03:44.166190 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Dec 13 15:03:44.166197 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Dec 13 15:03:44.166205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166212 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] Dec 13 15:03:44.166219 kernel: Detected PIPT I-cache on CPU54 Dec 13 15:03:44.166226 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Dec 13 15:03:44.166234 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Dec 13 15:03:44.166241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166248 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Dec 13 15:03:44.166255 kernel: Detected PIPT I-cache on CPU55 Dec 13 15:03:44.166264 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Dec 13 15:03:44.166271 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Dec 13 15:03:44.166279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166286 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Dec 13 15:03:44.166293 kernel: Detected PIPT I-cache on CPU56 Dec 13 15:03:44.166300 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Dec 13 15:03:44.166308 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Dec 13 15:03:44.166315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166322 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Dec 13 15:03:44.166331 kernel: Detected PIPT I-cache on CPU57 Dec 13 15:03:44.166338 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Dec 13 15:03:44.166346 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Dec 13 15:03:44.166353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166360 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Dec 13 15:03:44.166367 kernel: Detected PIPT I-cache on CPU58 Dec 13 15:03:44.166375 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Dec 13 15:03:44.166382 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Dec 13 15:03:44.166389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166397 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Dec 13 15:03:44.166405 kernel: Detected PIPT I-cache on CPU59 Dec 13 15:03:44.166413 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Dec 13 15:03:44.166420 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 Dec 13 15:03:44.166427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166434 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Dec 13 15:03:44.166442 kernel: Detected PIPT I-cache on CPU60 Dec 13 15:03:44.166449 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Dec 13 15:03:44.166456 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Dec 13 15:03:44.166464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166472 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Dec 13 15:03:44.166479 kernel: Detected PIPT I-cache on CPU61 Dec 13 15:03:44.166487 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Dec 13 15:03:44.166494 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Dec 13 15:03:44.166502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166509 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] Dec 13 15:03:44.166516 kernel: Detected PIPT I-cache on CPU62 Dec 13 15:03:44.166523 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Dec 13 15:03:44.166530 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Dec 13 15:03:44.166539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166546 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Dec 13 15:03:44.166554 kernel: Detected PIPT I-cache on CPU63 Dec 13 15:03:44.166561 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Dec 13 15:03:44.166568 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Dec 13 15:03:44.166575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166583 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Dec 13 15:03:44.166590 kernel: Detected PIPT I-cache on CPU64 Dec 13 15:03:44.166597 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Dec 13 15:03:44.166604 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Dec 13 15:03:44.166613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166620 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Dec 13 15:03:44.166628 kernel: Detected PIPT I-cache on CPU65 Dec 13 15:03:44.166635 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Dec 13 15:03:44.166643 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Dec 13 15:03:44.166650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166657 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Dec 13 15:03:44.166664 kernel: Detected PIPT I-cache on CPU66 Dec 13 15:03:44.166671 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Dec 13 15:03:44.166680 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Dec 13 15:03:44.166688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166695 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Dec 13 15:03:44.166702 kernel: Detected PIPT I-cache on CPU67 Dec 13 15:03:44.166710 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Dec 13 15:03:44.166717 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Dec 13 15:03:44.166724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166732 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Dec 13 15:03:44.166739 kernel: Detected PIPT I-cache on CPU68 Dec 13 15:03:44.166746 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Dec 13 15:03:44.166755 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 Dec 13 15:03:44.166762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166769 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Dec 13 15:03:44.166776 kernel: Detected PIPT I-cache on CPU69 Dec 13 15:03:44.166784 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Dec 13 15:03:44.166793 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Dec 13 15:03:44.166800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166808 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] Dec 13 15:03:44.166815 kernel: Detected PIPT I-cache on CPU70 Dec 13 15:03:44.166824 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Dec 13 15:03:44.166831 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Dec 13 15:03:44.166838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166846 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Dec 13 15:03:44.166853 kernel: Detected PIPT I-cache on CPU71 Dec 13 15:03:44.166860 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Dec 13 15:03:44.166867 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Dec 13 15:03:44.166875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166882 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Dec 13 15:03:44.166889 kernel: Detected PIPT I-cache on CPU72 Dec 13 15:03:44.166898 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Dec 13 15:03:44.166905 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Dec 13 15:03:44.166913 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166920 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Dec 13 15:03:44.166927 kernel: Detected PIPT I-cache on CPU73 Dec 13 15:03:44.166934 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Dec 13 15:03:44.166942 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Dec 13 15:03:44.166949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166956 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Dec 13 15:03:44.166965 kernel: Detected PIPT I-cache on CPU74 Dec 13 15:03:44.166972 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Dec 13 15:03:44.166979 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Dec 13 15:03:44.166987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166994 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Dec 13 15:03:44.167001 kernel: Detected PIPT I-cache on CPU75 Dec 13 15:03:44.167008 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Dec 13 15:03:44.167016 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Dec 13 15:03:44.167023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167030 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Dec 13 15:03:44.167039 kernel: Detected PIPT I-cache on CPU76 Dec 13 15:03:44.167046 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Dec 13 15:03:44.167053 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Dec 13 15:03:44.167061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167068 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Dec 13 15:03:44.167075 kernel: Detected PIPT I-cache on CPU77 Dec 13 15:03:44.167083 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Dec 13 15:03:44.167090 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Dec 13 15:03:44.167097 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167106 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] Dec 13 15:03:44.167113 kernel: Detected PIPT I-cache on CPU78 Dec 13 15:03:44.167120 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Dec 13 15:03:44.167128 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Dec 13 15:03:44.167135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167142 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Dec 13 15:03:44.167149 kernel: Detected PIPT I-cache on CPU79 Dec 13 15:03:44.167156 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Dec 13 15:03:44.167164 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Dec 13 15:03:44.167172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167180 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Dec 13 15:03:44.167187 kernel: smp: Brought up 1 node, 80 CPUs Dec 13 15:03:44.167194 kernel: SMP: Total of 80 processors activated. Dec 13 15:03:44.167201 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 15:03:44.167209 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 15:03:44.167216 kernel: CPU features: detected: Common not Private translations Dec 13 15:03:44.167223 kernel: CPU features: detected: CRC32 instructions Dec 13 15:03:44.167231 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 15:03:44.167238 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 15:03:44.167246 kernel: CPU features: detected: LSE atomic instructions Dec 13 15:03:44.167254 kernel: CPU features: detected: Privileged Access Never Dec 13 15:03:44.167261 kernel: CPU features: detected: RAS Extension Support Dec 13 15:03:44.167268 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 15:03:44.167276 kernel: CPU: All CPU(s) started at EL2 Dec 13 15:03:44.167283 kernel: alternatives: applying system-wide alternatives Dec 13 15:03:44.167290 kernel: devtmpfs: initialized Dec 13 15:03:44.167297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 15:03:44.167305 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.167314 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 15:03:44.167321 kernel: SMBIOS 3.4.0 present. Dec 13 15:03:44.167328 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Dec 13 15:03:44.167336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 15:03:44.167343 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Dec 13 15:03:44.167350 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 15:03:44.167358 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 15:03:44.167365 kernel: audit: initializing netlink subsys (disabled) Dec 13 15:03:44.167372 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Dec 13 15:03:44.167381 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 15:03:44.167389 kernel: cpuidle: using governor menu Dec 13 15:03:44.167396 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 15:03:44.167403 kernel: ASID allocator initialised with 32768 entries Dec 13 15:03:44.167410 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 15:03:44.167418 kernel: Serial: AMBA PL011 UART driver Dec 13 15:03:44.167425 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 15:03:44.167432 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 15:03:44.167439 kernel: Modules: 508880 pages in range for PLT usage Dec 13 15:03:44.167448 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 15:03:44.167455 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 15:03:44.167463 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 15:03:44.167470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 15:03:44.167477 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 15:03:44.167485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 15:03:44.167492 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 15:03:44.167500 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 15:03:44.167507 kernel: ACPI: Added _OSI(Module Device) Dec 13 15:03:44.167515 kernel: ACPI: Added _OSI(Processor Device) Dec 13 15:03:44.167523 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 15:03:44.167530 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 15:03:44.167537 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Dec 13 15:03:44.167545 kernel: ACPI: Interpreter enabled Dec 13 15:03:44.167552 kernel: ACPI: Using GIC for interrupt routing Dec 13 15:03:44.167560 kernel: ACPI: MCFG table detected, 8 entries Dec 13 15:03:44.167567 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167574 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167583 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167590 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167598 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167605 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167613 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167620 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167627 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Dec 13 15:03:44.167635 kernel: printk: console [ttyAMA0] enabled Dec 13 15:03:44.167642 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Dec 13 15:03:44.167651 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Dec 13 15:03:44.167781 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.167859 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.167924 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.167985 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.168047 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.168108 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] Dec 13 15:03:44.168121 kernel: PCI host bridge to bus 000d:00 Dec 13 15:03:44.168190 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Dec 13 15:03:44.168249 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Dec 13 15:03:44.168304 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Dec 13 15:03:44.168383 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.168457 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.168524 kernel: pci 000d:00:01.0: enabling Extended Tags Dec 13 15:03:44.168587 kernel: pci 000d:00:01.0: supports D1 D2 Dec 13 15:03:44.168650 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.168722 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.168786 kernel: pci 000d:00:02.0: supports D1 D2 Dec 13 15:03:44.168912 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.168985 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.169051 kernel: pci 000d:00:03.0: supports D1 D2 Dec 13 15:03:44.169112 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.169182 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.169245 kernel: pci 000d:00:04.0: supports D1 D2 Dec 13 15:03:44.169305 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.169314 kernel: acpiphp: Slot [1] registered Dec 13 15:03:44.169323 kernel: acpiphp: Slot [2] registered Dec 13 15:03:44.169331 kernel: acpiphp: Slot [3] registered Dec 13 15:03:44.169338 kernel: acpiphp: Slot [4] registered Dec 13 15:03:44.169392 kernel: pci_bus 000d:00: on NUMA node 0 Dec 13 15:03:44.169453 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.169514 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.169575 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.169636 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.169698 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.169764 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.169831 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.169897 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.169962 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.170027 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.170090 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.170154 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.170218 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Dec 13 15:03:44.170280 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.170343 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] Dec 13 15:03:44.170405 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.170468 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Dec 13 15:03:44.170529 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.170594 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Dec 13 15:03:44.170656 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.170718 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.170779 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.170846 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.170908 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.170971 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171033 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171098 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171161 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171222 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171284 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171345 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171407 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171469 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171532 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171596 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171660 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171722 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.171785 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Dec 13 15:03:44.171850 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.171913 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.171976 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Dec 13 15:03:44.172041 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.172105 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.172167 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Dec 13 15:03:44.172230 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.172291 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.172355 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Dec 13 15:03:44.172419 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.172478 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Dec 13 15:03:44.172533 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Dec 13 15:03:44.172603 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Dec 13 15:03:44.172662 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.172729 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] Dec 13 15:03:44.172790 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.172868 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Dec 13 15:03:44.172926 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.172993 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Dec 13 15:03:44.173050 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.173060 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Dec 13 15:03:44.173129 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.173193 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.173253 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.173313 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.173372 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.173434 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Dec 13 15:03:44.173444 kernel: PCI host bridge to bus 0000:00 Dec 13 15:03:44.173506 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Dec 13 15:03:44.173564 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 15:03:44.173618 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 15:03:44.173689 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.173759 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.173827 kernel: pci 0000:00:01.0: enabling Extended Tags Dec 13 15:03:44.173890 kernel: pci 0000:00:01.0: supports D1 D2 Dec 13 15:03:44.173955 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174026 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.174089 kernel: pci 0000:00:02.0: supports D1 D2 Dec 13 15:03:44.174153 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174223 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.174287 kernel: pci 0000:00:03.0: supports D1 D2 Dec 13 15:03:44.174349 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174421 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.174483 kernel: pci 0000:00:04.0: supports D1 D2 Dec 13 15:03:44.174546 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174556 kernel: acpiphp: Slot [1-1] registered Dec 13 15:03:44.174563 kernel: acpiphp: Slot [2-1] registered Dec 13 15:03:44.174571 kernel: acpiphp: Slot [3-1] registered Dec 13 15:03:44.174578 kernel: acpiphp: Slot [4-1] registered Dec 13 15:03:44.174631 kernel: pci_bus 0000:00: on NUMA node 0 Dec 13 15:03:44.174696 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.174758 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.174825 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.174887 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.174949 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.175010 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.175073 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.175137 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.175199 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.175261 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.175323 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.175385 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.175447 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Dec 13 15:03:44.175510 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 15:03:44.175574 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Dec 13 15:03:44.175637 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.175698 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Dec 13 15:03:44.175761 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.175828 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Dec 13 15:03:44.175890 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.175951 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176014 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176080 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176141 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176204 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176266 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176329 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176390 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176452 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176513 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176577 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176638 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176700 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176762 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176827 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176889 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176951 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.177013 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Dec 13 15:03:44.177078 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
Dec 13 15:03:44.177142 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.177204 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Dec 13 15:03:44.177267 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.177329 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.177394 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Dec 13 15:03:44.177455 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.177517 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.177579 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Dec 13 15:03:44.177642 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.177702 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Dec 13 15:03:44.177758 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 15:03:44.177828 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Dec 13 15:03:44.177887 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 15:03:44.177952 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Dec 13 15:03:44.178011 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.178087 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Dec 13 15:03:44.178147 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.178213 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Dec 13 15:03:44.178270 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.178280 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Dec 13 15:03:44.178348 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.178408 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.178471 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.178531 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.178591 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.178649 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Dec 13 15:03:44.178659 kernel: PCI host bridge to bus 0005:00 Dec 13 15:03:44.178722 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Dec 13 15:03:44.178780 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 15:03:44.178839 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Dec 13 15:03:44.178908 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.178978 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.179042 kernel: pci 0005:00:01.0: supports D1 D2 Dec 13 15:03:44.179104 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179175 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.179240 kernel: pci 0005:00:03.0: supports D1 D2 Dec 13 15:03:44.179303 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179371 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.179434 
kernel: pci 0005:00:05.0: supports D1 D2 Dec 13 15:03:44.179496 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179566 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 15:03:44.179628 kernel: pci 0005:00:07.0: supports D1 D2 Dec 13 15:03:44.179693 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179702 kernel: acpiphp: Slot [1-2] registered Dec 13 15:03:44.179710 kernel: acpiphp: Slot [2-2] registered Dec 13 15:03:44.179778 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Dec 13 15:03:44.179849 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Dec 13 15:03:44.179916 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Dec 13 15:03:44.179990 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Dec 13 15:03:44.180059 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Dec 13 15:03:44.180123 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Dec 13 15:03:44.180181 kernel: pci_bus 0005:00: on NUMA node 0 Dec 13 15:03:44.180244 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.180310 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.180373 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.180437 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.180500 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.180565 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.180628 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.180692 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.180755 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 15:03:44.180822 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.180888 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.180952 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Dec 13 15:03:44.181015 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Dec 13 15:03:44.181078 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.181142 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Dec 13 15:03:44.181219 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.181284 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Dec 13 15:03:44.181346 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.181411 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Dec 13 15:03:44.181483 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.181546 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
Dec 13 15:03:44.181609 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181671 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181733 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181805 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181868 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181932 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181995 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182058 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182120 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182181 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182244 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182305 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182367 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182429 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182494 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182555 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.182618 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Dec 13 15:03:44.182681 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.182743 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.182810 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Dec 13 15:03:44.182872 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.182941 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Dec 13 15:03:44.183006 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Dec 13 15:03:44.183070 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Dec 13 15:03:44.183133 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Dec 13 15:03:44.183196 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.183262 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Dec 13 15:03:44.183327 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Dec 13 15:03:44.183393 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Dec 13 15:03:44.183456 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Dec 13 15:03:44.183519 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.183577 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Dec 13 15:03:44.183632 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 15:03:44.183700 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Dec 13 15:03:44.183761 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.183850 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Dec 13 15:03:44.183911 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.183977 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Dec 13 15:03:44.184036 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.184101 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Dec 13 15:03:44.184162 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.184172 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Dec 13 15:03:44.184238 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.184300 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.184361 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.184420 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.184483 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.184543 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Dec 13 15:03:44.184552 kernel: PCI host bridge to bus 0003:00 Dec 13 15:03:44.184616 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Dec 13 15:03:44.184672 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Dec 13 15:03:44.184731 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Dec 13 15:03:44.184805 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.184880 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.184944 kernel: pci 0003:00:01.0: supports D1 D2 Dec 13 15:03:44.185007 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185078 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.185141 kernel: pci 0003:00:03.0: supports D1 D2 Dec 13 15:03:44.185203 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185274 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.185339 kernel: pci 0003:00:05.0: supports D1 D2 Dec 13 15:03:44.185402 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185412 kernel: acpiphp: Slot [1-3] registered Dec 13 15:03:44.185419 kernel: acpiphp: Slot [2-3] registered Dec 13 15:03:44.185490 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Dec 13 15:03:44.185555 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Dec 13 15:03:44.185619 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Dec 13 15:03:44.185686 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Dec 13 15:03:44.185752 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.185857 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Dec 13 15:03:44.185926 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 15:03:44.185994 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Dec 13 15:03:44.186059 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 15:03:44.186123 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Dec 13 15:03:44.186193 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Dec 13 15:03:44.186260 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Dec 13 15:03:44.186323 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Dec 13 15:03:44.186386 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Dec 13 15:03:44.186448 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.186510 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Dec 13 15:03:44.186574 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 15:03:44.186637 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Dec 13 15:03:44.186701 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 15:03:44.186757 kernel: pci_bus 0003:00: on NUMA node 0 Dec 13 15:03:44.186822 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.186884 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.186945 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.187008 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.187070 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.187135 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.187200 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Dec 13 15:03:44.187263 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Dec 13 15:03:44.187336 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Dec 13 15:03:44.187401 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.187464 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Dec 13 15:03:44.187527 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.187589 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Dec 13 15:03:44.187653 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.187716 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.187778 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.187845 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.187907 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.187970 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188032 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188098 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188160 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188223 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188285 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188348 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188410 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 
15:03:44.188472 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.188533 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 13 15:03:44.188598 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.188660 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.188722 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Dec 13 15:03:44.188785 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.188854 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Dec 13 15:03:44.188920 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Dec 13 15:03:44.188987 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Dec 13 15:03:44.189053 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Dec 13 15:03:44.189117 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Dec 13 15:03:44.189181 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Dec 13 15:03:44.189246 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Dec 13 15:03:44.189310 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Dec 13 15:03:44.189375 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189441 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189509 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189573 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189638 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189701 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189765 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189835 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189898 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Dec 13 15:03:44.189963 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Dec 13 15:03:44.190025 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.190084 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 15:03:44.190140 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Dec 13 15:03:44.190197 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Dec 13 15:03:44.190273 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Dec 13 15:03:44.190335 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.190401 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Dec 13 15:03:44.190460 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.190525 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Dec 13 15:03:44.190585 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.190595 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Dec 13 15:03:44.190664 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.190728 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 
15:03:44.190789 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.190853 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.190914 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.190974 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Dec 13 15:03:44.190987 kernel: PCI host bridge to bus 000c:00 Dec 13 15:03:44.191052 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Dec 13 15:03:44.191110 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Dec 13 15:03:44.191166 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Dec 13 15:03:44.191236 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.191308 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.191371 kernel: pci 000c:00:01.0: enabling Extended Tags Dec 13 15:03:44.191434 kernel: pci 000c:00:01.0: supports D1 D2 Dec 13 15:03:44.191496 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191570 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.191633 kernel: pci 000c:00:02.0: supports D1 D2 Dec 13 15:03:44.191696 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191768 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.191837 kernel: pci 000c:00:03.0: supports D1 D2 Dec 13 15:03:44.191901 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191971 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.192038 kernel: pci 000c:00:04.0: supports D1 D2 Dec 13 15:03:44.192100 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.192110 kernel: acpiphp: Slot [1-4] registered Dec 13 15:03:44.192117 kernel: acpiphp: Slot [2-4] registered Dec 13 15:03:44.192125 kernel: acpiphp: Slot [3-2] registered Dec 13 15:03:44.192133 kernel: acpiphp: Slot [4-2] registered Dec 13 15:03:44.192188 kernel: pci_bus 000c:00: on NUMA node 0 Dec 13 15:03:44.192250 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.192315 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.192377 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.192442 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.192505 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.192568 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.192631 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.192693 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.192757 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.193040 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.193110 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.193172 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.193234 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Dec 13 15:03:44.193295 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.193356 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Dec 13 15:03:44.193421 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.193483 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Dec 13 15:03:44.193545 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.193606 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Dec 13 15:03:44.193667 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 15:03:44.193727 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.193788 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.193857 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.193918 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.193979 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194039 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194100 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194162 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194222 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194283 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194344 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194407 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194470 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194531 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194592 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194653 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194713 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.194775 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Dec 13 15:03:44.194839 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.194903 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.194964 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Dec 13 15:03:44.195026 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.195087 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.195148 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Dec 13 15:03:44.195210 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.195273 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.195334 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Dec 13 15:03:44.195395 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 
15:03:44.195452 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Dec 13 15:03:44.195507 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Dec 13 15:03:44.195574 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Dec 13 15:03:44.195634 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.195706 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Dec 13 15:03:44.195764 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.195832 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Dec 13 15:03:44.195890 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.195955 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Dec 13 15:03:44.196011 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 15:03:44.196024 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Dec 13 15:03:44.196091 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.196151 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.196210 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.196269 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.196328 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.196387 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Dec 13 15:03:44.196399 kernel: PCI host bridge to bus 0002:00 Dec 13 15:03:44.196462 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Dec 13 15:03:44.196517 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Dec 13 15:03:44.196571 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Dec 13 15:03:44.196640 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.196710 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.196775 kernel: pci 0002:00:01.0: supports D1 D2 Dec 13 15:03:44.196840 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.196908 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.196970 kernel: pci 0002:00:03.0: supports D1 D2 Dec 13 15:03:44.197031 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197100 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.197162 kernel: pci 0002:00:05.0: supports D1 D2 Dec 13 15:03:44.197226 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197294 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 15:03:44.197357 kernel: pci 0002:00:07.0: supports D1 D2 Dec 13 15:03:44.197418 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197428 kernel: acpiphp: Slot [1-5] registered Dec 13 15:03:44.197436 kernel: acpiphp: Slot [2-5] registered Dec 13 15:03:44.197444 kernel: acpiphp: Slot [3-3] registered Dec 13 15:03:44.197452 kernel: acpiphp: Slot [4-3] registered Dec 13 15:03:44.197508 kernel: pci_bus 0002:00: on NUMA node 0 Dec 13 15:03:44.197569 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.197630 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.197691 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.197756 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.197822 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.197884 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.197948 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.198010 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.198070 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.198133 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.198195 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.198258 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.198320 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Dec 13 15:03:44.198381 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.198443 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.198504 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.198565 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.198626 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.198690 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.198752 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.198818 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.198879 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.198941 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199001 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199063 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199127 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199190 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199252 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199314 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199376 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199436 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199498 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199559 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199621 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199681 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] Dec 13 15:03:44.199747 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199869 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.199941 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Dec 13 15:03:44.200004 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.200066 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.200126 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.200187 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.200252 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Dec 13 15:03:44.200314 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.200375 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.200437 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Dec 13 15:03:44.200499 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.200561 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.200620 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Dec 13 15:03:44.200675 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Dec 13 15:03:44.200742 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Dec 13 15:03:44.200803 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.200884 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.200943 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.201020 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.201078 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.201143 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.201201 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.201211 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Dec 13 15:03:44.201277 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.201338 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.201400 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.201458 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.201518 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.201576 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Dec 13 15:03:44.201586 kernel: PCI host bridge to bus 0001:00 Dec 13 15:03:44.201650 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Dec 13 15:03:44.201707 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Dec 13 15:03:44.201761 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Dec 13 15:03:44.201835 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.201906 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.201968 kernel: pci 0001:00:01.0: enabling Extended Tags Dec 13 15:03:44.202030 kernel: pci 
0001:00:01.0: supports D1 D2 Dec 13 15:03:44.202093 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202164 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.202226 kernel: pci 0001:00:02.0: supports D1 D2 Dec 13 15:03:44.202288 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202356 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.202418 kernel: pci 0001:00:03.0: supports D1 D2 Dec 13 15:03:44.202479 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202550 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.202613 kernel: pci 0001:00:04.0: supports D1 D2 Dec 13 15:03:44.202674 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202684 kernel: acpiphp: Slot [1-6] registered Dec 13 15:03:44.202753 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Dec 13 15:03:44.202825 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.202892 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Dec 13 15:03:44.202958 kernel: pci 0001:01:00.0: PME# supported from D3cold Dec 13 15:03:44.203022 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:44.203096 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Dec 13 15:03:44.203159 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Dec 13 15:03:44.203224 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Dec 13 15:03:44.203287 kernel: pci 0001:01:00.1: PME# supported from D3cold Dec 13 15:03:44.203297 kernel: acpiphp: Slot [2-6] registered Dec 13 15:03:44.203305 kernel: acpiphp: Slot [3-4] registered Dec 13 15:03:44.203314 kernel: acpiphp: Slot [4-4] registered Dec 13 15:03:44.203370 kernel: pci_bus 0001:00: on NUMA node 0 Dec 13 15:03:44.203432 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.203496 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.203557 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.203620 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.203682 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.203743 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.203930 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.204000 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.204063 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.204124 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.204186 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.204247 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Dec 13 15:03:44.204313 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] Dec 13 15:03:44.204374 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.204435 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Dec 13 15:03:44.204496 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.204558 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Dec 13 15:03:44.204618 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.204680 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204740 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.204809 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204870 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.204932 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204993 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205055 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205115 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205176 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205237 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205301 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205362 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205422 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205483 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205545 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205606 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205670 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Dec 13 15:03:44.205735 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.205801 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Dec 13 15:03:44.205868 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Dec 13 15:03:44.205929 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.205993 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Dec 13 15:03:44.206054 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.206115 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.206177 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Dec 13 15:03:44.206240 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.206303 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.206365 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Dec 13 15:03:44.206426 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.206488 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.206549 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Dec 13 15:03:44.206612 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.206669 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] Dec 13 15:03:44.206723 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Dec 13 15:03:44.206803 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Dec 13 15:03:44.206862 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.206927 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Dec 13 15:03:44.206987 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.207054 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Dec 13 15:03:44.207111 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.207176 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Dec 13 15:03:44.207233 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.207243 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Dec 13 15:03:44.207309 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.207372 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.207431 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.207490 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.207548 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.207607 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Dec 13 15:03:44.207617 kernel: PCI host bridge to bus 0004:00 Dec 13 15:03:44.207678 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Dec 13 15:03:44.207735 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Dec 13 15:03:44.207789 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Dec 13 15:03:44.207862 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.207931 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.207994 kernel: pci 0004:00:01.0: supports D1 D2 Dec 13 15:03:44.208056 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208124 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.208190 kernel: pci 0004:00:03.0: supports D1 D2 Dec 13 15:03:44.208250 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208319 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.208380 kernel: pci 0004:00:05.0: supports D1 D2 Dec 13 15:03:44.208442 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208513 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 15:03:44.208580 kernel: pci 0004:01:00.0: enabling Extended Tags Dec 13 15:03:44.208643 kernel: pci 0004:01:00.0: supports D1 D2 Dec 13 15:03:44.208706 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 15:03:44.208783 kernel: pci_bus 0004:02: extended config space not accessible Dec 13 15:03:44.208861 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 15:03:44.208928 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Dec 13 15:03:44.208993 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Dec 13 15:03:44.209061 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Dec 13 
15:03:44.209127 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Dec 13 15:03:44.209192 kernel: pci 0004:02:00.0: supports D1 D2 Dec 13 15:03:44.209259 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 15:03:44.209330 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Dec 13 15:03:44.209394 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Dec 13 15:03:44.209457 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.209513 kernel: pci_bus 0004:00: on NUMA node 0 Dec 13 15:03:44.209577 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Dec 13 15:03:44.209640 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.209701 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.209762 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 15:03:44.209829 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.209891 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.209952 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.210017 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.210079 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.210140 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Dec 13 15:03:44.210202 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.210262 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Dec 13 15:03:44.210323 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.210384 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210447 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210508 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210569 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210630 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210692 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210756 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210822 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210885 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210946 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211010 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.211070 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211135 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211198 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.211262 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211328 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Dec 13 
15:03:44.211395 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Dec 13 15:03:44.211460 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Dec 13 15:03:44.211527 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Dec 13 15:03:44.211591 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Dec 13 15:03:44.211653 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211716 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Dec 13 15:03:44.211776 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211958 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.212029 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Dec 13 15:03:44.212092 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.212157 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Dec 13 15:03:44.212219 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.212281 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Dec 13 15:03:44.212342 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Dec 13 15:03:44.212404 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.212461 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 15:03:44.212520 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Dec 13 15:03:44.212575 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Dec 13 15:03:44.212643 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.212701 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.212762 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.212832 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Dec 13 15:03:44.212892 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.212957 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Dec 13 15:03:44.213014 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.213024 kernel: iommu: Default domain type: Translated Dec 13 15:03:44.213032 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 15:03:44.213040 kernel: efivars: Registered efivars operations Dec 13 15:03:44.213105 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Dec 13 15:03:44.213171 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Dec 13 15:03:44.213239 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Dec 13 15:03:44.213249 kernel: vgaarb: loaded Dec 13 15:03:44.213257 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 15:03:44.213265 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 15:03:44.213273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 15:03:44.213281 kernel: pnp: PnP ACPI init Dec 13 15:03:44.213347 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Dec 13 15:03:44.213406 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Dec 13 15:03:44.213463 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Dec 13 15:03:44.213518 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved Dec 13 15:03:44.213574 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Dec 13 15:03:44.213629 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Dec 13 15:03:44.213686 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Dec 13 15:03:44.213741 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Dec 13 15:03:44.213754 kernel: pnp: PnP ACPI: found 1 devices Dec 13 15:03:44.213762 kernel: NET: Registered PF_INET protocol family Dec 13 15:03:44.213769 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213777 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 15:03:44.213785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 15:03:44.213796 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 15:03:44.213804 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213814 kernel: TCP: Hash tables configured (established 524288 bind 65536) Dec 13 15:03:44.213822 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213831 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 15:03:44.213904 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Dec 13 15:03:44.213915 kernel: kvm [1]: IPA Size Limit: 48 bits Dec 13 15:03:44.213923 kernel: kvm [1]: GICv3: no GICV resource entry Dec 13 15:03:44.213931 kernel: kvm [1]: disabling GICv2 emulation Dec 13 15:03:44.213939 kernel: kvm [1]: GIC system register CPU interface enabled Dec 13 15:03:44.213946 kernel: kvm [1]: vgic interrupt IRQ9 Dec 13 15:03:44.213954 kernel: kvm [1]: VHE mode initialized successfully Dec 13 15:03:44.213964 kernel: Initialise system trusted keyrings Dec 13 15:03:44.213972 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Dec 13 15:03:44.213979 kernel: Key type asymmetric registered Dec 13 15:03:44.213987 kernel: Asymmetric key parser 'x509' registered Dec 13 15:03:44.213994 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 15:03:44.214002 kernel: io scheduler mq-deadline registered Dec 13 15:03:44.214010 kernel: io scheduler kyber registered Dec 13 15:03:44.214017 kernel: io scheduler bfq registered Dec 13 15:03:44.214025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 15:03:44.214034 kernel: ACPI: button: Power Button [PWRB] Dec 13 15:03:44.214042 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Dec 13 15:03:44.214050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 15:03:44.214120 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Dec 13 15:03:44.214180 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214238 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.214294 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.214354 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Dec 13 15:03:44.214410 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Dec 13 15:03:44.214475 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Dec 13 15:03:44.214532 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214589 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.214646 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.214703 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Dec 13 15:03:44.214762 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Dec 13 15:03:44.214831 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Dec 13 15:03:44.214889 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214946 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215002 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215060 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215120 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Dec 13 15:03:44.215183 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Dec 13 15:03:44.215241 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.215297 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215354 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215411 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215468 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Dec 13 15:03:44.215542 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Dec 13 15:03:44.215600 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.215657 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215714 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215770 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215831 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Dec 13 15:03:44.215899 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Dec 13 15:03:44.215957 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216014 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216071 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216128 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Dec 13 
15:03:44.216184 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Dec 13 15:03:44.216250 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Dec 13 15:03:44.216309 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216369 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216427 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216485 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Dec 13 15:03:44.216541 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Dec 13 15:03:44.216607 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Dec 13 15:03:44.216666 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216723 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216780 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216841 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Dec 13 15:03:44.216898 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Dec 13 15:03:44.216909 kernel: thunder_xcv, ver 1.0 Dec 13 15:03:44.216917 kernel: thunder_bgx, ver 1.0 Dec 13 15:03:44.216926 kernel: nicpf, ver 1.0 Dec 13 15:03:44.216934 kernel: nicvf, ver 1.0 Dec 13 15:03:44.216999 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 15:03:44.217058 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T15:03:42 UTC (1734102222) Dec 13 15:03:44.217068 kernel: efifb: probing for efifb Dec 13 15:03:44.217076 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Dec 13 15:03:44.217084 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 13 15:03:44.217092 kernel: efifb: scrolling: redraw Dec 13 15:03:44.217101 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 15:03:44.217109 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 15:03:44.217117 kernel: fb0: EFI VGA frame buffer device Dec 13 15:03:44.217124 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Dec 13 15:03:44.217132 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 15:03:44.217140 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 15:03:44.217148 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 15:03:44.217156 kernel: watchdog: Hard watchdog permanently disabled Dec 13 15:03:44.217163 kernel: NET: Registered PF_INET6 protocol family Dec 13 15:03:44.217172 kernel: Segment Routing with IPv6 Dec 13 15:03:44.217180 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 15:03:44.217188 kernel: NET: Registered PF_PACKET protocol family Dec 13 15:03:44.217196 kernel: Key type dns_resolver registered Dec 13 15:03:44.217203 kernel: registered taskstats version 1 Dec 13 15:03:44.217211 kernel: Loading compiled-in X.509 certificates Dec 13 15:03:44.217219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 15:03:44.217226 kernel: Key type .fscrypt registered Dec 13 15:03:44.217234 kernel: Key type fscrypt-provisioning registered Dec 13 15:03:44.217241 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 15:03:44.217251 kernel: ima: Allocated hash algorithm: sha1 Dec 13 15:03:44.217258 kernel: ima: No architecture policies found Dec 13 15:03:44.217266 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 15:03:44.217330 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Dec 13 15:03:44.217394 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217458 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Dec 13 15:03:44.217521 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217584 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Dec 13 15:03:44.217649 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217711 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Dec 13 15:03:44.217773 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217840 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Dec 13 15:03:44.217902 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Dec 13 15:03:44.217964 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Dec 13 15:03:44.218026 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218088 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Dec 13 15:03:44.218151 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218216 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Dec 13 15:03:44.218278 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218341 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Dec 13 15:03:44.218402 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218465 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Dec 13 15:03:44.218527 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218590 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Dec 13 15:03:44.218651 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218717 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Dec 13 15:03:44.218779 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218845 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Dec 13 15:03:44.218906 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Dec 13 15:03:44.218969 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Dec 13 15:03:44.219030 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Dec 13 15:03:44.219094 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Dec 13 15:03:44.219155 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Dec 13 15:03:44.219221 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Dec 13 15:03:44.219284 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219346 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Dec 13 15:03:44.219408 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219470 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Dec 13 15:03:44.219533 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219596 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Dec 13 15:03:44.219658 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219723 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Dec 13 15:03:44.219786 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Dec 13 15:03:44.219851 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Dec 13 15:03:44.219914 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Dec 13 15:03:44.219976 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Dec 13 15:03:44.220039 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Dec 13 15:03:44.220101 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Dec 13 15:03:44.220163 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Dec 13 15:03:44.220225 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Dec 13 15:03:44.220289 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220353 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Dec 13 15:03:44.220414 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220477 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Dec 13 15:03:44.220538 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220600 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Dec 13 15:03:44.220661 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220726 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Dec 13 15:03:44.220790 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Dec 13 15:03:44.220857 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Dec 13 15:03:44.220918 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Dec 13 15:03:44.220981 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Dec 13 15:03:44.221043 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Dec 13 15:03:44.221108 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Dec 13 15:03:44.221118 kernel: clk: Disabling unused clocks Dec 13 15:03:44.221128 kernel: Freeing unused kernel memory: 39936K Dec 13 15:03:44.221136 kernel: Run /init as init process Dec 13 15:03:44.221143 kernel: with arguments: Dec 13 15:03:44.221151 kernel: /init Dec 13 15:03:44.221159 kernel: with environment: Dec 13 15:03:44.221166 kernel: HOME=/ Dec 13 15:03:44.221173 kernel: TERM=linux Dec 13 15:03:44.221181 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 15:03:44.221191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 15:03:44.221202 systemd[1]: Detected architecture arm64. Dec 13 15:03:44.221210 systemd[1]: Running in initrd. Dec 13 15:03:44.221218 systemd[1]: No hostname configured, using default hostname. Dec 13 15:03:44.221226 systemd[1]: Hostname set to . Dec 13 15:03:44.221234 systemd[1]: Initializing machine ID from random generator. Dec 13 15:03:44.221242 systemd[1]: Queued start job for default target initrd.target. Dec 13 15:03:44.221250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 15:03:44.221260 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 15:03:44.221268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 15:03:44.221276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 15:03:44.221284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 15:03:44.221293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Dec 13 15:03:44.221301 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 15:03:44.221310 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 15:03:44.221320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 15:03:44.221328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 15:03:44.221336 systemd[1]: Reached target paths.target - Path Units. Dec 13 15:03:44.221344 systemd[1]: Reached target slices.target - Slice Units. Dec 13 15:03:44.221352 systemd[1]: Reached target swap.target - Swaps. Dec 13 15:03:44.221360 systemd[1]: Reached target timers.target - Timer Units. Dec 13 15:03:44.221368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 15:03:44.221376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 15:03:44.221384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 15:03:44.221394 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 15:03:44.221402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 15:03:44.221410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 15:03:44.221418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 15:03:44.221426 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 15:03:44.221434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 15:03:44.221442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 15:03:44.221450 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 15:03:44.221459 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 15:03:44.221467 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 15:03:44.221475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 15:03:44.221505 systemd-journald[900]: Collecting audit messages is disabled. Dec 13 15:03:44.221526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:44.221534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 15:03:44.221542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 15:03:44.221550 kernel: Bridge firewalling registered Dec 13 15:03:44.221559 systemd-journald[900]: Journal started Dec 13 15:03:44.221582 systemd-journald[900]: Runtime Journal (/run/log/journal/2fe2ef38df1b4399993a3e0b4b812955) is 8.0M, max 4.0G, 3.9G free. Dec 13 15:03:44.181943 systemd-modules-load[902]: Inserted module 'overlay' Dec 13 15:03:44.261858 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 15:03:44.204077 systemd-modules-load[902]: Inserted module 'br_netfilter' Dec 13 15:03:44.267554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 15:03:44.278355 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 15:03:44.289212 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 15:03:44.299927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 15:03:44.331989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:44.338081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 15:03:44.368883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 15:03:44.385548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 15:03:44.402723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:44.419457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 15:03:44.425227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 15:03:44.438200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 15:03:44.466928 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 15:03:44.479971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 15:03:44.490489 dracut-cmdline[940]: dracut-dracut-053 Dec 13 15:03:44.499408 dracut-cmdline[940]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 15:03:44.493577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 15:03:44.507581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 15:03:44.516038 systemd-resolved[947]: Positive Trust Anchors: Dec 13 15:03:44.516047 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:03:44.516077 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 15:03:44.530760 systemd-resolved[947]: Defaulting to hostname 'linux'. Dec 13 15:03:44.544591 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 15:03:44.563850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 15:03:44.667363 kernel: SCSI subsystem initialized Dec 13 15:03:44.678797 kernel: Loading iSCSI transport class v2.0-870. Dec 13 15:03:44.697800 kernel: iscsi: registered transport (tcp) Dec 13 15:03:44.725211 kernel: iscsi: registered transport (qla4xxx) Dec 13 15:03:44.725232 kernel: QLogic iSCSI HBA Driver Dec 13 15:03:44.768775 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 15:03:44.785959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 15:03:44.832214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 15:03:44.832245 kernel: device-mapper: uevent: version 1.0.3 Dec 13 15:03:44.841826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 15:03:44.907802 kernel: raid6: neonx8 gen() 15848 MB/s Dec 13 15:03:44.932801 kernel: raid6: neonx4 gen() 15883 MB/s Dec 13 15:03:44.957800 kernel: raid6: neonx2 gen() 13324 MB/s Dec 13 15:03:44.982801 kernel: raid6: neonx1 gen() 10464 MB/s Dec 13 15:03:45.007800 kernel: raid6: int64x8 gen() 6811 MB/s Dec 13 15:03:45.032801 kernel: raid6: int64x4 gen() 7375 MB/s Dec 13 15:03:45.057801 kernel: raid6: int64x2 gen() 6134 MB/s Dec 13 15:03:45.085722 kernel: raid6: int64x1 gen() 5077 MB/s Dec 13 15:03:45.085743 kernel: raid6: using algorithm neonx4 gen() 15883 MB/s Dec 13 15:03:45.120146 kernel: raid6: .... xor() 12520 MB/s, rmw enabled Dec 13 15:03:45.120167 kernel: raid6: using neon recovery algorithm Dec 13 15:03:45.143052 kernel: xor: measuring software checksum speed Dec 13 15:03:45.143075 kernel: 8regs : 21630 MB/sec Dec 13 15:03:45.150960 kernel: 32regs : 21704 MB/sec Dec 13 15:03:45.158683 kernel: arm64_neon : 28099 MB/sec Dec 13 15:03:45.166305 kernel: xor: using function: arm64_neon (28099 MB/sec) Dec 13 15:03:45.226800 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 15:03:45.236233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 15:03:45.254966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 15:03:45.270266 systemd-udevd[1137]: Using default interface naming scheme 'v255'. Dec 13 15:03:45.273231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 15:03:45.288934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 15:03:45.302793 dracut-pre-trigger[1147]: rd.md=0: removing MD RAID activation Dec 13 15:03:45.328846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 15:03:45.350908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 15:03:45.451174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 15:03:45.480439 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 15:03:45.480460 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 15:03:45.501983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 15:03:45.549009 kernel: ACPI: bus type USB registered Dec 13 15:03:45.549025 kernel: usbcore: registered new interface driver usbfs Dec 13 15:03:45.549035 kernel: usbcore: registered new interface driver hub Dec 13 15:03:45.549048 kernel: PTP clock support registered Dec 13 15:03:45.549058 kernel: usbcore: registered new device driver usb Dec 13 15:03:45.543832 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 15:03:45.707533 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 15:03:45.707546 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Dec 13 15:03:45.707556 kernel: igb 0003:03:00.0: Adding to iommu group 31 Dec 13 15:03:45.764966 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 Dec 13 15:03:46.016521 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Dec 13 15:03:46.016678 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 15:03:46.016770 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Dec 13 15:03:46.016928 kernel: nvme 0005:03:00.0: Adding to iommu group 33 Dec 13 15:03:46.248281 kernel: igb 0003:03:00.0: added PHC on eth0 Dec 13 15:03:46.248453 kernel: nvme 0005:04:00.0: Adding to iommu group 34 Dec 13 15:03:46.248552 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 15:03:46.248643 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Dec 13 15:03:46.700848 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:98 Dec 13 15:03:46.700941 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Dec 13 15:03:46.701017 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Dec 13 15:03:46.701095 kernel: igb 0003:03:00.1: Adding to iommu group 36 Dec 13 15:03:46.701177 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000410 Dec 13 15:03:46.701257 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Dec 13 15:03:46.701332 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 15:03:46.701406 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 15:03:46.701480 kernel: nvme nvme0: pci function 0005:03:00.0 Dec 13 15:03:46.701571 kernel: hub 1-0:1.0: USB hub found Dec 13 15:03:46.701677 kernel: hub 1-0:1.0: 4 ports detected Dec 13 15:03:46.701762 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 15:03:46.701911 kernel: nvme nvme1: pci function 0005:04:00.0 Dec 13 15:03:46.701997 kernel: hub 2-0:1.0: USB hub found Dec 13 15:03:46.702089 kernel: hub 2-0:1.0: 4 ports detected Dec 13 15:03:46.702172 kernel: nvme nvme0: Shutdown timeout set to 8 seconds Dec 13 15:03:46.702246 kernel: nvme nvme1: Shutdown timeout set to 8 seconds Dec 13 15:03:46.702316 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Dec 13 15:03:46.702395 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:46.702471 kernel: nvme nvme0: 32/0/0 default/read/poll queues Dec 13 15:03:46.702542 kernel: igb 0003:03:00.1: added PHC on eth1 Dec 13 15:03:46.702618 kernel: nvme nvme1: 32/0/0 default/read/poll queues Dec 13 15:03:46.702688 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Dec 13 15:03:46.702765 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:99 Dec 13 15:03:46.702847 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Dec 13 15:03:46.702925 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Dec 13 15:03:46.702998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 15:03:46.703008 kernel: GPT:9289727 != 1875385007 Dec 13 15:03:46.703018 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 15:03:46.703027 kernel: GPT:9289727 != 1875385007 Dec 13 15:03:46.703036 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 15:03:46.703045 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703056 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Dec 13 15:03:46.703133 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (1187) Dec 13 15:03:46.703143 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Dec 13 15:03:46.703217 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1188) Dec 13 15:03:46.703227 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Dec 13 15:03:46.703350 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Dec 13 15:03:46.703429 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703442 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703451 kernel: hub 1-3:1.0: USB hub found Dec 13 15:03:46.703545 kernel: hub 1-3:1.0: 4 ports detected Dec 13 15:03:46.703631 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Dec 13 15:03:46.703756 kernel: hub 2-3:1.0: USB hub found Dec 13 15:03:46.703858 kernel: hub 2-3:1.0: 4 ports detected Dec 13 15:03:46.703944 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 15:03:46.704023 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Dec 13 15:03:47.388420 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Dec 13 15:03:47.388540 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:47.388618 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Dec 13 15:03:47.388691 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 15:03:45.707588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 15:03:47.429889 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Dec 13 15:03:47.430007 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Dec 13 15:03:47.430090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:45.713670 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 15:03:45.719616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 15:03:45.725186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 15:03:45.725332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:45.730986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:47.469602 disk-uuid[1291]: Primary Header is updated. Dec 13 15:03:47.469602 disk-uuid[1291]: Secondary Entries is updated. Dec 13 15:03:47.469602 disk-uuid[1291]: Secondary Header is updated. Dec 13 15:03:45.748006 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 15:03:47.490903 disk-uuid[1292]: The operation has completed successfully. Dec 13 15:03:45.763182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:03:45.763329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.794506 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.806110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.812337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 13 15:03:45.818965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:03:45.819055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.824126 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.845064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.854732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.860694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:46.040539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:47.588335 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 15:03:46.268588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Dec 13 15:03:46.338742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Dec 13 15:03:47.608230 sh[1485]: Success Dec 13 15:03:46.346767 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Dec 13 15:03:46.351229 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Dec 13 15:03:46.359802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Dec 13 15:03:46.383898 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 15:03:47.753841 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 15:03:47.753858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:47.753867 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 15:03:47.753877 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 15:03:47.753890 kernel: BTRFS info (device dm-0): using free space tree Dec 13 15:03:47.753900 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 15:03:47.518290 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 15:03:47.518408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 15:03:47.554950 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 15:03:47.615422 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 15:03:47.642277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 15:03:47.892960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:47.892990 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:47.893009 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:47.893027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:47.893046 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:47.893070 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:47.656474 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 15:03:47.760311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 15:03:47.771716 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 15:03:47.783895 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 15:03:47.796187 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 15:03:47.905002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 15:03:47.935924 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 15:03:47.999829 ignition[1562]: Ignition 2.20.0 Dec 13 15:03:47.999837 ignition[1562]: Stage: fetch-offline Dec 13 15:03:47.999881 ignition[1562]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:47.999889 ignition[1562]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:48.011886 unknown[1562]: fetched base config from "system" Dec 13 15:03:48.000139 ignition[1562]: parsed url from cmdline: "" Dec 13 15:03:48.011894 unknown[1562]: fetched user config from "system" Dec 13 15:03:48.000142 ignition[1562]: no config URL provided Dec 13 15:03:48.015012 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 15:03:48.000146 ignition[1562]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:03:48.039476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 15:03:48.000195 ignition[1562]: parsing config with SHA512: 7ee626286532049159df5e4abe234d2d075ff13a0bf65f8c77d677a2042f4e63221973b0bcc408e61a1e6e873b15be32d7182a09401fde59edd54cf5c0ca7252 Dec 13 15:03:48.056912 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 15:03:48.012586 ignition[1562]: fetch-offline: fetch-offline passed Dec 13 15:03:48.082525 systemd-networkd[1709]: lo: Link UP Dec 13 15:03:48.012591 ignition[1562]: POST message to Packet Timeline Dec 13 15:03:48.082530 systemd-networkd[1709]: lo: Gained carrier Dec 13 15:03:48.012597 ignition[1562]: POST Status error: resource requires networking Dec 13 15:03:48.086296 systemd-networkd[1709]: Enumeration completed Dec 13 15:03:48.012669 ignition[1562]: Ignition finished successfully Dec 13 15:03:48.086359 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 15:03:48.128078 ignition[1712]: Ignition 2.20.0 Dec 13 15:03:48.087418 systemd-networkd[1709]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:48.128083 ignition[1712]: Stage: kargs Dec 13 15:03:48.092853 systemd[1]: Reached target network.target - Network. Dec 13 15:03:48.128230 ignition[1712]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:48.102412 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 15:03:48.128239 ignition[1712]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:48.111928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 15:03:48.129147 ignition[1712]: kargs: kargs passed Dec 13 15:03:48.139493 systemd-networkd[1709]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:48.129151 ignition[1712]: POST message to Packet Timeline Dec 13 15:03:48.193300 systemd-networkd[1709]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 15:03:48.129357 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:48.132669 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34072->[::1]:53: read: connection refused Dec 13 15:03:48.332936 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 15:03:48.333312 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41782->[::1]:53: read: connection refused Dec 13 15:03:48.733709 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 15:03:48.734655 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58893->[::1]:53: read: connection refused Dec 13 15:03:48.767800 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Dec 13 15:03:48.770553 systemd-networkd[1709]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:49.374805 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Dec 13 15:03:49.377540 systemd-networkd[1709]: eno1: Link UP Dec 13 15:03:49.377747 systemd-networkd[1709]: eno2: Link UP Dec 13 15:03:49.377885 systemd-networkd[1709]: enP1p1s0f0np0: Link UP Dec 13 15:03:49.378031 systemd-networkd[1709]: enP1p1s0f0np0: Gained carrier Dec 13 15:03:49.388937 systemd-networkd[1709]: enP1p1s0f1np1: Link UP Dec 13 15:03:49.422844 systemd-networkd[1709]: enP1p1s0f0np0: DHCPv4 address 147.28.228.225/31, gateway 147.28.228.224 acquired from 147.28.144.140 Dec 13 15:03:49.535335 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 15:03:49.536235 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38300->[::1]:53: read: connection refused Dec 13 15:03:49.778242 systemd-networkd[1709]: enP1p1s0f1np1: Gained carrier Dec 13 15:03:50.658046 systemd-networkd[1709]: enP1p1s0f0np0: Gained IPv6LL Dec 13 15:03:51.137535 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 15:03:51.138346 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38837->[::1]:53: read: connection refused Dec 13 15:03:51.234167 systemd-networkd[1709]: enP1p1s0f1np1: Gained IPv6LL Dec 13 15:03:54.341222 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 15:03:55.328886 ignition[1712]: GET result: OK Dec 13 15:03:55.599134 ignition[1712]: Ignition finished successfully Dec 13 15:03:55.602896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 15:03:55.616906 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
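The kargs stage's metadata fetches above fail with DNS lookups against [::1]:53 until enP1p1s0f0np0 obtains 147.28.228.225/31 via DHCP, and the attempt timestamps (48.13, 48.33, 48.73, 49.54, 51.14, 54.34) show the retry delay roughly doubling from about 0.2 s. Below is a generic retry-with-exponential-backoff sketch that mirrors this pattern; it is not Ignition's actual fetch code.

# Illustrative retry loop with exponential backoff, mirroring the attempt
# spacing visible in the log (~0.2 s, 0.4 s, 0.8 s, 1.6 s, 3.2 s, ...).
# This is a generic sketch, not Ignition's implementation.
import time
import urllib.request

URL = "https://metadata.packet.net/metadata"

def fetch_with_backoff(url, first_delay=0.2, factor=2.0, max_attempts=10):
    delay = first_delay
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:  # covers DNS failures and connection refusals
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= factor  # back off until the network is actually up
    raise RuntimeError(f"gave up after {max_attempts} attempts")

if __name__ == "__main__":
    body = fetch_with_backoff(URL)
    print(f"fetched {len(body)} bytes")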
Dec 13 15:03:55.628820 ignition[1732]: Ignition 2.20.0 Dec 13 15:03:55.628827 ignition[1732]: Stage: disks Dec 13 15:03:55.628981 ignition[1732]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:55.628990 ignition[1732]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:55.630469 ignition[1732]: disks: disks passed Dec 13 15:03:55.630473 ignition[1732]: POST message to Packet Timeline Dec 13 15:03:55.630491 ignition[1732]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:56.372910 ignition[1732]: GET result: OK Dec 13 15:03:56.637614 ignition[1732]: Ignition finished successfully Dec 13 15:03:56.641841 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 15:03:56.646842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 15:03:56.654599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 15:03:56.662955 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 15:03:56.671806 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 15:03:56.681130 systemd[1]: Reached target basic.target - Basic System. Dec 13 15:03:56.699941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 15:03:56.716164 systemd-fsck[1751]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 15:03:56.719852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 15:03:56.737866 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 15:03:56.806797 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 15:03:56.807222 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 15:03:56.817687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 15:03:56.842864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 15:03:56.851796 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1762) Dec 13 15:03:56.851812 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:56.851823 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:56.851832 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:56.852795 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:56.852806 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:56.948871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 15:03:56.955311 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 15:03:56.966152 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Dec 13 15:03:56.982209 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 15:03:56.982236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 15:03:56.995453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
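systemd-fsck reports "ROOT: clean, 14/553520 files, 52654/553472 blocks" before /sysroot is mounted, i.e. roughly 0.003 % of inodes and 9.5 % of blocks in use. A small sketch that parses that summary format and computes the utilization figures:

# Parse an e2fsck/systemd-fsck summary line of the form
#   "LABEL: clean, USED/TOTAL files, USED/TOTAL blocks"
# and report inode and block utilization.
import re

SUMMARY = "ROOT: clean, 14/553520 files, 52654/553472 blocks"

PATTERN = re.compile(
    r"(?P<label>\S+): clean, "
    r"(?P<files_used>\d+)/(?P<files_total>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks"
)

def utilization(line):
    m = PATTERN.match(line)
    if not m:
        raise ValueError(f"unrecognized fsck summary: {line!r}")
    d = {k: (int(v) if v.isdigit() else v) for k, v in m.groupdict().items()}
    return {
        "label": d["label"],
        "inode_pct": 100.0 * d["files_used"] / d["files_total"],
        "block_pct": 100.0 * d["blocks_used"] / d["blocks_total"],
    }

if __name__ == "__main__":
    print(utilization(SUMMARY))
    # -> roughly 0.0025 % of inodes and 9.5 % of blocks in use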
Dec 13 15:03:57.026145 coreos-metadata[1785]: Dec 13 15:03:57.012 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:03:57.041877 coreos-metadata[1781]: Dec 13 15:03:57.012 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:03:57.009401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 15:03:57.030909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 15:03:57.070192 initrd-setup-root[1807]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 15:03:57.076222 initrd-setup-root[1815]: cut: /sysroot/etc/group: No such file or directory Dec 13 15:03:57.082072 initrd-setup-root[1823]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 15:03:57.087996 initrd-setup-root[1830]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 15:03:57.155535 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 15:03:57.176868 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 15:03:57.188679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 15:03:57.213963 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:57.219822 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 15:03:57.231567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 15:03:57.237001 ignition[1905]: INFO : Ignition 2.20.0 Dec 13 15:03:57.237001 ignition[1905]: INFO : Stage: mount Dec 13 15:03:57.237001 ignition[1905]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:57.237001 ignition[1905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:57.268380 ignition[1905]: INFO : mount: mount passed Dec 13 15:03:57.268380 ignition[1905]: INFO : POST message to Packet Timeline Dec 13 15:03:57.268380 ignition[1905]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:57.781699 coreos-metadata[1781]: Dec 13 15:03:57.781 INFO Fetch successful Dec 13 15:03:57.824130 coreos-metadata[1781]: Dec 13 15:03:57.824 INFO wrote hostname ci-4186.0.0-a-a49a1da819 to /sysroot/etc/hostname Dec 13 15:03:57.827318 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 15:03:58.162566 coreos-metadata[1785]: Dec 13 15:03:58.162 INFO Fetch successful Dec 13 15:03:58.207889 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 15:03:58.208074 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Dec 13 15:03:58.801329 ignition[1905]: INFO : GET result: OK Dec 13 15:03:59.102850 ignition[1905]: INFO : Ignition finished successfully Dec 13 15:03:59.104913 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 15:03:59.125854 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 15:03:59.134239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
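flatcar-metadata-hostname above fetches https://metadata.packet.net/metadata and writes the hostname ci-4186.0.0-a-a49a1da819 to /sysroot/etc/hostname. A hedged sketch of that flow, assuming the endpoint returns a JSON document with a top-level "hostname" field (the schema is not shown in the log; the output path is):

# Sketch of the metadata -> hostname flow logged above.
# Assumption: the metadata endpoint returns JSON containing a top-level
# "hostname" field; the exact schema is not visible in the log.
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"
HOSTNAME_PATH = "/sysroot/etc/hostname"  # path reported in the log

def fetch_hostname(url=METADATA_URL):
    with urllib.request.urlopen(url, timeout=10) as resp:
        meta = json.load(resp)
    return meta["hostname"]

def write_hostname(hostname, path=HOSTNAME_PATH):
    # A hostname file is a single line terminated by a newline.
    with open(path, "w", encoding="ascii") as f:
        f.write(hostname + "\n")

if __name__ == "__main__":
    write_hostname(fetch_hostname())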
Dec 13 15:03:59.162799 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1931) Dec 13 15:03:59.187200 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:59.187224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:59.200321 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:59.223553 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:59.223574 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:59.231709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 15:03:59.264543 ignition[1951]: INFO : Ignition 2.20.0 Dec 13 15:03:59.264543 ignition[1951]: INFO : Stage: files Dec 13 15:03:59.274357 ignition[1951]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:59.274357 ignition[1951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:59.274357 ignition[1951]: DEBUG : files: compiled without relabeling support, skipping Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 15:03:59.274357 ignition[1951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 15:03:59.270273 unknown[1951]: wrote ssh authorized keys file for user: core Dec 13 15:03:59.389427 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 15:03:59.542863 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 15:03:59.734903 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 15:03:59.901187 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: files passed Dec 13 15:03:59.913607 ignition[1951]: INFO : POST message to Packet Timeline Dec 13 15:03:59.913607 ignition[1951]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:04:00.997656 ignition[1951]: INFO : GET result: OK Dec 13 15:04:01.307061 ignition[1951]: INFO : Ignition finished successfully Dec 13 15:04:01.310233 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 15:04:01.332976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 15:04:01.339837 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 15:04:01.351843 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 15:04:01.351922 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
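The files stage above is driven by an Ignition config: it creates or updates the core user and its SSH keys, downloads the helm tarball and the kubernetes sysext image, writes several YAML files plus /etc/flatcar/update.conf, links /etc/extensions/kubernetes.raw to the downloaded image, and enables prepare-helm.service. Below is a rough, partial reconstruction of such a config, emitted as JSON from Python; the spec version, file mode, update.conf body, unit contents and SSH key are assumptions, and only the paths and URLs come from the log.

# Partial, illustrative reconstruction of an Ignition config that would
# produce the files-stage operations seen in the log. Spec version, mode,
# update.conf body and unit body are assumptions; the SSH key is a placeholder.
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw"
                },
            },
            {
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP%3Dstable%0A"},  # assumed body
                "mode": 420,  # 0644 in decimal
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw",
                "hard": False,
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                # Assumed unit body: unpack the helm tarball downloaded above.
                "contents": "[Unit]\nDescription=Unpack helm\n"
                            "[Service]\nType=oneshot\n"
                            "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-arm64.tar.gz\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))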
Dec 13 15:04:01.387196 initrd-setup-root-after-ignition[1992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.387196 initrd-setup-root-after-ignition[1992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.370157 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 15:04:01.433721 initrd-setup-root-after-ignition[1996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.383061 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 15:04:01.402967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 15:04:01.436361 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 15:04:01.436457 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 15:04:01.451096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 15:04:01.462362 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 15:04:01.479173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 15:04:01.491899 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 15:04:01.518223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 15:04:01.543003 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 15:04:01.557210 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 15:04:01.566551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 15:04:01.577954 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 15:04:01.589391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 15:04:01.589507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 15:04:01.600997 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 15:04:01.612131 systemd[1]: Stopped target basic.target - Basic System. Dec 13 15:04:01.623438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 15:04:01.634747 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 15:04:01.645975 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 15:04:01.657121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 15:04:01.668208 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 15:04:01.679338 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 15:04:01.690483 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 15:04:01.707248 systemd[1]: Stopped target swap.target - Swaps. Dec 13 15:04:01.718525 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 15:04:01.718622 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 15:04:01.730027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 15:04:01.741081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 15:04:01.752347 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 15:04:01.755830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 15:04:01.763664 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 15:04:01.763763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 15:04:01.775139 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 15:04:01.775263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 15:04:01.786439 systemd[1]: Stopped target paths.target - Path Units. Dec 13 15:04:01.797630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 15:04:01.800815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 15:04:01.814832 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 15:04:01.826343 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 15:04:01.837829 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 15:04:01.837939 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 15:04:01.937753 ignition[2018]: INFO : Ignition 2.20.0 Dec 13 15:04:01.937753 ignition[2018]: INFO : Stage: umount Dec 13 15:04:01.937753 ignition[2018]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:04:01.937753 ignition[2018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:04:01.937753 ignition[2018]: INFO : umount: umount passed Dec 13 15:04:01.937753 ignition[2018]: INFO : POST message to Packet Timeline Dec 13 15:04:01.937753 ignition[2018]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:04:01.849470 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 15:04:01.849545 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 15:04:01.861177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 15:04:01.861266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 15:04:01.872833 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 15:04:01.872917 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 15:04:01.884429 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 15:04:01.884512 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 15:04:01.907994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 15:04:01.919803 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 15:04:01.919911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 15:04:01.943983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 15:04:01.955623 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 15:04:01.955732 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 15:04:01.967048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 15:04:01.967135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 15:04:01.986306 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 15:04:01.988316 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 15:04:01.988394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 15:04:02.011975 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 15:04:02.130797 ignition[2018]: INFO : GET result: OK Dec 13 15:04:02.012208 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 15:04:02.463587 ignition[2018]: INFO : Ignition finished successfully Dec 13 15:04:02.465774 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 15:04:02.465990 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 15:04:02.474144 systemd[1]: Stopped target network.target - Network. Dec 13 15:04:02.483681 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 15:04:02.483736 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 15:04:02.493885 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 15:04:02.493923 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 15:04:02.503586 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 15:04:02.503617 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 15:04:02.513423 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 15:04:02.513457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 15:04:02.523424 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 15:04:02.523455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 15:04:02.533613 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 15:04:02.539820 systemd-networkd[1709]: enP1p1s0f0np0: DHCPv6 lease lost Dec 13 15:04:02.543464 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 15:04:02.549843 systemd-networkd[1709]: enP1p1s0f1np1: DHCPv6 lease lost Dec 13 15:04:02.553487 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 15:04:02.553581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 15:04:02.565478 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 15:04:02.566107 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 15:04:02.575785 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 15:04:02.575933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 15:04:02.592889 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 15:04:02.598845 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 15:04:02.598915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 15:04:02.609186 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 15:04:02.609235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 15:04:02.619347 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 15:04:02.619397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 15:04:02.629766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 15:04:02.629796 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 15:04:02.640318 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 15:04:02.660153 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 15:04:02.660285 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 15:04:02.669376 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 15:04:02.669549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 15:04:02.678446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 15:04:02.678503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 15:04:02.689193 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 15:04:02.689243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 15:04:02.700370 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 15:04:02.700434 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 15:04:02.711183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 15:04:02.711235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:04:02.740931 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 15:04:02.750223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 15:04:02.750269 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 15:04:02.761530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:04:02.761561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:04:02.773274 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 15:04:02.773363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 15:04:03.318020 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 15:04:03.318176 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 15:04:03.329397 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 15:04:03.351934 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 15:04:03.365551 systemd[1]: Switching root. Dec 13 15:04:03.415618 systemd-journald[900]: Journal stopped
Dec 13 15:03:44.162820 kernel: psci: SMC Calling Convention v1.2 Dec 13 15:03:44.162826 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 15:03:44.162833 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Dec 13 15:03:44.162839 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Dec 13 15:03:44.162845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Dec 13 15:03:44.162852 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Dec 13 15:03:44.162858 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Dec 13 15:03:44.162865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Dec 13 15:03:44.162873 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Dec 13 15:03:44.162879 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Dec 13 15:03:44.162885 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Dec 13 15:03:44.162892 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Dec 13 15:03:44.162898 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Dec 13 15:03:44.162904 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Dec 13 15:03:44.162911 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Dec 13 15:03:44.162917 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Dec 13 15:03:44.162923 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Dec 13 15:03:44.162930 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Dec 13 15:03:44.162936 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Dec 13 15:03:44.162942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Dec 13 15:03:44.162950 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Dec 13 15:03:44.162957 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Dec 13 15:03:44.162963 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Dec 13 15:03:44.162969 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Dec 13 15:03:44.162975 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Dec 13 15:03:44.162982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Dec 13 15:03:44.162988 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Dec 13 15:03:44.162994 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Dec 13 15:03:44.163000 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Dec 13 15:03:44.163007 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Dec 13 15:03:44.163013 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Dec 13 15:03:44.163021 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Dec 13 15:03:44.163027 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Dec 13 15:03:44.163033 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Dec 13 15:03:44.163040 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Dec 13 15:03:44.163046 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Dec 13 15:03:44.163052 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Dec 13 15:03:44.163059 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Dec 13 15:03:44.163065 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Dec 13 15:03:44.163071 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 Dec 13 15:03:44.163078 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Dec 13 15:03:44.163084 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Dec 13 15:03:44.163090 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Dec 13 15:03:44.163098 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Dec 13 15:03:44.163104 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Dec 13 15:03:44.163111 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Dec 13 15:03:44.163117 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Dec 13 15:03:44.163124 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Dec 13 15:03:44.163130 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Dec 13 15:03:44.163136 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Dec 13 15:03:44.163143 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Dec 13 15:03:44.163155 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Dec 13 15:03:44.163162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Dec 13 15:03:44.163171 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Dec 13 15:03:44.163177 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Dec 13 15:03:44.163184 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Dec 13 15:03:44.163191 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Dec 13 15:03:44.163198 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Dec 13 15:03:44.163204 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Dec 13 15:03:44.163212 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Dec 13 15:03:44.163219 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Dec 13 15:03:44.163226 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Dec 13 15:03:44.163232 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Dec 13 15:03:44.163239 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Dec 13 15:03:44.163246 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Dec 13 15:03:44.163253 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Dec 13 15:03:44.163260 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Dec 13 15:03:44.163266 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Dec 13 15:03:44.163273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Dec 13 15:03:44.163280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Dec 13 15:03:44.163286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Dec 13 15:03:44.163294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Dec 13 15:03:44.163301 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Dec 13 15:03:44.163308 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Dec 13 15:03:44.163315 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Dec 13 15:03:44.163321 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Dec 13 15:03:44.163328 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Dec 13 15:03:44.163335 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Dec 13 15:03:44.163341 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Dec 13 15:03:44.163348 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Dec 13 15:03:44.163355 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Dec 13 15:03:44.163362 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 15:03:44.163370 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 15:03:44.163377 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Dec 13 15:03:44.163384 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Dec 13 15:03:44.163390 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Dec 13 15:03:44.163397 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Dec 13 15:03:44.163404 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Dec 13 15:03:44.163411 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Dec 13 15:03:44.163417 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Dec 13 15:03:44.163424 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Dec 13 15:03:44.163431 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Dec 13 15:03:44.163438 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Dec 13 15:03:44.163446 kernel: Detected PIPT I-cache on CPU0 Dec 13 15:03:44.163453 kernel: CPU features: detected: GIC system register CPU interface Dec 13 15:03:44.163460 kernel: CPU features: detected: Virtualization Host Extensions Dec 13 15:03:44.163466 kernel: CPU features: detected: Hardware dirty bit management Dec 13 15:03:44.163473 kernel: CPU features: detected: Spectre-v4 Dec 13 15:03:44.163480 kernel: CPU features: detected: Spectre-BHB Dec 13 15:03:44.163487 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 15:03:44.163494 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 15:03:44.163500 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 15:03:44.163507 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 15:03:44.163514 kernel: alternatives: applying boot alternatives Dec 13 15:03:44.163522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 15:03:44.163530 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 15:03:44.163537 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Dec 13 15:03:44.163544 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Dec 13 15:03:44.163550 kernel: printk: log_buf_len min size: 262144 bytes Dec 13 15:03:44.163557 kernel: printk: log_buf_len: 1048576 bytes Dec 13 15:03:44.163564 kernel: printk: early log buf free: 249864(95%) Dec 13 15:03:44.163571 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Dec 13 15:03:44.163578 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Dec 13 15:03:44.163584 kernel: Fallback order for Node 0: 0 Dec 13 15:03:44.163591 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 Dec 13 15:03:44.163599 kernel: Policy zone: Normal Dec 13 15:03:44.163606 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 15:03:44.163613 kernel: software IO TLB: area num 128. 
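Two of the sizing lines above can be checked directly: the percpu unit size is the sum of the static, reserved and dynamic areas (s86696 + r8192 + d32088 = 126976 = 31 pages of 4 KiB), and the printk buffer grows from its 262144-byte minimum by 4096 bytes for each of the 79 secondary CPUs (323584 bytes total) before being rounded up to the next power of two, 1048576. A short check of that arithmetic:

# Check the percpu and printk sizing reported in the boot log.

def roundup_pow_of_two(n):
    """Smallest power of two >= n (matches the kernel's rounding here)."""
    return 1 << (n - 1).bit_length()

# percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
static, reserved, dynamic, unit = 86696, 8192, 32088, 126976
assert static + reserved + dynamic == unit == 31 * 4096

# printk: 4096 bytes extra per secondary CPU, 80 CPUs, 262144-byte minimum
nr_cpus, per_cpu_extra, min_size = 80, 4096, 262144
cpu_extra = per_cpu_extra * (nr_cpus - 1)
assert cpu_extra == 323584                                   # "total cpu_extra contributions"
assert roundup_pow_of_two(min_size + cpu_extra) == 1048576   # "log_buf_len"

print("percpu unit size:", unit, "bytes;",
      "log_buf_len:", roundup_pow_of_two(min_size + cpu_extra), "bytes")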
Dec 13 15:03:44.163620 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Dec 13 15:03:44.163627 kernel: Memory: 262921876K/268174336K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 5252460K reserved, 0K cma-reserved) Dec 13 15:03:44.163634 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Dec 13 15:03:44.163640 kernel: trace event string verifier disabled Dec 13 15:03:44.163647 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 15:03:44.163655 kernel: rcu: RCU event tracing is enabled. Dec 13 15:03:44.163662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Dec 13 15:03:44.163669 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 15:03:44.163675 kernel: Tracing variant of Tasks RCU enabled. Dec 13 15:03:44.163684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 15:03:44.163691 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Dec 13 15:03:44.163698 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 15:03:44.163704 kernel: GICv3: GIC: Using split EOI/Deactivate mode Dec 13 15:03:44.163711 kernel: GICv3: 672 SPIs implemented Dec 13 15:03:44.163718 kernel: GICv3: 0 Extended SPIs implemented Dec 13 15:03:44.163725 kernel: Root IRQ handler: gic_handle_irq Dec 13 15:03:44.163731 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 15:03:44.163738 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Dec 13 15:03:44.163745 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Dec 13 15:03:44.163751 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Dec 13 15:03:44.163758 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Dec 13 15:03:44.163766 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Dec 13 15:03:44.163772 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Dec 13 15:03:44.163779 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Dec 13 15:03:44.163786 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Dec 13 15:03:44.163798 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Dec 13 15:03:44.163804 kernel: ITS [mem 0x100100040000-0x10010005ffff] Dec 13 15:03:44.163811 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163818 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163825 kernel: ITS [mem 0x100100060000-0x10010007ffff] Dec 13 15:03:44.163832 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163839 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163847 kernel: ITS [mem 0x100100080000-0x10010009ffff] Dec 13 15:03:44.163854 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163861 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163868 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Dec 13 15:03:44.163875 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163882 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163888 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Dec 13 15:03:44.163895 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) Dec 13 
15:03:44.163902 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163909 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Dec 13 15:03:44.163915 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163924 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163930 kernel: ITS [mem 0x100100100000-0x10010011ffff] Dec 13 15:03:44.163937 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163944 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163951 kernel: ITS [mem 0x100100120000-0x10010013ffff] Dec 13 15:03:44.163958 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 15:03:44.163965 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) Dec 13 15:03:44.163971 kernel: GICv3: using LPI property table @0x00000800003e0000 Dec 13 15:03:44.163978 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 Dec 13 15:03:44.163985 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 15:03:44.163992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164000 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Dec 13 15:03:44.164007 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Dec 13 15:03:44.164014 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 15:03:44.164020 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 15:03:44.164027 kernel: Console: colour dummy device 80x25 Dec 13 15:03:44.164035 kernel: printk: console [tty0] enabled Dec 13 15:03:44.164042 kernel: ACPI: Core revision 20230628 Dec 13 15:03:44.164049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 15:03:44.164056 kernel: pid_max: default: 81920 minimum: 640 Dec 13 15:03:44.164063 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 15:03:44.164071 kernel: landlock: Up and running. Dec 13 15:03:44.164078 kernel: SELinux: Initializing. Dec 13 15:03:44.164085 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.164092 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.164099 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 15:03:44.164106 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Dec 13 15:03:44.164113 kernel: rcu: Hierarchical SRCU implementation. Dec 13 15:03:44.164120 kernel: rcu: Max phase no-delay instances is 400. 
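The Memory summary a few entries above is internally consistent: 268174336 K total minus 262921876 K available is exactly the 5252460 K reported as reserved (with 0 K of it from CMA), and the software IO TLB window 0xfbc8f000-0xffc8f000 spans 0x4000000 bytes, the 64 MB the log states. A quick check:

# Cross-check the Memory and software IO TLB figures from the boot log.

total_k, available_k = 268174336, 262921876
reserved_k, cma_reserved_k = 5252460, 0
assert total_k - available_k == reserved_k + cma_reserved_k

swiotlb_start, swiotlb_end = 0x00000000FBC8F000, 0x00000000FFC8F000
swiotlb_bytes = swiotlb_end - swiotlb_start
assert swiotlb_bytes == 64 * 1024 * 1024  # the "(64MB)" in the log

print(f"reserved: {reserved_k} KiB, swiotlb: {swiotlb_bytes >> 20} MiB")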
Dec 13 15:03:44.164127 kernel: Platform MSI: ITS@0x100100040000 domain created Dec 13 15:03:44.164136 kernel: Platform MSI: ITS@0x100100060000 domain created Dec 13 15:03:44.164143 kernel: Platform MSI: ITS@0x100100080000 domain created Dec 13 15:03:44.164149 kernel: Platform MSI: ITS@0x1001000a0000 domain created Dec 13 15:03:44.164156 kernel: Platform MSI: ITS@0x1001000c0000 domain created Dec 13 15:03:44.164163 kernel: Platform MSI: ITS@0x1001000e0000 domain created Dec 13 15:03:44.164170 kernel: Platform MSI: ITS@0x100100100000 domain created Dec 13 15:03:44.164177 kernel: Platform MSI: ITS@0x100100120000 domain created Dec 13 15:03:44.164184 kernel: PCI/MSI: ITS@0x100100040000 domain created Dec 13 15:03:44.164191 kernel: PCI/MSI: ITS@0x100100060000 domain created Dec 13 15:03:44.164199 kernel: PCI/MSI: ITS@0x100100080000 domain created Dec 13 15:03:44.164206 kernel: PCI/MSI: ITS@0x1001000a0000 domain created Dec 13 15:03:44.164212 kernel: PCI/MSI: ITS@0x1001000c0000 domain created Dec 13 15:03:44.164219 kernel: PCI/MSI: ITS@0x1001000e0000 domain created Dec 13 15:03:44.164226 kernel: PCI/MSI: ITS@0x100100100000 domain created Dec 13 15:03:44.164233 kernel: PCI/MSI: ITS@0x100100120000 domain created Dec 13 15:03:44.164240 kernel: Remapping and enabling EFI services. Dec 13 15:03:44.164247 kernel: smp: Bringing up secondary CPUs ... Dec 13 15:03:44.164254 kernel: Detected PIPT I-cache on CPU1 Dec 13 15:03:44.164261 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Dec 13 15:03:44.164269 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 Dec 13 15:03:44.164276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164283 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Dec 13 15:03:44.164290 kernel: Detected PIPT I-cache on CPU2 Dec 13 15:03:44.164297 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Dec 13 15:03:44.164304 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 Dec 13 15:03:44.164311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164318 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Dec 13 15:03:44.164325 kernel: Detected PIPT I-cache on CPU3 Dec 13 15:03:44.164333 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Dec 13 15:03:44.164341 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 Dec 13 15:03:44.164348 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164354 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Dec 13 15:03:44.164361 kernel: Detected PIPT I-cache on CPU4 Dec 13 15:03:44.164368 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Dec 13 15:03:44.164375 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 Dec 13 15:03:44.164382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164389 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Dec 13 15:03:44.164396 kernel: Detected PIPT I-cache on CPU5 Dec 13 15:03:44.164404 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Dec 13 15:03:44.164411 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 Dec 13 15:03:44.164419 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164425 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] Dec 13 15:03:44.164432 kernel: Detected PIPT I-cache on CPU6 Dec 13 15:03:44.164439 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Dec 13 15:03:44.164446 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 Dec 13 15:03:44.164453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164460 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Dec 13 15:03:44.164468 kernel: Detected PIPT I-cache on CPU7 Dec 13 15:03:44.164476 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Dec 13 15:03:44.164482 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 Dec 13 15:03:44.164489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164496 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Dec 13 15:03:44.164503 kernel: Detected PIPT I-cache on CPU8 Dec 13 15:03:44.164510 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Dec 13 15:03:44.164517 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 Dec 13 15:03:44.164524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164531 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Dec 13 15:03:44.164539 kernel: Detected PIPT I-cache on CPU9 Dec 13 15:03:44.164546 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Dec 13 15:03:44.164553 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 Dec 13 15:03:44.164560 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164567 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Dec 13 15:03:44.164574 kernel: Detected PIPT I-cache on CPU10 Dec 13 15:03:44.164581 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Dec 13 15:03:44.164588 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 Dec 13 15:03:44.164595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164601 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Dec 13 15:03:44.164610 kernel: Detected PIPT I-cache on CPU11 Dec 13 15:03:44.164617 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Dec 13 15:03:44.164624 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 Dec 13 15:03:44.164631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164638 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Dec 13 15:03:44.164644 kernel: Detected PIPT I-cache on CPU12 Dec 13 15:03:44.164651 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Dec 13 15:03:44.164658 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 Dec 13 15:03:44.164665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164673 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Dec 13 15:03:44.164680 kernel: Detected PIPT I-cache on CPU13 Dec 13 15:03:44.164688 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Dec 13 15:03:44.164695 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 Dec 13 15:03:44.164702 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164709 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
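The secondary-CPU bring-up lines follow a regular pattern: CPU0's LPI pending table was placed at 0x00000800003f0000 earlier, while each secondary CPU's table is carved out of a region starting at 0x0000080000800000 in 64 KiB steps (CPU1 at ...800000, CPU13 at ...8c0000, CPU21 at ...940000). The formula below is inferred from these lines, not stated by the kernel; the sketch reproduces the addresses seen so far:

# Reproduce the per-CPU LPI pending-table addresses observed in the log.
# The formula is an observation from the log lines themselves: secondary
# CPUs get consecutive 64 KiB slots starting at 0x0000080000800000.

BASE = 0x0000080000800000
SLOT = 0x10000  # 64 KiB per pending table

def pending_table_addr(cpu):
    if cpu < 1:
        raise ValueError("CPU0's table lives in a different region (0x...3f0000)")
    return BASE + (cpu - 1) * SLOT

# Spot-check a few values against the log.
assert pending_table_addr(1) == 0x0000080000800000
assert pending_table_addr(13) == 0x00000800008C0000
assert pending_table_addr(21) == 0x0000080000940000

for cpu in (1, 13, 21):
    print(f"CPU{cpu}: LPI pending table @ 0x{pending_table_addr(cpu):016x}")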
Dec 13 15:03:44.164716 kernel: Detected PIPT I-cache on CPU14 Dec 13 15:03:44.164723 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Dec 13 15:03:44.164730 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 Dec 13 15:03:44.164738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164745 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Dec 13 15:03:44.164752 kernel: Detected PIPT I-cache on CPU15 Dec 13 15:03:44.164759 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Dec 13 15:03:44.164766 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 Dec 13 15:03:44.164773 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164779 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Dec 13 15:03:44.164786 kernel: Detected PIPT I-cache on CPU16 Dec 13 15:03:44.164796 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Dec 13 15:03:44.164813 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 Dec 13 15:03:44.164822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164829 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Dec 13 15:03:44.164836 kernel: Detected PIPT I-cache on CPU17 Dec 13 15:03:44.164844 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 Dec 13 15:03:44.164851 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 Dec 13 15:03:44.164858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164865 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Dec 13 15:03:44.164872 kernel: Detected PIPT I-cache on CPU18 Dec 13 15:03:44.164880 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Dec 13 15:03:44.164889 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 Dec 13 15:03:44.164896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164903 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Dec 13 15:03:44.164910 kernel: Detected PIPT I-cache on CPU19 Dec 13 15:03:44.164918 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Dec 13 15:03:44.164925 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 Dec 13 15:03:44.164935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164942 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Dec 13 15:03:44.164949 kernel: Detected PIPT I-cache on CPU20 Dec 13 15:03:44.164956 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Dec 13 15:03:44.164964 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 Dec 13 15:03:44.164971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.164978 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Dec 13 15:03:44.164985 kernel: Detected PIPT I-cache on CPU21 Dec 13 15:03:44.164993 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Dec 13 15:03:44.165001 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 Dec 13 15:03:44.165008 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165016 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Dec 
13 15:03:44.165023 kernel: Detected PIPT I-cache on CPU22 Dec 13 15:03:44.165030 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Dec 13 15:03:44.165037 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 Dec 13 15:03:44.165045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165052 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Dec 13 15:03:44.165059 kernel: Detected PIPT I-cache on CPU23 Dec 13 15:03:44.165066 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Dec 13 15:03:44.165075 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 Dec 13 15:03:44.165082 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165090 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Dec 13 15:03:44.165097 kernel: Detected PIPT I-cache on CPU24 Dec 13 15:03:44.165104 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Dec 13 15:03:44.165111 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 Dec 13 15:03:44.165119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165126 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Dec 13 15:03:44.165133 kernel: Detected PIPT I-cache on CPU25 Dec 13 15:03:44.165142 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 Dec 13 15:03:44.165149 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 Dec 13 15:03:44.165157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165164 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Dec 13 15:03:44.165171 kernel: Detected PIPT I-cache on CPU26 Dec 13 15:03:44.165178 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Dec 13 15:03:44.165186 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 Dec 13 15:03:44.165193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165200 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Dec 13 15:03:44.165207 kernel: Detected PIPT I-cache on CPU27 Dec 13 15:03:44.165216 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Dec 13 15:03:44.165223 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 Dec 13 15:03:44.165231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165239 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Dec 13 15:03:44.165248 kernel: Detected PIPT I-cache on CPU28 Dec 13 15:03:44.165255 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Dec 13 15:03:44.165263 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 Dec 13 15:03:44.165270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165277 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Dec 13 15:03:44.165286 kernel: Detected PIPT I-cache on CPU29 Dec 13 15:03:44.165293 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Dec 13 15:03:44.165301 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 Dec 13 15:03:44.165308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165315 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
Dec 13 15:03:44.165323 kernel: Detected PIPT I-cache on CPU30 Dec 13 15:03:44.165330 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Dec 13 15:03:44.165337 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 Dec 13 15:03:44.165345 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165352 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Dec 13 15:03:44.165361 kernel: Detected PIPT I-cache on CPU31 Dec 13 15:03:44.165368 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Dec 13 15:03:44.165375 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 Dec 13 15:03:44.165382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165390 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Dec 13 15:03:44.165397 kernel: Detected PIPT I-cache on CPU32 Dec 13 15:03:44.165404 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Dec 13 15:03:44.165412 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 Dec 13 15:03:44.165419 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165427 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Dec 13 15:03:44.165435 kernel: Detected PIPT I-cache on CPU33 Dec 13 15:03:44.165442 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Dec 13 15:03:44.165449 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 Dec 13 15:03:44.165457 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165464 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Dec 13 15:03:44.165471 kernel: Detected PIPT I-cache on CPU34 Dec 13 15:03:44.165478 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Dec 13 15:03:44.165486 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 Dec 13 15:03:44.165494 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165501 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Dec 13 15:03:44.165509 kernel: Detected PIPT I-cache on CPU35 Dec 13 15:03:44.165516 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Dec 13 15:03:44.165523 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 Dec 13 15:03:44.165531 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165538 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Dec 13 15:03:44.165545 kernel: Detected PIPT I-cache on CPU36 Dec 13 15:03:44.165552 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Dec 13 15:03:44.165560 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 Dec 13 15:03:44.165568 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165575 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Dec 13 15:03:44.165583 kernel: Detected PIPT I-cache on CPU37 Dec 13 15:03:44.165590 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Dec 13 15:03:44.165597 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 Dec 13 15:03:44.165605 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165612 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
Dec 13 15:03:44.165619 kernel: Detected PIPT I-cache on CPU38 Dec 13 15:03:44.165626 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Dec 13 15:03:44.165635 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 Dec 13 15:03:44.165642 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165650 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Dec 13 15:03:44.165657 kernel: Detected PIPT I-cache on CPU39 Dec 13 15:03:44.165664 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Dec 13 15:03:44.165671 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 Dec 13 15:03:44.165679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165686 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Dec 13 15:03:44.165695 kernel: Detected PIPT I-cache on CPU40 Dec 13 15:03:44.165702 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Dec 13 15:03:44.165709 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 Dec 13 15:03:44.165717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165724 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Dec 13 15:03:44.165731 kernel: Detected PIPT I-cache on CPU41 Dec 13 15:03:44.165738 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Dec 13 15:03:44.165747 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 Dec 13 15:03:44.165754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165761 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Dec 13 15:03:44.165770 kernel: Detected PIPT I-cache on CPU42 Dec 13 15:03:44.165778 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Dec 13 15:03:44.165785 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 Dec 13 15:03:44.165795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165802 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Dec 13 15:03:44.165809 kernel: Detected PIPT I-cache on CPU43 Dec 13 15:03:44.165817 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Dec 13 15:03:44.165824 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 Dec 13 15:03:44.165831 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165840 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Dec 13 15:03:44.165847 kernel: Detected PIPT I-cache on CPU44 Dec 13 15:03:44.165855 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Dec 13 15:03:44.165862 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 Dec 13 15:03:44.165869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165876 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Dec 13 15:03:44.165884 kernel: Detected PIPT I-cache on CPU45 Dec 13 15:03:44.165891 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Dec 13 15:03:44.165898 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 Dec 13 15:03:44.165907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165914 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] Dec 13 15:03:44.165921 kernel: Detected PIPT I-cache on CPU46 Dec 13 15:03:44.165929 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Dec 13 15:03:44.165936 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 Dec 13 15:03:44.165944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165951 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Dec 13 15:03:44.165958 kernel: Detected PIPT I-cache on CPU47 Dec 13 15:03:44.165965 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Dec 13 15:03:44.165973 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 Dec 13 15:03:44.165981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.165989 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Dec 13 15:03:44.165996 kernel: Detected PIPT I-cache on CPU48 Dec 13 15:03:44.166003 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Dec 13 15:03:44.166010 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 Dec 13 15:03:44.166018 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166025 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Dec 13 15:03:44.166032 kernel: Detected PIPT I-cache on CPU49 Dec 13 15:03:44.166039 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Dec 13 15:03:44.166048 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 Dec 13 15:03:44.166055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166063 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Dec 13 15:03:44.166070 kernel: Detected PIPT I-cache on CPU50 Dec 13 15:03:44.166077 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Dec 13 15:03:44.166084 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 Dec 13 15:03:44.166092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166099 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Dec 13 15:03:44.166106 kernel: Detected PIPT I-cache on CPU51 Dec 13 15:03:44.166113 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Dec 13 15:03:44.166122 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 Dec 13 15:03:44.166129 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166136 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Dec 13 15:03:44.166144 kernel: Detected PIPT I-cache on CPU52 Dec 13 15:03:44.166151 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Dec 13 15:03:44.166158 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 Dec 13 15:03:44.166166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166173 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Dec 13 15:03:44.166180 kernel: Detected PIPT I-cache on CPU53 Dec 13 15:03:44.166190 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Dec 13 15:03:44.166197 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 Dec 13 15:03:44.166205 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166212 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] Dec 13 15:03:44.166219 kernel: Detected PIPT I-cache on CPU54 Dec 13 15:03:44.166226 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Dec 13 15:03:44.166234 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 Dec 13 15:03:44.166241 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166248 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Dec 13 15:03:44.166255 kernel: Detected PIPT I-cache on CPU55 Dec 13 15:03:44.166264 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Dec 13 15:03:44.166271 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 Dec 13 15:03:44.166279 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166286 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Dec 13 15:03:44.166293 kernel: Detected PIPT I-cache on CPU56 Dec 13 15:03:44.166300 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Dec 13 15:03:44.166308 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 Dec 13 15:03:44.166315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166322 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Dec 13 15:03:44.166331 kernel: Detected PIPT I-cache on CPU57 Dec 13 15:03:44.166338 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Dec 13 15:03:44.166346 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 Dec 13 15:03:44.166353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166360 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Dec 13 15:03:44.166367 kernel: Detected PIPT I-cache on CPU58 Dec 13 15:03:44.166375 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Dec 13 15:03:44.166382 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 Dec 13 15:03:44.166389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166397 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Dec 13 15:03:44.166405 kernel: Detected PIPT I-cache on CPU59 Dec 13 15:03:44.166413 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Dec 13 15:03:44.166420 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 Dec 13 15:03:44.166427 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166434 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Dec 13 15:03:44.166442 kernel: Detected PIPT I-cache on CPU60 Dec 13 15:03:44.166449 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Dec 13 15:03:44.166456 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 Dec 13 15:03:44.166464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166472 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Dec 13 15:03:44.166479 kernel: Detected PIPT I-cache on CPU61 Dec 13 15:03:44.166487 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Dec 13 15:03:44.166494 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 Dec 13 15:03:44.166502 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166509 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] Dec 13 15:03:44.166516 kernel: Detected PIPT I-cache on CPU62 Dec 13 15:03:44.166523 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Dec 13 15:03:44.166530 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 Dec 13 15:03:44.166539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166546 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Dec 13 15:03:44.166554 kernel: Detected PIPT I-cache on CPU63 Dec 13 15:03:44.166561 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Dec 13 15:03:44.166568 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 Dec 13 15:03:44.166575 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166583 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Dec 13 15:03:44.166590 kernel: Detected PIPT I-cache on CPU64 Dec 13 15:03:44.166597 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Dec 13 15:03:44.166604 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 Dec 13 15:03:44.166613 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166620 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Dec 13 15:03:44.166628 kernel: Detected PIPT I-cache on CPU65 Dec 13 15:03:44.166635 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Dec 13 15:03:44.166643 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 Dec 13 15:03:44.166650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166657 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Dec 13 15:03:44.166664 kernel: Detected PIPT I-cache on CPU66 Dec 13 15:03:44.166671 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Dec 13 15:03:44.166680 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 Dec 13 15:03:44.166688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166695 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Dec 13 15:03:44.166702 kernel: Detected PIPT I-cache on CPU67 Dec 13 15:03:44.166710 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Dec 13 15:03:44.166717 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 Dec 13 15:03:44.166724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166732 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Dec 13 15:03:44.166739 kernel: Detected PIPT I-cache on CPU68 Dec 13 15:03:44.166746 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Dec 13 15:03:44.166755 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 Dec 13 15:03:44.166762 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166769 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Dec 13 15:03:44.166776 kernel: Detected PIPT I-cache on CPU69 Dec 13 15:03:44.166784 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Dec 13 15:03:44.166793 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 Dec 13 15:03:44.166800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166808 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] Dec 13 15:03:44.166815 kernel: Detected PIPT I-cache on CPU70 Dec 13 15:03:44.166824 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Dec 13 15:03:44.166831 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 Dec 13 15:03:44.166838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166846 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Dec 13 15:03:44.166853 kernel: Detected PIPT I-cache on CPU71 Dec 13 15:03:44.166860 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Dec 13 15:03:44.166867 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 Dec 13 15:03:44.166875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166882 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Dec 13 15:03:44.166889 kernel: Detected PIPT I-cache on CPU72 Dec 13 15:03:44.166898 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Dec 13 15:03:44.166905 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 Dec 13 15:03:44.166913 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166920 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Dec 13 15:03:44.166927 kernel: Detected PIPT I-cache on CPU73 Dec 13 15:03:44.166934 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Dec 13 15:03:44.166942 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 Dec 13 15:03:44.166949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166956 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Dec 13 15:03:44.166965 kernel: Detected PIPT I-cache on CPU74 Dec 13 15:03:44.166972 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Dec 13 15:03:44.166979 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 Dec 13 15:03:44.166987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.166994 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Dec 13 15:03:44.167001 kernel: Detected PIPT I-cache on CPU75 Dec 13 15:03:44.167008 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Dec 13 15:03:44.167016 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 Dec 13 15:03:44.167023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167030 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Dec 13 15:03:44.167039 kernel: Detected PIPT I-cache on CPU76 Dec 13 15:03:44.167046 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Dec 13 15:03:44.167053 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 Dec 13 15:03:44.167061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167068 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Dec 13 15:03:44.167075 kernel: Detected PIPT I-cache on CPU77 Dec 13 15:03:44.167083 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Dec 13 15:03:44.167090 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 Dec 13 15:03:44.167097 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167106 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] Dec 13 15:03:44.167113 kernel: Detected PIPT I-cache on CPU78 Dec 13 15:03:44.167120 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Dec 13 15:03:44.167128 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 Dec 13 15:03:44.167135 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167142 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Dec 13 15:03:44.167149 kernel: Detected PIPT I-cache on CPU79 Dec 13 15:03:44.167156 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Dec 13 15:03:44.167164 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 Dec 13 15:03:44.167172 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 15:03:44.167180 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Dec 13 15:03:44.167187 kernel: smp: Brought up 1 node, 80 CPUs Dec 13 15:03:44.167194 kernel: SMP: Total of 80 processors activated. Dec 13 15:03:44.167201 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 15:03:44.167209 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 15:03:44.167216 kernel: CPU features: detected: Common not Private translations Dec 13 15:03:44.167223 kernel: CPU features: detected: CRC32 instructions Dec 13 15:03:44.167231 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 15:03:44.167238 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 15:03:44.167246 kernel: CPU features: detected: LSE atomic instructions Dec 13 15:03:44.167254 kernel: CPU features: detected: Privileged Access Never Dec 13 15:03:44.167261 kernel: CPU features: detected: RAS Extension Support Dec 13 15:03:44.167268 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 15:03:44.167276 kernel: CPU: All CPU(s) started at EL2 Dec 13 15:03:44.167283 kernel: alternatives: applying system-wide alternatives Dec 13 15:03:44.167290 kernel: devtmpfs: initialized Dec 13 15:03:44.167297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 15:03:44.167305 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.167314 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 15:03:44.167321 kernel: SMBIOS 3.4.0 present. Dec 13 15:03:44.167328 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Dec 13 15:03:44.167336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 15:03:44.167343 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Dec 13 15:03:44.167350 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 15:03:44.167358 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 15:03:44.167365 kernel: audit: initializing netlink subsys (disabled) Dec 13 15:03:44.167372 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 Dec 13 15:03:44.167381 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 15:03:44.167389 kernel: cpuidle: using governor menu Dec 13 15:03:44.167396 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 15:03:44.167403 kernel: ASID allocator initialised with 32768 entries Dec 13 15:03:44.167410 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 15:03:44.167418 kernel: Serial: AMBA PL011 UART driver Dec 13 15:03:44.167425 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 15:03:44.167432 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 15:03:44.167439 kernel: Modules: 508880 pages in range for PLT usage Dec 13 15:03:44.167448 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 15:03:44.167455 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 15:03:44.167463 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 15:03:44.167470 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 15:03:44.167477 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 15:03:44.167485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 15:03:44.167492 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 15:03:44.167500 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 15:03:44.167507 kernel: ACPI: Added _OSI(Module Device) Dec 13 15:03:44.167515 kernel: ACPI: Added _OSI(Processor Device) Dec 13 15:03:44.167523 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 15:03:44.167530 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 15:03:44.167537 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Dec 13 15:03:44.167545 kernel: ACPI: Interpreter enabled Dec 13 15:03:44.167552 kernel: ACPI: Using GIC for interrupt routing Dec 13 15:03:44.167560 kernel: ACPI: MCFG table detected, 8 entries Dec 13 15:03:44.167567 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167574 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167583 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167590 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167598 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167605 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167613 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167620 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Dec 13 15:03:44.167627 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Dec 13 15:03:44.167635 kernel: printk: console [ttyAMA0] enabled Dec 13 15:03:44.167642 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Dec 13 15:03:44.167651 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Dec 13 15:03:44.167781 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.167859 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.167924 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.167985 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.168047 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.168108 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] Dec 13 15:03:44.168121 kernel: PCI host bridge to bus 000d:00 Dec 13 15:03:44.168190 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Dec 13 15:03:44.168249 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Dec 13 15:03:44.168304 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] Dec 13 15:03:44.168383 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.168457 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.168524 kernel: pci 000d:00:01.0: enabling Extended Tags Dec 13 15:03:44.168587 kernel: pci 000d:00:01.0: supports D1 D2 Dec 13 15:03:44.168650 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.168722 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.168786 kernel: pci 000d:00:02.0: supports D1 D2 Dec 13 15:03:44.168912 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.168985 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.169051 kernel: pci 000d:00:03.0: supports D1 D2 Dec 13 15:03:44.169112 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.169182 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.169245 kernel: pci 000d:00:04.0: supports D1 D2 Dec 13 15:03:44.169305 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.169314 kernel: acpiphp: Slot [1] registered Dec 13 15:03:44.169323 kernel: acpiphp: Slot [2] registered Dec 13 15:03:44.169331 kernel: acpiphp: Slot [3] registered Dec 13 15:03:44.169338 kernel: acpiphp: Slot [4] registered Dec 13 15:03:44.169392 kernel: pci_bus 000d:00: on NUMA node 0 Dec 13 15:03:44.169453 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.169514 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.169575 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.169636 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.169698 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.169764 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.169831 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.169897 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.169962 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.170027 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.170090 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.170154 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.170218 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] Dec 13 15:03:44.170280 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.170343 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] Dec 13 15:03:44.170405 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.170468 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] Dec 13 15:03:44.170529 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.170594 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] Dec 13 15:03:44.170656 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.170718 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.170779 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.170846 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.170908 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.170971 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171033 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171098 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171161 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171222 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171284 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171345 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171407 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171469 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171532 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171596 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.171660 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.171722 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.171785 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Dec 13 15:03:44.171850 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.171913 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.171976 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Dec 13 15:03:44.172041 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.172105 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.172167 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Dec 13 15:03:44.172230 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.172291 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.172355 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Dec 13 15:03:44.172419 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.172478 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Dec 13 15:03:44.172533 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Dec 13 15:03:44.172603 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Dec 13 15:03:44.172662 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Dec 13 15:03:44.172729 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] Dec 13 15:03:44.172790 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Dec 13 15:03:44.172868 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Dec 13 15:03:44.172926 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Dec 13 15:03:44.172993 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Dec 13 15:03:44.173050 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Dec 13 15:03:44.173060 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Dec 13 15:03:44.173129 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.173193 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.173253 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.173313 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.173372 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.173434 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Dec 13 15:03:44.173444 kernel: PCI host bridge to bus 0000:00 Dec 13 15:03:44.173506 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Dec 13 15:03:44.173564 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 15:03:44.173618 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 15:03:44.173689 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.173759 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.173827 kernel: pci 0000:00:01.0: enabling Extended Tags Dec 13 15:03:44.173890 kernel: pci 0000:00:01.0: supports D1 D2 Dec 13 15:03:44.173955 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174026 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.174089 kernel: pci 0000:00:02.0: supports D1 D2 Dec 13 15:03:44.174153 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174223 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.174287 kernel: pci 0000:00:03.0: supports D1 D2 Dec 13 15:03:44.174349 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174421 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.174483 kernel: pci 0000:00:04.0: supports D1 D2 Dec 13 15:03:44.174546 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.174556 kernel: acpiphp: Slot [1-1] registered Dec 13 15:03:44.174563 kernel: acpiphp: Slot [2-1] registered Dec 13 15:03:44.174571 kernel: acpiphp: Slot [3-1] registered Dec 13 15:03:44.174578 kernel: acpiphp: Slot [4-1] registered Dec 13 15:03:44.174631 kernel: pci_bus 0000:00: on NUMA node 0 Dec 13 15:03:44.174696 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.174758 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.174825 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.174887 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.174949 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.175010 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.175073 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.175137 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.175199 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.175261 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.175323 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.175385 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.175447 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] Dec 13 15:03:44.175510 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 15:03:44.175574 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] Dec 13 15:03:44.175637 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.175698 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] Dec 13 15:03:44.175761 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.175828 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] Dec 13 15:03:44.175890 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.175951 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176014 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176080 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176141 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176204 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176266 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176329 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176390 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176452 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176513 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176577 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176638 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176700 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176762 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176827 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.176889 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.176951 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.177013 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Dec 13 15:03:44.177078 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
Dec 13 15:03:44.177142 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.177204 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Dec 13 15:03:44.177267 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.177329 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.177394 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Dec 13 15:03:44.177455 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.177517 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.177579 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Dec 13 15:03:44.177642 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.177702 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Dec 13 15:03:44.177758 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Dec 13 15:03:44.177828 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Dec 13 15:03:44.177887 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Dec 13 15:03:44.177952 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Dec 13 15:03:44.178011 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Dec 13 15:03:44.178087 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Dec 13 15:03:44.178147 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Dec 13 15:03:44.178213 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Dec 13 15:03:44.178270 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Dec 13 15:03:44.178280 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Dec 13 15:03:44.178348 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.178408 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.178471 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.178531 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.178591 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.178649 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Dec 13 15:03:44.178659 kernel: PCI host bridge to bus 0005:00 Dec 13 15:03:44.178722 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Dec 13 15:03:44.178780 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 15:03:44.178839 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Dec 13 15:03:44.178908 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.178978 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.179042 kernel: pci 0005:00:01.0: supports D1 D2 Dec 13 15:03:44.179104 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179175 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.179240 kernel: pci 0005:00:03.0: supports D1 D2 Dec 13 15:03:44.179303 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179371 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.179434 
kernel: pci 0005:00:05.0: supports D1 D2 Dec 13 15:03:44.179496 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179566 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 15:03:44.179628 kernel: pci 0005:00:07.0: supports D1 D2 Dec 13 15:03:44.179693 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.179702 kernel: acpiphp: Slot [1-2] registered Dec 13 15:03:44.179710 kernel: acpiphp: Slot [2-2] registered Dec 13 15:03:44.179778 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 Dec 13 15:03:44.179849 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] Dec 13 15:03:44.179916 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] Dec 13 15:03:44.179990 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 Dec 13 15:03:44.180059 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] Dec 13 15:03:44.180123 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] Dec 13 15:03:44.180181 kernel: pci_bus 0005:00: on NUMA node 0 Dec 13 15:03:44.180244 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.180310 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.180373 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.180437 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.180500 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.180565 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.180628 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.180692 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.180755 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 15:03:44.180822 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.180888 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.180952 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Dec 13 15:03:44.181015 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] Dec 13 15:03:44.181078 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.181142 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] Dec 13 15:03:44.181219 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.181284 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] Dec 13 15:03:44.181346 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.181411 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] Dec 13 15:03:44.181483 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.181546 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
Dec 13 15:03:44.181609 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181671 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181733 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181805 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181868 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.181932 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.181995 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182058 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182120 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182181 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182244 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182305 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182367 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182429 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.182494 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.182555 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.182618 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Dec 13 15:03:44.182681 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.182743 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.182810 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Dec 13 15:03:44.182872 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.182941 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] Dec 13 15:03:44.183006 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] Dec 13 15:03:44.183070 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Dec 13 15:03:44.183133 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Dec 13 15:03:44.183196 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.183262 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] Dec 13 15:03:44.183327 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] Dec 13 15:03:44.183393 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Dec 13 15:03:44.183456 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Dec 13 15:03:44.183519 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.183577 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Dec 13 15:03:44.183632 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Dec 13 15:03:44.183700 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Dec 13 15:03:44.183761 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Dec 13 15:03:44.183850 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Dec 13 15:03:44.183911 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Dec 13 15:03:44.183977 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Dec 13 15:03:44.184036 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] Dec 13 15:03:44.184101 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Dec 13 15:03:44.184162 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Dec 13 15:03:44.184172 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Dec 13 15:03:44.184238 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.184300 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.184361 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.184420 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.184483 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.184543 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Dec 13 15:03:44.184552 kernel: PCI host bridge to bus 0003:00 Dec 13 15:03:44.184616 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Dec 13 15:03:44.184672 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Dec 13 15:03:44.184731 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Dec 13 15:03:44.184805 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.184880 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.184944 kernel: pci 0003:00:01.0: supports D1 D2 Dec 13 15:03:44.185007 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185078 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.185141 kernel: pci 0003:00:03.0: supports D1 D2 Dec 13 15:03:44.185203 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185274 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.185339 kernel: pci 0003:00:05.0: supports D1 D2 Dec 13 15:03:44.185402 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.185412 kernel: acpiphp: Slot [1-3] registered Dec 13 15:03:44.185419 kernel: acpiphp: Slot [2-3] registered Dec 13 15:03:44.185490 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 Dec 13 15:03:44.185555 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] Dec 13 15:03:44.185619 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] Dec 13 15:03:44.185686 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] Dec 13 15:03:44.185752 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.185857 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] Dec 13 15:03:44.185926 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 15:03:44.185994 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] Dec 13 15:03:44.186059 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 15:03:44.186123 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Dec 13 15:03:44.186193 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 Dec 13 15:03:44.186260 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] Dec 13 15:03:44.186323 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] Dec 13 15:03:44.186386 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] Dec 13 15:03:44.186448 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.186510 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] Dec 13 15:03:44.186574 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 15:03:44.186637 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] Dec 13 15:03:44.186701 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) Dec 13 15:03:44.186757 kernel: pci_bus 0003:00: on NUMA node 0 Dec 13 15:03:44.186822 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.186884 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.186945 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.187008 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.187070 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.187135 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.187200 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Dec 13 15:03:44.187263 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Dec 13 15:03:44.187336 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Dec 13 15:03:44.187401 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.187464 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] Dec 13 15:03:44.187527 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.187589 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] Dec 13 15:03:44.187653 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.187716 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.187778 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.187845 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.187907 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.187970 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188032 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188098 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188160 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188223 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188285 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.188348 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.188410 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 
15:03:44.188472 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.188533 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 13 15:03:44.188598 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.188660 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.188722 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Dec 13 15:03:44.188785 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.188854 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] Dec 13 15:03:44.188920 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] Dec 13 15:03:44.188987 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] Dec 13 15:03:44.189053 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] Dec 13 15:03:44.189117 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] Dec 13 15:03:44.189181 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] Dec 13 15:03:44.189246 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] Dec 13 15:03:44.189310 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] Dec 13 15:03:44.189375 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189441 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189509 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189573 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189638 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189701 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189765 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] Dec 13 15:03:44.189835 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] Dec 13 15:03:44.189898 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Dec 13 15:03:44.189963 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Dec 13 15:03:44.190025 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.190084 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 15:03:44.190140 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Dec 13 15:03:44.190197 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Dec 13 15:03:44.190273 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Dec 13 15:03:44.190335 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Dec 13 15:03:44.190401 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Dec 13 15:03:44.190460 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Dec 13 15:03:44.190525 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Dec 13 15:03:44.190585 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Dec 13 15:03:44.190595 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Dec 13 15:03:44.190664 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.190728 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 
15:03:44.190789 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.190853 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.190914 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.190974 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Dec 13 15:03:44.190987 kernel: PCI host bridge to bus 000c:00 Dec 13 15:03:44.191052 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Dec 13 15:03:44.191110 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Dec 13 15:03:44.191166 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Dec 13 15:03:44.191236 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.191308 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.191371 kernel: pci 000c:00:01.0: enabling Extended Tags Dec 13 15:03:44.191434 kernel: pci 000c:00:01.0: supports D1 D2 Dec 13 15:03:44.191496 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191570 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.191633 kernel: pci 000c:00:02.0: supports D1 D2 Dec 13 15:03:44.191696 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191768 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.191837 kernel: pci 000c:00:03.0: supports D1 D2 Dec 13 15:03:44.191901 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.191971 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.192038 kernel: pci 000c:00:04.0: supports D1 D2 Dec 13 15:03:44.192100 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.192110 kernel: acpiphp: Slot [1-4] registered Dec 13 15:03:44.192117 kernel: acpiphp: Slot [2-4] registered Dec 13 15:03:44.192125 kernel: acpiphp: Slot [3-2] registered Dec 13 15:03:44.192133 kernel: acpiphp: Slot [4-2] registered Dec 13 15:03:44.192188 kernel: pci_bus 000c:00: on NUMA node 0 Dec 13 15:03:44.192250 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.192315 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.192377 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.192442 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.192505 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.192568 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.192631 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.192693 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.192757 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.193040 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.193110 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.193172 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.193234 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] Dec 13 15:03:44.193295 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.193356 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] Dec 13 15:03:44.193421 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.193483 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] Dec 13 15:03:44.193545 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.193606 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] Dec 13 15:03:44.193667 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 15:03:44.193727 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.193788 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.193857 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.193918 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.193979 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194039 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194100 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194162 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194222 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194283 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194344 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194407 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194470 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194531 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194592 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.194653 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.194713 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.194775 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Dec 13 15:03:44.194839 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.194903 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.194964 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Dec 13 15:03:44.195026 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.195087 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.195148 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Dec 13 15:03:44.195210 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.195273 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.195334 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Dec 13 15:03:44.195395 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 
15:03:44.195452 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Dec 13 15:03:44.195507 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Dec 13 15:03:44.195574 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Dec 13 15:03:44.195634 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Dec 13 15:03:44.195706 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Dec 13 15:03:44.195764 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Dec 13 15:03:44.195832 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Dec 13 15:03:44.195890 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Dec 13 15:03:44.195955 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Dec 13 15:03:44.196011 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Dec 13 15:03:44.196024 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Dec 13 15:03:44.196091 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.196151 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.196210 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.196269 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.196328 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.196387 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Dec 13 15:03:44.196399 kernel: PCI host bridge to bus 0002:00 Dec 13 15:03:44.196462 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Dec 13 15:03:44.196517 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Dec 13 15:03:44.196571 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Dec 13 15:03:44.196640 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.196710 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.196775 kernel: pci 0002:00:01.0: supports D1 D2 Dec 13 15:03:44.196840 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.196908 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.196970 kernel: pci 0002:00:03.0: supports D1 D2 Dec 13 15:03:44.197031 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197100 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.197162 kernel: pci 0002:00:05.0: supports D1 D2 Dec 13 15:03:44.197226 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197294 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 Dec 13 15:03:44.197357 kernel: pci 0002:00:07.0: supports D1 D2 Dec 13 15:03:44.197418 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.197428 kernel: acpiphp: Slot [1-5] registered Dec 13 15:03:44.197436 kernel: acpiphp: Slot [2-5] registered Dec 13 15:03:44.197444 kernel: acpiphp: Slot [3-3] registered Dec 13 15:03:44.197452 kernel: acpiphp: Slot [4-3] registered Dec 13 15:03:44.197508 kernel: pci_bus 0002:00: on NUMA node 0 Dec 13 15:03:44.197569 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.197630 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.197691 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Dec 13 15:03:44.197756 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.197822 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.197884 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.197948 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.198010 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.198070 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.198133 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.198195 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.198258 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.198320 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] Dec 13 15:03:44.198381 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.198443 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.198504 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.198565 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.198626 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.198690 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.198752 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.198818 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.198879 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.198941 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199001 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199063 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199127 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199190 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199252 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199314 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199376 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199436 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199498 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199559 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.199621 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199681 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] Dec 13 15:03:44.199747 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.199869 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.199941 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Dec 13 15:03:44.200004 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.200066 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Dec 13 15:03:44.200126 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.200187 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.200252 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Dec 13 15:03:44.200314 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.200375 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.200437 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Dec 13 15:03:44.200499 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.200561 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.200620 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Dec 13 15:03:44.200675 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Dec 13 15:03:44.200742 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Dec 13 15:03:44.200803 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Dec 13 15:03:44.200884 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Dec 13 15:03:44.200943 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Dec 13 15:03:44.201020 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Dec 13 15:03:44.201078 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Dec 13 15:03:44.201143 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Dec 13 15:03:44.201201 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Dec 13 15:03:44.201211 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Dec 13 15:03:44.201277 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.201338 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.201400 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.201458 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.201518 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.201576 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Dec 13 15:03:44.201586 kernel: PCI host bridge to bus 0001:00 Dec 13 15:03:44.201650 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Dec 13 15:03:44.201707 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Dec 13 15:03:44.201761 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Dec 13 15:03:44.201835 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 Dec 13 15:03:44.201906 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 Dec 13 15:03:44.201968 kernel: pci 0001:00:01.0: enabling Extended Tags Dec 13 15:03:44.202030 kernel: pci 
0001:00:01.0: supports D1 D2 Dec 13 15:03:44.202093 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202164 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 Dec 13 15:03:44.202226 kernel: pci 0001:00:02.0: supports D1 D2 Dec 13 15:03:44.202288 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202356 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 Dec 13 15:03:44.202418 kernel: pci 0001:00:03.0: supports D1 D2 Dec 13 15:03:44.202479 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202550 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 Dec 13 15:03:44.202613 kernel: pci 0001:00:04.0: supports D1 D2 Dec 13 15:03:44.202674 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.202684 kernel: acpiphp: Slot [1-6] registered Dec 13 15:03:44.202753 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 Dec 13 15:03:44.202825 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.202892 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] Dec 13 15:03:44.202958 kernel: pci 0001:01:00.0: PME# supported from D3cold Dec 13 15:03:44.203022 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:44.203096 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 Dec 13 15:03:44.203159 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] Dec 13 15:03:44.203224 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] Dec 13 15:03:44.203287 kernel: pci 0001:01:00.1: PME# supported from D3cold Dec 13 15:03:44.203297 kernel: acpiphp: Slot [2-6] registered Dec 13 15:03:44.203305 kernel: acpiphp: Slot [3-4] registered Dec 13 15:03:44.203314 kernel: acpiphp: Slot [4-4] registered Dec 13 15:03:44.203370 kernel: pci_bus 0001:00: on NUMA node 0 Dec 13 15:03:44.203432 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 15:03:44.203496 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 15:03:44.203557 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.203620 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Dec 13 15:03:44.203682 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.203743 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.203930 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.204000 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.204063 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.204124 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.204186 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.204247 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] Dec 13 15:03:44.204313 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] Dec 13 15:03:44.204374 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.204435 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] Dec 13 15:03:44.204496 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.204558 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] Dec 13 15:03:44.204618 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.204680 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204740 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.204809 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204870 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.204932 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.204993 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205055 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205115 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205176 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205237 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205301 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205362 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205422 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205483 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205545 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.205606 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.205670 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] Dec 13 15:03:44.205735 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.205801 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] Dec 13 15:03:44.205868 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] Dec 13 15:03:44.205929 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Dec 13 15:03:44.205993 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Dec 13 15:03:44.206054 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.206115 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Dec 13 15:03:44.206177 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Dec 13 15:03:44.206240 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.206303 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.206365 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Dec 13 15:03:44.206426 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.206488 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Dec 13 15:03:44.206549 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Dec 13 15:03:44.206612 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.206669 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] Dec 13 15:03:44.206723 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Dec 13 15:03:44.206803 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Dec 13 15:03:44.206862 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Dec 13 15:03:44.206927 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Dec 13 15:03:44.206987 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Dec 13 15:03:44.207054 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Dec 13 15:03:44.207111 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Dec 13 15:03:44.207176 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Dec 13 15:03:44.207233 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Dec 13 15:03:44.207243 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Dec 13 15:03:44.207309 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 15:03:44.207372 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Dec 13 15:03:44.207431 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Dec 13 15:03:44.207490 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Dec 13 15:03:44.207548 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Dec 13 15:03:44.207607 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Dec 13 15:03:44.207617 kernel: PCI host bridge to bus 0004:00 Dec 13 15:03:44.207678 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Dec 13 15:03:44.207735 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Dec 13 15:03:44.207789 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Dec 13 15:03:44.207862 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 Dec 13 15:03:44.207931 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 Dec 13 15:03:44.207994 kernel: pci 0004:00:01.0: supports D1 D2 Dec 13 15:03:44.208056 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208124 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 Dec 13 15:03:44.208190 kernel: pci 0004:00:03.0: supports D1 D2 Dec 13 15:03:44.208250 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208319 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 Dec 13 15:03:44.208380 kernel: pci 0004:00:05.0: supports D1 D2 Dec 13 15:03:44.208442 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Dec 13 15:03:44.208513 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 15:03:44.208580 kernel: pci 0004:01:00.0: enabling Extended Tags Dec 13 15:03:44.208643 kernel: pci 0004:01:00.0: supports D1 D2 Dec 13 15:03:44.208706 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 15:03:44.208783 kernel: pci_bus 0004:02: extended config space not accessible Dec 13 15:03:44.208861 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 15:03:44.208928 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] Dec 13 15:03:44.208993 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] Dec 13 15:03:44.209061 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] Dec 13 
15:03:44.209127 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb Dec 13 15:03:44.209192 kernel: pci 0004:02:00.0: supports D1 D2 Dec 13 15:03:44.209259 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 15:03:44.209330 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 Dec 13 15:03:44.209394 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] Dec 13 15:03:44.209457 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 15:03:44.209513 kernel: pci_bus 0004:00: on NUMA node 0 Dec 13 15:03:44.209577 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Dec 13 15:03:44.209640 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 15:03:44.209701 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Dec 13 15:03:44.209762 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 15:03:44.209829 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 15:03:44.209891 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.209952 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 15:03:44.210017 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.210079 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.210140 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] Dec 13 15:03:44.210202 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.210262 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] Dec 13 15:03:44.210323 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.210384 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210447 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210508 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210569 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210630 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210692 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210756 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210822 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.210885 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.210946 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211010 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.211070 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211135 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211198 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] Dec 13 15:03:44.211262 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] Dec 13 15:03:44.211328 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] Dec 13 
15:03:44.211395 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] Dec 13 15:03:44.211460 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] Dec 13 15:03:44.211527 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] Dec 13 15:03:44.211591 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Dec 13 15:03:44.211653 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211716 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Dec 13 15:03:44.211776 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.211958 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.212029 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] Dec 13 15:03:44.212092 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Dec 13 15:03:44.212157 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Dec 13 15:03:44.212219 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.212281 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Dec 13 15:03:44.212342 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Dec 13 15:03:44.212404 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.212461 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 15:03:44.212520 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Dec 13 15:03:44.212575 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Dec 13 15:03:44.212643 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.212701 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Dec 13 15:03:44.212762 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Dec 13 15:03:44.212832 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Dec 13 15:03:44.212892 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Dec 13 15:03:44.212957 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Dec 13 15:03:44.213014 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Dec 13 15:03:44.213024 kernel: iommu: Default domain type: Translated Dec 13 15:03:44.213032 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 15:03:44.213040 kernel: efivars: Registered efivars operations Dec 13 15:03:44.213105 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Dec 13 15:03:44.213171 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Dec 13 15:03:44.213239 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Dec 13 15:03:44.213249 kernel: vgaarb: loaded Dec 13 15:03:44.213257 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 15:03:44.213265 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 15:03:44.213273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 15:03:44.213281 kernel: pnp: PnP ACPI init Dec 13 15:03:44.213347 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Dec 13 15:03:44.213406 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Dec 13 15:03:44.213463 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Dec 13 15:03:44.213518 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved Dec 13 15:03:44.213574 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Dec 13 15:03:44.213629 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Dec 13 15:03:44.213686 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Dec 13 15:03:44.213741 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Dec 13 15:03:44.213754 kernel: pnp: PnP ACPI: found 1 devices Dec 13 15:03:44.213762 kernel: NET: Registered PF_INET protocol family Dec 13 15:03:44.213769 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213777 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 15:03:44.213785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 15:03:44.213796 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 15:03:44.213804 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213814 kernel: TCP: Hash tables configured (established 524288 bind 65536) Dec 13 15:03:44.213822 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213831 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 15:03:44.213839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 15:03:44.213904 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Dec 13 15:03:44.213915 kernel: kvm [1]: IPA Size Limit: 48 bits Dec 13 15:03:44.213923 kernel: kvm [1]: GICv3: no GICV resource entry Dec 13 15:03:44.213931 kernel: kvm [1]: disabling GICv2 emulation Dec 13 15:03:44.213939 kernel: kvm [1]: GIC system register CPU interface enabled Dec 13 15:03:44.213946 kernel: kvm [1]: vgic interrupt IRQ9 Dec 13 15:03:44.213954 kernel: kvm [1]: VHE mode initialized successfully Dec 13 15:03:44.213964 kernel: Initialise system trusted keyrings Dec 13 15:03:44.213972 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Dec 13 15:03:44.213979 kernel: Key type asymmetric registered Dec 13 15:03:44.213987 kernel: Asymmetric key parser 'x509' registered Dec 13 15:03:44.213994 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 15:03:44.214002 kernel: io scheduler mq-deadline registered Dec 13 15:03:44.214010 kernel: io scheduler kyber registered Dec 13 15:03:44.214017 kernel: io scheduler bfq registered Dec 13 15:03:44.214025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 15:03:44.214034 kernel: ACPI: button: Power Button [PWRB] Dec 13 15:03:44.214042 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
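The "system 00:00: [mem ... window] could not be reserved" lines above describe the same per-segment ECAM windows that the "MCFG quirk: ECAM at [mem ...] for [bus 00-ff]" lines already covered; those ranges were set aside earlier in the boot ("ECAM area ... reserved by PNP0C02:00"), which is presumably why the later reservation attempt fails, and the message is informational. Each window is 0x10000000 bytes, exactly the standard ECAM layout of 1 MiB of configuration space per bus for 256 buses. A hedged sketch of how a bus/device/function maps into such a window (standard PCIe ECAM arithmetic; the base address is the domain 0004 window from this log, used purely as an example):

    #!/usr/bin/env python3
    # Sketch: locate a function's configuration space inside an ECAM window
    # (4 KiB per function, 8 functions per device, 32 devices per bus).
    ECAM_BASE = 0x2BFFF0000000  # domain 0004: ECAM at [mem 0x2bfff0000000-0x2bffffffffff]

    def ecam_addr(bus: int, dev: int, fn: int, base: int = ECAM_BASE) -> int:
        return base + (bus << 20) + (dev << 15) + (fn << 12)

    print(hex(ecam_addr(0x02, 0x00, 0x0)))   # 0x2bfff0200000, the VGA at 0004:02:00.0
    print(hex(ECAM_BASE + (256 << 20) - 1))  # 0x2bffffffffff, last byte of the window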
Dec 13 15:03:44.214050 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 15:03:44.214120 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Dec 13 15:03:44.214180 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214238 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.214294 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.214354 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Dec 13 15:03:44.214410 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Dec 13 15:03:44.214475 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Dec 13 15:03:44.214532 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214589 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.214646 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.214703 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Dec 13 15:03:44.214762 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Dec 13 15:03:44.214831 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Dec 13 15:03:44.214889 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.214946 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215002 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215060 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215120 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Dec 13 15:03:44.215183 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Dec 13 15:03:44.215241 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.215297 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215354 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215411 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215468 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Dec 13 15:03:44.215542 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Dec 13 15:03:44.215600 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.215657 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.215714 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.215770 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Dec 13 15:03:44.215831 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Dec 13 15:03:44.215899 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Dec 13 15:03:44.215957 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216014 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216071 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216128 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Dec 13 
15:03:44.216184 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq Dec 13 15:03:44.216250 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Dec 13 15:03:44.216309 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216369 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216427 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216485 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Dec 13 15:03:44.216541 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Dec 13 15:03:44.216607 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Dec 13 15:03:44.216666 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Dec 13 15:03:44.216723 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) Dec 13 15:03:44.216780 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Dec 13 15:03:44.216841 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Dec 13 15:03:44.216898 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Dec 13 15:03:44.216909 kernel: thunder_xcv, ver 1.0 Dec 13 15:03:44.216917 kernel: thunder_bgx, ver 1.0 Dec 13 15:03:44.216926 kernel: nicpf, ver 1.0 Dec 13 15:03:44.216934 kernel: nicvf, ver 1.0 Dec 13 15:03:44.216999 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 15:03:44.217058 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T15:03:42 UTC (1734102222) Dec 13 15:03:44.217068 kernel: efifb: probing for efifb Dec 13 15:03:44.217076 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Dec 13 15:03:44.217084 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Dec 13 15:03:44.217092 kernel: efifb: scrolling: redraw Dec 13 15:03:44.217101 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 15:03:44.217109 kernel: Console: switching to colour frame buffer device 100x37 Dec 13 15:03:44.217117 kernel: fb0: EFI VGA frame buffer device Dec 13 15:03:44.217124 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Dec 13 15:03:44.217132 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 15:03:44.217140 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 15:03:44.217148 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 15:03:44.217156 kernel: watchdog: Hard watchdog permanently disabled Dec 13 15:03:44.217163 kernel: NET: Registered PF_INET6 protocol family Dec 13 15:03:44.217172 kernel: Segment Routing with IPv6 Dec 13 15:03:44.217180 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 15:03:44.217188 kernel: NET: Registered PF_PACKET protocol family Dec 13 15:03:44.217196 kernel: Key type dns_resolver registered Dec 13 15:03:44.217203 kernel: registered taskstats version 1 Dec 13 15:03:44.217211 kernel: Loading compiled-in X.509 certificates Dec 13 15:03:44.217219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 15:03:44.217226 kernel: Key type .fscrypt registered Dec 13 15:03:44.217234 kernel: Key type fscrypt-provisioning registered Dec 13 15:03:44.217241 kernel: ima: No TPM chip found, activating TPM-bypass! 
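A small cross-check on the clock line above: "rtc-efi rtc-efi.0: setting system clock to 2024-12-13T15:03:42 UTC (1734102222)" pairs the EFI RTC reading with its Unix timestamp, and the two values are consistent. Plain-Python verification, nothing platform-specific assumed:

    #!/usr/bin/env python3
    # Consistency check for the rtc-efi line above:
    # 2024-12-13T15:03:42 UTC should equal Unix time 1734102222.
    from datetime import datetime, timezone

    ts = int(datetime(2024, 12, 13, 15, 3, 42, tzinfo=timezone.utc).timestamp())
    assert ts == 1734102222, ts
    print(ts)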
Dec 13 15:03:44.217251 kernel: ima: Allocated hash algorithm: sha1 Dec 13 15:03:44.217258 kernel: ima: No architecture policies found Dec 13 15:03:44.217266 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 15:03:44.217330 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Dec 13 15:03:44.217394 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217458 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Dec 13 15:03:44.217521 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217584 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Dec 13 15:03:44.217649 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217711 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Dec 13 15:03:44.217773 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Dec 13 15:03:44.217840 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Dec 13 15:03:44.217902 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Dec 13 15:03:44.217964 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Dec 13 15:03:44.218026 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218088 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Dec 13 15:03:44.218151 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218216 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Dec 13 15:03:44.218278 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Dec 13 15:03:44.218341 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Dec 13 15:03:44.218402 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218465 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Dec 13 15:03:44.218527 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218590 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Dec 13 15:03:44.218651 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218717 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Dec 13 15:03:44.218779 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Dec 13 15:03:44.218845 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Dec 13 15:03:44.218906 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Dec 13 15:03:44.218969 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Dec 13 15:03:44.219030 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Dec 13 15:03:44.219094 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Dec 13 15:03:44.219155 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Dec 13 15:03:44.219221 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Dec 13 15:03:44.219284 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219346 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Dec 13 15:03:44.219408 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219470 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Dec 13 15:03:44.219533 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219596 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Dec 13 15:03:44.219658 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Dec 13 15:03:44.219723 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Dec 13 15:03:44.219786 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Dec 13 15:03:44.219851 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Dec 13 15:03:44.219914 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Dec 13 15:03:44.219976 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Dec 13 15:03:44.220039 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Dec 13 15:03:44.220101 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Dec 13 15:03:44.220163 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Dec 13 15:03:44.220225 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Dec 13 15:03:44.220289 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220353 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Dec 13 15:03:44.220414 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220477 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Dec 13 15:03:44.220538 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220600 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Dec 13 15:03:44.220661 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Dec 13 15:03:44.220726 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Dec 13 15:03:44.220790 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Dec 13 15:03:44.220857 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Dec 13 15:03:44.220918 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Dec 13 15:03:44.220981 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Dec 13 15:03:44.221043 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Dec 13 15:03:44.221108 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Dec 13 15:03:44.221118 kernel: clk: Disabling unused clocks Dec 13 15:03:44.221128 kernel: Freeing unused kernel memory: 39936K Dec 13 15:03:44.221136 kernel: Run /init as init process Dec 13 15:03:44.221143 kernel: with arguments: Dec 13 15:03:44.221151 kernel: /init Dec 13 15:03:44.221159 kernel: with environment: Dec 13 15:03:44.221166 kernel: HOME=/ Dec 13 15:03:44.221173 kernel: TERM=linux Dec 13 15:03:44.221181 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 15:03:44.221191 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 15:03:44.221202 systemd[1]: Detected architecture arm64. Dec 13 15:03:44.221210 systemd[1]: Running in initrd. Dec 13 15:03:44.221218 systemd[1]: No hostname configured, using default hostname. Dec 13 15:03:44.221226 systemd[1]: Hostname set to . Dec 13 15:03:44.221234 systemd[1]: Initializing machine ID from random generator. Dec 13 15:03:44.221242 systemd[1]: Queued start job for default target initrd.target. Dec 13 15:03:44.221250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 15:03:44.221260 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 15:03:44.221268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 15:03:44.221276 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 15:03:44.221284 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 15:03:44.221293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Dec 13 15:03:44.221301 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 15:03:44.221310 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 15:03:44.221320 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 15:03:44.221328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 15:03:44.221336 systemd[1]: Reached target paths.target - Path Units. Dec 13 15:03:44.221344 systemd[1]: Reached target slices.target - Slice Units. Dec 13 15:03:44.221352 systemd[1]: Reached target swap.target - Swaps. Dec 13 15:03:44.221360 systemd[1]: Reached target timers.target - Timer Units. Dec 13 15:03:44.221368 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 15:03:44.221376 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 15:03:44.221384 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 15:03:44.221394 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 15:03:44.221402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 15:03:44.221410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 15:03:44.221418 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 15:03:44.221426 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 15:03:44.221434 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 15:03:44.221442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 15:03:44.221450 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 15:03:44.221459 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 15:03:44.221467 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 15:03:44.221475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 15:03:44.221505 systemd-journald[900]: Collecting audit messages is disabled. Dec 13 15:03:44.221526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:44.221534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 15:03:44.221542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 15:03:44.221550 kernel: Bridge firewalling registered Dec 13 15:03:44.221559 systemd-journald[900]: Journal started Dec 13 15:03:44.221582 systemd-journald[900]: Runtime Journal (/run/log/journal/2fe2ef38df1b4399993a3e0b4b812955) is 8.0M, max 4.0G, 3.9G free. Dec 13 15:03:44.181943 systemd-modules-load[902]: Inserted module 'overlay' Dec 13 15:03:44.261858 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 15:03:44.204077 systemd-modules-load[902]: Inserted module 'br_netfilter' Dec 13 15:03:44.267554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 15:03:44.278355 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 15:03:44.289212 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 15:03:44.299927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 15:03:44.331989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:44.338081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 15:03:44.368883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 15:03:44.385548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 15:03:44.402723 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:44.419457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 15:03:44.425227 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 15:03:44.438200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 15:03:44.466928 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 15:03:44.479971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 15:03:44.490489 dracut-cmdline[940]: dracut-dracut-053 Dec 13 15:03:44.499408 dracut-cmdline[940]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 15:03:44.493577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 15:03:44.507581 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 15:03:44.516038 systemd-resolved[947]: Positive Trust Anchors: Dec 13 15:03:44.516047 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:03:44.516077 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 15:03:44.530760 systemd-resolved[947]: Defaulting to hostname 'linux'. Dec 13 15:03:44.544591 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 15:03:44.563850 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 15:03:44.667363 kernel: SCSI subsystem initialized Dec 13 15:03:44.678797 kernel: Loading iSCSI transport class v2.0-870. Dec 13 15:03:44.697800 kernel: iscsi: registered transport (tcp) Dec 13 15:03:44.725211 kernel: iscsi: registered transport (qla4xxx) Dec 13 15:03:44.725232 kernel: QLogic iSCSI HBA Driver Dec 13 15:03:44.768775 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 15:03:44.785959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 15:03:44.832214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 15:03:44.832245 kernel: device-mapper: uevent: version 1.0.3 Dec 13 15:03:44.841826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 15:03:44.907802 kernel: raid6: neonx8 gen() 15848 MB/s Dec 13 15:03:44.932801 kernel: raid6: neonx4 gen() 15883 MB/s Dec 13 15:03:44.957800 kernel: raid6: neonx2 gen() 13324 MB/s Dec 13 15:03:44.982801 kernel: raid6: neonx1 gen() 10464 MB/s Dec 13 15:03:45.007800 kernel: raid6: int64x8 gen() 6811 MB/s Dec 13 15:03:45.032801 kernel: raid6: int64x4 gen() 7375 MB/s Dec 13 15:03:45.057801 kernel: raid6: int64x2 gen() 6134 MB/s Dec 13 15:03:45.085722 kernel: raid6: int64x1 gen() 5077 MB/s Dec 13 15:03:45.085743 kernel: raid6: using algorithm neonx4 gen() 15883 MB/s Dec 13 15:03:45.120146 kernel: raid6: .... xor() 12520 MB/s, rmw enabled Dec 13 15:03:45.120167 kernel: raid6: using neon recovery algorithm Dec 13 15:03:45.143052 kernel: xor: measuring software checksum speed Dec 13 15:03:45.143075 kernel: 8regs : 21630 MB/sec Dec 13 15:03:45.150960 kernel: 32regs : 21704 MB/sec Dec 13 15:03:45.158683 kernel: arm64_neon : 28099 MB/sec Dec 13 15:03:45.166305 kernel: xor: using function: arm64_neon (28099 MB/sec) Dec 13 15:03:45.226800 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 15:03:45.236233 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 15:03:45.254966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 15:03:45.270266 systemd-udevd[1137]: Using default interface naming scheme 'v255'. Dec 13 15:03:45.273231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 15:03:45.288934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 15:03:45.302793 dracut-pre-trigger[1147]: rd.md=0: removing MD RAID activation Dec 13 15:03:45.328846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 15:03:45.350908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 15:03:45.451174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 15:03:45.480439 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 15:03:45.480460 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 15:03:45.501983 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 15:03:45.549009 kernel: ACPI: bus type USB registered Dec 13 15:03:45.549025 kernel: usbcore: registered new interface driver usbfs Dec 13 15:03:45.549035 kernel: usbcore: registered new interface driver hub Dec 13 15:03:45.549048 kernel: PTP clock support registered Dec 13 15:03:45.549058 kernel: usbcore: registered new device driver usb Dec 13 15:03:45.543832 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 15:03:45.707533 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 15:03:45.707546 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Dec 13 15:03:45.707556 kernel: igb 0003:03:00.0: Adding to iommu group 31 Dec 13 15:03:45.764966 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 32 Dec 13 15:03:46.016521 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Dec 13 15:03:46.016678 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 15:03:46.016770 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Dec 13 15:03:46.016928 kernel: nvme 0005:03:00.0: Adding to iommu group 33 Dec 13 15:03:46.248281 kernel: igb 0003:03:00.0: added PHC on eth0 Dec 13 15:03:46.248453 kernel: nvme 0005:04:00.0: Adding to iommu group 34 Dec 13 15:03:46.248552 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 15:03:46.248643 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Dec 13 15:03:46.700848 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:98 Dec 13 15:03:46.700941 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Dec 13 15:03:46.701017 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Dec 13 15:03:46.701095 kernel: igb 0003:03:00.1: Adding to iommu group 36 Dec 13 15:03:46.701177 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000410 Dec 13 15:03:46.701257 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Dec 13 15:03:46.701332 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 15:03:46.701406 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 15:03:46.701480 kernel: nvme nvme0: pci function 0005:03:00.0 Dec 13 15:03:46.701571 kernel: hub 1-0:1.0: USB hub found Dec 13 15:03:46.701677 kernel: hub 1-0:1.0: 4 ports detected Dec 13 15:03:46.701762 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 15:03:46.701911 kernel: nvme nvme1: pci function 0005:04:00.0 Dec 13 15:03:46.701997 kernel: hub 2-0:1.0: USB hub found Dec 13 15:03:46.702089 kernel: hub 2-0:1.0: 4 ports detected Dec 13 15:03:46.702172 kernel: nvme nvme0: Shutdown timeout set to 8 seconds Dec 13 15:03:46.702246 kernel: nvme nvme1: Shutdown timeout set to 8 seconds Dec 13 15:03:46.702316 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Dec 13 15:03:46.702395 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:46.702471 kernel: nvme nvme0: 32/0/0 default/read/poll queues Dec 13 15:03:46.702542 kernel: igb 0003:03:00.1: added PHC on eth1 Dec 13 15:03:46.702618 kernel: nvme nvme1: 32/0/0 default/read/poll queues Dec 13 15:03:46.702688 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Dec 13 15:03:46.702765 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:54:99 Dec 13 15:03:46.702847 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Dec 13 15:03:46.702925 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Dec 13 15:03:46.702998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 15:03:46.703008 kernel: GPT:9289727 != 1875385007 Dec 13 15:03:46.703018 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 15:03:46.703027 kernel: GPT:9289727 != 1875385007 Dec 13 15:03:46.703036 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 15:03:46.703045 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703056 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Dec 13 15:03:46.703133 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (1187) Dec 13 15:03:46.703143 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Dec 13 15:03:46.703217 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1188) Dec 13 15:03:46.703227 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Dec 13 15:03:46.703350 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Dec 13 15:03:46.703429 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703442 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:46.703451 kernel: hub 1-3:1.0: USB hub found Dec 13 15:03:46.703545 kernel: hub 1-3:1.0: 4 ports detected Dec 13 15:03:46.703631 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Dec 13 15:03:46.703756 kernel: hub 2-3:1.0: USB hub found Dec 13 15:03:46.703858 kernel: hub 2-3:1.0: 4 ports detected Dec 13 15:03:46.703944 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 15:03:46.704023 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Dec 13 15:03:47.388420 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Dec 13 15:03:47.388540 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Dec 13 15:03:47.388618 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Dec 13 15:03:47.388691 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Dec 13 15:03:45.707588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 15:03:47.429889 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Dec 13 15:03:47.430007 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Dec 13 15:03:47.430090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 15:03:45.713670 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 15:03:45.719616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 15:03:45.725186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 15:03:45.725332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:45.730986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:47.469602 disk-uuid[1291]: Primary Header is updated. Dec 13 15:03:47.469602 disk-uuid[1291]: Secondary Entries is updated. Dec 13 15:03:47.469602 disk-uuid[1291]: Secondary Header is updated. Dec 13 15:03:45.748006 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 15:03:47.490903 disk-uuid[1292]: The operation has completed successfully. Dec 13 15:03:45.763182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:03:45.763329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.794506 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.806110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.812337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Dec 13 15:03:45.818965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:03:45.819055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.824126 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.845064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:03:45.854732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:03:45.860694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 15:03:46.040539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:03:47.588335 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 15:03:46.268588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Dec 13 15:03:46.338742 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Dec 13 15:03:47.608230 sh[1485]: Success Dec 13 15:03:46.346767 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Dec 13 15:03:46.351229 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Dec 13 15:03:46.359802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Dec 13 15:03:46.383898 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 15:03:47.753841 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 15:03:47.753858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:47.753867 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 15:03:47.753877 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 15:03:47.753890 kernel: BTRFS info (device dm-0): using free space tree Dec 13 15:03:47.753900 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 15:03:47.518290 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 15:03:47.518408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 15:03:47.554950 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 15:03:47.615422 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 15:03:47.642277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 15:03:47.892960 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:47.892990 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:47.893009 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:47.893027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:47.893046 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:47.893070 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:47.656474 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 15:03:47.760311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 15:03:47.771716 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 15:03:47.783895 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 15:03:47.796187 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 15:03:47.905002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 15:03:47.935924 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 15:03:47.999829 ignition[1562]: Ignition 2.20.0 Dec 13 15:03:47.999837 ignition[1562]: Stage: fetch-offline Dec 13 15:03:47.999881 ignition[1562]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:47.999889 ignition[1562]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:48.011886 unknown[1562]: fetched base config from "system" Dec 13 15:03:48.000139 ignition[1562]: parsed url from cmdline: "" Dec 13 15:03:48.011894 unknown[1562]: fetched user config from "system" Dec 13 15:03:48.000142 ignition[1562]: no config URL provided Dec 13 15:03:48.015012 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 15:03:48.000146 ignition[1562]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:03:48.039476 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 15:03:48.000195 ignition[1562]: parsing config with SHA512: 7ee626286532049159df5e4abe234d2d075ff13a0bf65f8c77d677a2042f4e63221973b0bcc408e61a1e6e873b15be32d7182a09401fde59edd54cf5c0ca7252 Dec 13 15:03:48.056912 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 15:03:48.012586 ignition[1562]: fetch-offline: fetch-offline passed Dec 13 15:03:48.082525 systemd-networkd[1709]: lo: Link UP Dec 13 15:03:48.012591 ignition[1562]: POST message to Packet Timeline Dec 13 15:03:48.082530 systemd-networkd[1709]: lo: Gained carrier Dec 13 15:03:48.012597 ignition[1562]: POST Status error: resource requires networking Dec 13 15:03:48.086296 systemd-networkd[1709]: Enumeration completed Dec 13 15:03:48.012669 ignition[1562]: Ignition finished successfully Dec 13 15:03:48.086359 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 15:03:48.128078 ignition[1712]: Ignition 2.20.0 Dec 13 15:03:48.087418 systemd-networkd[1709]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:48.128083 ignition[1712]: Stage: kargs Dec 13 15:03:48.092853 systemd[1]: Reached target network.target - Network. Dec 13 15:03:48.128230 ignition[1712]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:48.102412 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 15:03:48.128239 ignition[1712]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:48.111928 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 15:03:48.129147 ignition[1712]: kargs: kargs passed Dec 13 15:03:48.139493 systemd-networkd[1709]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:48.129151 ignition[1712]: POST message to Packet Timeline Dec 13 15:03:48.193300 systemd-networkd[1709]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 15:03:48.129357 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:48.132669 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34072->[::1]:53: read: connection refused Dec 13 15:03:48.332936 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 15:03:48.333312 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41782->[::1]:53: read: connection refused Dec 13 15:03:48.733709 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 15:03:48.734655 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58893->[::1]:53: read: connection refused Dec 13 15:03:48.767800 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Dec 13 15:03:48.770553 systemd-networkd[1709]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:03:49.374805 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Dec 13 15:03:49.377540 systemd-networkd[1709]: eno1: Link UP Dec 13 15:03:49.377747 systemd-networkd[1709]: eno2: Link UP Dec 13 15:03:49.377885 systemd-networkd[1709]: enP1p1s0f0np0: Link UP Dec 13 15:03:49.378031 systemd-networkd[1709]: enP1p1s0f0np0: Gained carrier Dec 13 15:03:49.388937 systemd-networkd[1709]: enP1p1s0f1np1: Link UP Dec 13 15:03:49.422844 systemd-networkd[1709]: enP1p1s0f0np0: DHCPv4 address 147.28.228.225/31, gateway 147.28.228.224 acquired from 147.28.144.140 Dec 13 15:03:49.535335 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 15:03:49.536235 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38300->[::1]:53: read: connection refused Dec 13 15:03:49.778242 systemd-networkd[1709]: enP1p1s0f1np1: Gained carrier Dec 13 15:03:50.658046 systemd-networkd[1709]: enP1p1s0f0np0: Gained IPv6LL Dec 13 15:03:51.137535 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 15:03:51.138346 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38837->[::1]:53: read: connection refused Dec 13 15:03:51.234167 systemd-networkd[1709]: enP1p1s0f1np1: Gained IPv6LL Dec 13 15:03:54.341222 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 15:03:55.328886 ignition[1712]: GET result: OK Dec 13 15:03:55.599134 ignition[1712]: Ignition finished successfully Dec 13 15:03:55.602896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 15:03:55.616906 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 15:03:55.628820 ignition[1732]: Ignition 2.20.0 Dec 13 15:03:55.628827 ignition[1732]: Stage: disks Dec 13 15:03:55.628981 ignition[1732]: no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:55.628990 ignition[1732]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:55.630469 ignition[1732]: disks: disks passed Dec 13 15:03:55.630473 ignition[1732]: POST message to Packet Timeline Dec 13 15:03:55.630491 ignition[1732]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:56.372910 ignition[1732]: GET result: OK Dec 13 15:03:56.637614 ignition[1732]: Ignition finished successfully Dec 13 15:03:56.641841 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 15:03:56.646842 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 15:03:56.654599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 15:03:56.662955 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 15:03:56.671806 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 15:03:56.681130 systemd[1]: Reached target basic.target - Basic System. Dec 13 15:03:56.699941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 15:03:56.716164 systemd-fsck[1751]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 15:03:56.719852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 15:03:56.737866 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 15:03:56.806797 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 15:03:56.807222 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 15:03:56.817687 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 15:03:56.842864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 15:03:56.851796 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1762) Dec 13 15:03:56.851812 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:56.851823 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:56.851832 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:56.852795 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:56.852806 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:56.948871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 15:03:56.955311 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 15:03:56.966152 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Dec 13 15:03:56.982209 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 15:03:56.982236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 15:03:56.995453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 15:03:57.026145 coreos-metadata[1785]: Dec 13 15:03:57.012 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:03:57.041877 coreos-metadata[1781]: Dec 13 15:03:57.012 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:03:57.009401 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 15:03:57.030909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 15:03:57.070192 initrd-setup-root[1807]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 15:03:57.076222 initrd-setup-root[1815]: cut: /sysroot/etc/group: No such file or directory Dec 13 15:03:57.082072 initrd-setup-root[1823]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 15:03:57.087996 initrd-setup-root[1830]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 15:03:57.155535 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 15:03:57.176868 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 15:03:57.188679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 15:03:57.213963 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:57.219822 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 15:03:57.231567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 15:03:57.237001 ignition[1905]: INFO : Ignition 2.20.0 Dec 13 15:03:57.237001 ignition[1905]: INFO : Stage: mount Dec 13 15:03:57.237001 ignition[1905]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:57.237001 ignition[1905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:57.268380 ignition[1905]: INFO : mount: mount passed Dec 13 15:03:57.268380 ignition[1905]: INFO : POST message to Packet Timeline Dec 13 15:03:57.268380 ignition[1905]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:03:57.781699 coreos-metadata[1781]: Dec 13 15:03:57.781 INFO Fetch successful Dec 13 15:03:57.824130 coreos-metadata[1781]: Dec 13 15:03:57.824 INFO wrote hostname ci-4186.0.0-a-a49a1da819 to /sysroot/etc/hostname Dec 13 15:03:57.827318 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 15:03:58.162566 coreos-metadata[1785]: Dec 13 15:03:58.162 INFO Fetch successful Dec 13 15:03:58.207889 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 15:03:58.208074 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Dec 13 15:03:58.801329 ignition[1905]: INFO : GET result: OK Dec 13 15:03:59.102850 ignition[1905]: INFO : Ignition finished successfully Dec 13 15:03:59.104913 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 15:03:59.125854 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 15:03:59.134239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 15:03:59.162799 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1931) Dec 13 15:03:59.187200 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 15:03:59.187224 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 15:03:59.200321 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 15:03:59.223553 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 15:03:59.223574 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard Dec 13 15:03:59.231709 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 15:03:59.264543 ignition[1951]: INFO : Ignition 2.20.0 Dec 13 15:03:59.264543 ignition[1951]: INFO : Stage: files Dec 13 15:03:59.274357 ignition[1951]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:03:59.274357 ignition[1951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:03:59.274357 ignition[1951]: DEBUG : files: compiled without relabeling support, skipping Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 15:03:59.274357 ignition[1951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 15:03:59.274357 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 15:03:59.270273 unknown[1951]: wrote ssh authorized keys file for user: core Dec 13 15:03:59.389427 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 15:03:59.542863 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.553386 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 15:03:59.734903 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 15:03:59.901187 ignition[1951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:03:59.913607 ignition[1951]: INFO : files: files passed Dec 13 15:03:59.913607 ignition[1951]: INFO : POST message to Packet Timeline Dec 13 15:03:59.913607 ignition[1951]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:04:00.997656 ignition[1951]: INFO : GET result: OK Dec 13 15:04:01.307061 ignition[1951]: INFO : Ignition finished successfully Dec 13 15:04:01.310233 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 15:04:01.332976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 15:04:01.339837 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 15:04:01.351843 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 15:04:01.351922 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 15:04:01.387196 initrd-setup-root-after-ignition[1992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.387196 initrd-setup-root-after-ignition[1992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.370157 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 15:04:01.433721 initrd-setup-root-after-ignition[1996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:04:01.383061 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 15:04:01.402967 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 15:04:01.436361 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 15:04:01.436457 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 15:04:01.451096 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 15:04:01.462362 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 15:04:01.479173 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 15:04:01.491899 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 15:04:01.518223 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 15:04:01.543003 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 15:04:01.557210 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 15:04:01.566551 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 15:04:01.577954 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 15:04:01.589391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 15:04:01.589507 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 15:04:01.600997 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 15:04:01.612131 systemd[1]: Stopped target basic.target - Basic System. Dec 13 15:04:01.623438 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 15:04:01.634747 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 15:04:01.645975 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 15:04:01.657121 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 15:04:01.668208 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 15:04:01.679338 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 15:04:01.690483 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 15:04:01.707248 systemd[1]: Stopped target swap.target - Swaps. Dec 13 15:04:01.718525 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 15:04:01.718622 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 15:04:01.730027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 15:04:01.741081 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 15:04:01.752347 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 15:04:01.755830 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 15:04:01.763664 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 15:04:01.763763 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 15:04:01.775139 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 15:04:01.775263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 15:04:01.786439 systemd[1]: Stopped target paths.target - Path Units. Dec 13 15:04:01.797630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 15:04:01.800815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 15:04:01.814832 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 15:04:01.826343 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 15:04:01.837829 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 15:04:01.837939 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 15:04:01.937753 ignition[2018]: INFO : Ignition 2.20.0 Dec 13 15:04:01.937753 ignition[2018]: INFO : Stage: umount Dec 13 15:04:01.937753 ignition[2018]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 15:04:01.937753 ignition[2018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 15:04:01.937753 ignition[2018]: INFO : umount: umount passed Dec 13 15:04:01.937753 ignition[2018]: INFO : POST message to Packet Timeline Dec 13 15:04:01.937753 ignition[2018]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 15:04:01.849470 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 15:04:01.849545 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 15:04:01.861177 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 15:04:01.861266 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 15:04:01.872833 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 15:04:01.872917 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 15:04:01.884429 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 15:04:01.884512 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 15:04:01.907994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 15:04:01.919803 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 15:04:01.919911 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 15:04:01.943983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 15:04:01.955623 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 15:04:01.955732 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 15:04:01.967048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 15:04:01.967135 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 15:04:01.986306 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 15:04:01.988316 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 15:04:01.988394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 15:04:02.011975 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 15:04:02.130797 ignition[2018]: INFO : GET result: OK Dec 13 15:04:02.012208 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 15:04:02.463587 ignition[2018]: INFO : Ignition finished successfully Dec 13 15:04:02.465774 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 15:04:02.465990 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 15:04:02.474144 systemd[1]: Stopped target network.target - Network. Dec 13 15:04:02.483681 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 15:04:02.483736 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 15:04:02.493885 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 15:04:02.493923 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 15:04:02.503586 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 15:04:02.503617 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 15:04:02.513423 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 15:04:02.513457 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 15:04:02.523424 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 15:04:02.523455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 15:04:02.533613 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 15:04:02.539820 systemd-networkd[1709]: enP1p1s0f0np0: DHCPv6 lease lost Dec 13 15:04:02.543464 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 15:04:02.549843 systemd-networkd[1709]: enP1p1s0f1np1: DHCPv6 lease lost Dec 13 15:04:02.553487 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 15:04:02.553581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 15:04:02.565478 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 15:04:02.566107 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 15:04:02.575785 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 15:04:02.575933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 15:04:02.592889 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 15:04:02.598845 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 15:04:02.598915 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 15:04:02.609186 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 15:04:02.609235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 15:04:02.619347 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 15:04:02.619397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 15:04:02.629766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 15:04:02.629796 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 15:04:02.640318 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 15:04:02.660153 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 15:04:02.660285 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 15:04:02.669376 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 15:04:02.669549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 15:04:02.678446 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 15:04:02.678503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 15:04:02.689193 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 15:04:02.689243 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 15:04:02.700370 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 15:04:02.700434 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 15:04:02.711183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 15:04:02.711235 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 15:04:02.740931 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 15:04:02.750223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 15:04:02.750269 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 15:04:02.761530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:04:02.761561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:04:02.773274 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 15:04:02.773363 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 15:04:03.318020 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 15:04:03.318176 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 15:04:03.329397 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 15:04:03.351934 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 15:04:03.365551 systemd[1]: Switching root. Dec 13 15:04:03.415618 systemd-journald[900]: Journal stopped Dec 13 15:04:05.378321 systemd-journald[900]: Received SIGTERM from PID 1 (systemd). Dec 13 15:04:05.378352 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 15:04:05.378363 kernel: SELinux: policy capability open_perms=1 Dec 13 15:04:05.378371 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 15:04:05.378378 kernel: SELinux: policy capability always_check_network=0 Dec 13 15:04:05.378386 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 15:04:05.378394 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 15:04:05.378403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 15:04:05.378411 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 15:04:05.378418 kernel: audit: type=1403 audit(1734102243.597:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 15:04:05.378427 systemd[1]: Successfully loaded SELinux policy in 118.417ms. Dec 13 15:04:05.378437 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.494ms. Dec 13 15:04:05.378446 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 15:04:05.378455 systemd[1]: Detected architecture arm64. Dec 13 15:04:05.378465 systemd[1]: Detected first boot. 
Dec 13 15:04:05.378474 systemd[1]: Hostname set to . Dec 13 15:04:05.378482 systemd[1]: Initializing machine ID from random generator. Dec 13 15:04:05.378491 zram_generator::config[2081]: No configuration found. Dec 13 15:04:05.378501 systemd[1]: Populated /etc with preset unit settings. Dec 13 15:04:05.378510 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 15:04:05.378519 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 15:04:05.378530 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 15:04:05.378539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 15:04:05.378548 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 15:04:05.378556 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 15:04:05.378566 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 15:04:05.378575 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 15:04:05.378584 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 15:04:05.378593 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 15:04:05.378602 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 15:04:05.378610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 15:04:05.378619 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 15:04:05.378628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 15:04:05.378638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 15:04:05.378647 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 15:04:05.378656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 15:04:05.378665 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 15:04:05.378673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 15:04:05.378682 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 15:04:05.378691 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 15:04:05.378702 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 15:04:05.378710 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 15:04:05.378721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 15:04:05.378730 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 15:04:05.378739 systemd[1]: Reached target slices.target - Slice Units. Dec 13 15:04:05.378748 systemd[1]: Reached target swap.target - Swaps. Dec 13 15:04:05.378757 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 15:04:05.378766 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 15:04:05.378775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 15:04:05.378785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
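[Editor's note] The first-boot sequence above records systemd initializing the machine ID from the random generator. As a minimal sketch of that idea only (not systemd's actual code path): generate a random 128-bit value and store it as 32 lowercase hex characters in /etc/machine-id if no ID is present yet.

    # Sketch only: first-boot machine ID as described in the log above
    # (random 128-bit value, 32 lowercase hex chars in /etc/machine-id).
    # Illustrative; this is not systemd's implementation.
    import os
    import uuid

    MACHINE_ID_PATH = "/etc/machine-id"  # standard location

    def ensure_machine_id(path: str = MACHINE_ID_PATH) -> str:
        if os.path.exists(path):
            with open(path) as f:
                existing = f.read().strip()
            if existing:
                return existing
        new_id = uuid.uuid4().hex  # 32 hex characters, 128 random bits
        with open(path, "w") as f:
            f.write(new_id + "\n")
        return new_id

    if __name__ == "__main__":
        print(ensure_machine_id())

Writing the file requires root; on a running system the existing ID is simply returned.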
Dec 13 15:04:05.378797 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 15:04:05.378807 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 15:04:05.378817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 15:04:05.378825 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 15:04:05.378836 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 15:04:05.378845 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 15:04:05.378854 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 15:04:05.378863 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 15:04:05.378873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 15:04:05.378882 systemd[1]: Reached target machines.target - Containers. Dec 13 15:04:05.378891 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 15:04:05.378900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 15:04:05.378911 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 15:04:05.378921 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 15:04:05.378930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 15:04:05.378939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 15:04:05.378948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 15:04:05.378957 kernel: ACPI: bus type drm_connector registered Dec 13 15:04:05.378965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 15:04:05.378974 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 15:04:05.378983 kernel: fuse: init (API version 7.39) Dec 13 15:04:05.378992 kernel: loop: module loaded Dec 13 15:04:05.379001 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 15:04:05.379010 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 15:04:05.379019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 15:04:05.379028 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 15:04:05.379037 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 15:04:05.379046 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 15:04:05.379055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 15:04:05.379085 systemd-journald[2188]: Collecting audit messages is disabled. Dec 13 15:04:05.379105 systemd-journald[2188]: Journal started Dec 13 15:04:05.379129 systemd-journald[2188]: Runtime Journal (/run/log/journal/49b8c27046174d26bcbb1c964c134d6f) is 8.0M, max 4.0G, 3.9G free. Dec 13 15:04:04.116315 systemd[1]: Queued start job for default target multi-user.target. Dec 13 15:04:04.136284 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 15:04:04.136585 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 15:04:04.137966 systemd[1]: systemd-journald.service: Consumed 3.389s CPU time. Dec 13 15:04:05.402805 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 15:04:05.429806 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 15:04:05.450806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 15:04:05.473199 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 15:04:05.473236 systemd[1]: Stopped verity-setup.service. Dec 13 15:04:05.497868 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 15:04:05.503407 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 15:04:05.508877 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 15:04:05.514219 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 15:04:05.519490 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 15:04:05.524738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 15:04:05.530219 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 15:04:05.535504 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 15:04:05.540798 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 15:04:05.547192 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 15:04:05.547357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 15:04:05.552590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:04:05.552736 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 15:04:05.557988 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 15:04:05.558139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 15:04:05.563475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:04:05.563627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 15:04:05.568849 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 15:04:05.568994 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 15:04:05.574069 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:04:05.574215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 15:04:05.579300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 15:04:05.584412 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 15:04:05.590819 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 15:04:05.596016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 15:04:05.611855 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 15:04:05.629935 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 15:04:05.635950 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 15:04:05.640778 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 15:04:05.640807 systemd[1]: Reached target local-fs.target - Local File Systems. 
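[Editor's note] The modprobe@ units above load configfs, dm_mod, drm, efi_pstore, fuse and loop, and the kernel confirms fuse and loop initialising. A small check, assuming the standard /proc/modules and /sys/module interfaces, to confirm those modules are actually resident (built-in modules do not appear in /proc/modules, hence the second lookup):

    # Sketch: verify the modules loaded by the modprobe@ units above.
    # /proc/modules lists loadable modules; /sys/module also covers built-ins.
    import os

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def loaded_modules() -> set[str]:
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    def present(name: str, loaded: set[str]) -> bool:
        return name in loaded or os.path.isdir(f"/sys/module/{name}")

    if __name__ == "__main__":
        loaded = loaded_modules()
        for mod in MODULES:
            print(f"{mod}: {'present' if present(mod, loaded) else 'missing'}")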
Dec 13 15:04:05.646350 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 15:04:05.652164 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 15:04:05.658056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 15:04:05.662938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 15:04:05.664378 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 15:04:05.670065 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 15:04:05.674915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:04:05.675962 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 15:04:05.680702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 15:04:05.681376 systemd-journald[2188]: Time spent on flushing to /var/log/journal/49b8c27046174d26bcbb1c964c134d6f is 26.016ms for 2352 entries. Dec 13 15:04:05.681376 systemd-journald[2188]: System Journal (/var/log/journal/49b8c27046174d26bcbb1c964c134d6f) is 8.0M, max 195.6M, 187.6M free. Dec 13 15:04:05.724477 systemd-journald[2188]: Received client request to flush runtime journal. Dec 13 15:04:05.724522 kernel: loop0: detected capacity change from 0 to 116784 Dec 13 15:04:05.682298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 15:04:05.700906 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 15:04:05.706745 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 15:04:05.712460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 15:04:05.738798 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 15:04:05.749169 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 15:04:05.753847 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 15:04:05.758433 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 15:04:05.763870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 15:04:05.769188 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 15:04:05.773920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 15:04:05.778606 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 15:04:05.782801 kernel: loop1: detected capacity change from 0 to 113552 Dec 13 15:04:05.799035 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 15:04:05.815210 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 15:04:05.821161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 15:04:05.826881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 15:04:05.827620 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 15:04:05.833460 udevadm[2226]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 15:04:05.842866 systemd-tmpfiles[2249]: ACLs are not supported, ignoring. Dec 13 15:04:05.842878 systemd-tmpfiles[2249]: ACLs are not supported, ignoring. Dec 13 15:04:05.846735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 15:04:05.846798 kernel: loop2: detected capacity change from 0 to 8 Dec 13 15:04:05.913808 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 15:04:05.919366 ldconfig[2216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 15:04:05.921085 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 15:04:05.964805 kernel: loop4: detected capacity change from 0 to 116784 Dec 13 15:04:05.980805 kernel: loop5: detected capacity change from 0 to 113552 Dec 13 15:04:05.996805 kernel: loop6: detected capacity change from 0 to 8 Dec 13 15:04:06.008800 kernel: loop7: detected capacity change from 0 to 194512 Dec 13 15:04:06.015040 (sd-merge)[2256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Dec 13 15:04:06.015470 (sd-merge)[2256]: Merged extensions into '/usr'. Dec 13 15:04:06.023111 systemd[1]: Reloading requested from client PID 2222 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 15:04:06.023123 systemd[1]: Reloading... Dec 13 15:04:06.064802 zram_generator::config[2282]: No configuration found. Dec 13 15:04:06.156474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:04:06.204849 systemd[1]: Reloading finished in 181 ms. Dec 13 15:04:06.232327 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 15:04:06.237434 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 15:04:06.258006 systemd[1]: Starting ensure-sysext.service... Dec 13 15:04:06.264078 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 15:04:06.270616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 15:04:06.277418 systemd[1]: Reloading requested from client PID 2335 ('systemctl') (unit ensure-sysext.service)... Dec 13 15:04:06.277427 systemd[1]: Reloading... Dec 13 15:04:06.284091 systemd-tmpfiles[2337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 15:04:06.284288 systemd-tmpfiles[2337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 15:04:06.284895 systemd-tmpfiles[2337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 15:04:06.285091 systemd-tmpfiles[2337]: ACLs are not supported, ignoring. Dec 13 15:04:06.285139 systemd-tmpfiles[2337]: ACLs are not supported, ignoring. Dec 13 15:04:06.287714 systemd-tmpfiles[2337]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 15:04:06.287721 systemd-tmpfiles[2337]: Skipping /boot Dec 13 15:04:06.295926 systemd-tmpfiles[2337]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 15:04:06.295934 systemd-tmpfiles[2337]: Skipping /boot Dec 13 15:04:06.296199 systemd-udevd[2338]: Using default interface naming scheme 'v255'. 
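[Editor's note] systemd-tmpfiles warns above about duplicate lines for /root, /var/log/journal and /var/lib/systemd across tmpfiles.d fragments. A rough diagnostic sketch that scans the usual tmpfiles.d directories and reports path columns declared more than once; it ignores the override semantics between /etc, /run and /usr and only looks at the second field, so treat it as a starting point rather than a faithful reimplementation of the tmpfiles logic:

    # Rough sketch: find tmpfiles.d entries whose path column appears in more
    # than one place, mirroring the "Duplicate line for path" warnings above.
    import collections
    import glob
    import os

    TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    def scan() -> dict[str, list[str]]:
        seen = collections.defaultdict(list)
        for d in TMPFILES_DIRS:
            for conf in sorted(glob.glob(os.path.join(d, "*.conf"))):
                with open(conf) as f:
                    for lineno, line in enumerate(f, 1):
                        line = line.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        if len(fields) >= 2:
                            seen[fields[1]].append(f"{conf}:{lineno}")
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    if __name__ == "__main__":
        for path, locs in sorted(scan().items()):
            print(path, "->", ", ".join(locs))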
Dec 13 15:04:06.320800 zram_generator::config[2367]: No configuration found. Dec 13 15:04:06.321902 (udev-worker)[2379]: loop4: Failed to create/update device symlink '/dev/disk/by-loop-inode/0:34-30523', ignoring: No such file or directory Dec 13 15:04:06.350837 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (2374) Dec 13 15:04:06.350912 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2394) Dec 13 15:04:06.371812 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (2374) Dec 13 15:04:06.409802 kernel: IPMI message handler: version 39.2 Dec 13 15:04:06.419799 kernel: ipmi device interface Dec 13 15:04:06.431799 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 15:04:06.431830 kernel: ipmi_si: IPMI System Interface driver Dec 13 15:04:06.447914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:04:06.451798 kernel: ipmi_si: Unable to find any System Interface(s) Dec 13 15:04:06.509893 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 15:04:06.510143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Dec 13 15:04:06.514927 systemd[1]: Reloading finished in 237 ms. Dec 13 15:04:06.530508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 15:04:06.547137 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 15:04:06.564850 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 15:04:06.574623 systemd[1]: Finished ensure-sysext.service. Dec 13 15:04:06.607908 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 15:04:06.614404 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 15:04:06.619852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 15:04:06.620945 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 15:04:06.627173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 15:04:06.633275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 15:04:06.639285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 15:04:06.639576 lvm[2556]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:04:06.645229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 15:04:06.646035 augenrules[2578]: No rules Dec 13 15:04:06.650213 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 15:04:06.651152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 15:04:06.657177 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 15:04:06.663874 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 15:04:06.670700 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 15:04:06.677212 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 15:04:06.683208 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 15:04:06.689360 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 15:04:06.695173 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 15:04:06.695947 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 15:04:06.701434 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 15:04:06.706722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 15:04:06.711728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:04:06.711990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 15:04:06.716996 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 15:04:06.717128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 15:04:06.722123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:04:06.722255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 15:04:06.727211 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:04:06.727327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 15:04:06.733131 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 15:04:06.738164 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 15:04:06.743099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 15:04:06.757260 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 15:04:06.762332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 15:04:06.781968 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 15:04:06.786276 lvm[2614]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:04:06.786562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:04:06.786628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 15:04:06.787816 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 15:04:06.794298 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 15:04:06.799272 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:04:06.802658 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 15:04:06.824249 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 15:04:06.829284 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 15:04:06.883359 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 15:04:06.888120 systemd-resolved[2591]: Positive Trust Anchors: Dec 13 15:04:06.888132 systemd-resolved[2591]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:04:06.888163 systemd-resolved[2591]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 15:04:06.888435 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 15:04:06.892007 systemd-resolved[2591]: Using system hostname 'ci-4186.0.0-a-a49a1da819'. Dec 13 15:04:06.893642 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 15:04:06.893925 systemd-networkd[2590]: lo: Link UP Dec 13 15:04:06.893932 systemd-networkd[2590]: lo: Gained carrier Dec 13 15:04:06.897556 systemd-networkd[2590]: bond0: netdev ready Dec 13 15:04:06.899036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 15:04:06.903387 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 15:04:06.906736 systemd-networkd[2590]: Enumeration completed Dec 13 15:04:06.907780 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 15:04:06.912103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 15:04:06.916896 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 15:04:06.920806 systemd-networkd[2590]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:9e:28.network. Dec 13 15:04:06.921414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 15:04:06.925971 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 15:04:06.930374 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 15:04:06.930395 systemd[1]: Reached target paths.target - Path Units. Dec 13 15:04:06.934758 systemd[1]: Reached target timers.target - Timer Units. Dec 13 15:04:06.939884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 15:04:06.945760 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 15:04:06.957901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 15:04:06.962802 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 15:04:06.967371 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 15:04:06.971931 systemd[1]: Reached target network.target - Network. Dec 13 15:04:06.976349 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 15:04:06.980606 systemd[1]: Reached target basic.target - Basic System. Dec 13 15:04:06.984811 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 15:04:06.984833 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 15:04:06.995863 systemd[1]: Starting containerd.service - containerd container runtime... 
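[Editor's note] The .network file referenced above is named after the interface's MAC address (10-0c:42:a1:5a:9e:28.network; its sibling for the second port appears later in the log). A small sketch, assuming the standard /sys/class/net layout, that prints each interface's address so it can be matched against those file names:

    # Sketch: map interface names to MAC addresses via /sys/class/net, which
    # is how the per-NIC .network files above (named after the MAC) line up
    # with enP1p1s0f0np0 / enP1p1s0f1np1.
    import os

    SYS_NET = "/sys/class/net"

    def interface_macs() -> dict[str, str]:
        macs = {}
        for iface in sorted(os.listdir(SYS_NET)):
            addr_file = os.path.join(SYS_NET, iface, "address")
            try:
                with open(addr_file) as f:
                    macs[iface] = f.read().strip()
            except OSError:
                continue  # some virtual devices expose no address file
        return macs

    if __name__ == "__main__":
        for iface, mac in interface_macs().items():
            print(f"{iface}\t{mac}")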
Dec 13 15:04:07.001414 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 15:04:07.006957 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 15:04:07.012558 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 15:04:07.018185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 15:04:07.022679 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 15:04:07.022962 jq[2645]: false Dec 13 15:04:07.023179 coreos-metadata[2641]: Dec 13 15:04:07.023 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:04:07.023828 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 15:04:07.025615 coreos-metadata[2641]: Dec 13 15:04:07.025 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 15:04:07.028730 dbus-daemon[2642]: [system] SELinux support is enabled Dec 13 15:04:07.029456 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 15:04:07.035094 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 15:04:07.038085 extend-filesystems[2646]: Found loop4 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found loop5 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found loop6 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found loop7 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme1n1 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p1 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p2 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p3 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found usr Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p4 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p6 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p7 Dec 13 15:04:07.044447 extend-filesystems[2646]: Found nvme0n1p9 Dec 13 15:04:07.044447 extend-filesystems[2646]: Checking size of /dev/nvme0n1p9 Dec 13 15:04:07.181789 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Dec 13 15:04:07.181848 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2488) Dec 13 15:04:07.040872 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 15:04:07.181979 extend-filesystems[2646]: Resized partition /dev/nvme0n1p9 Dec 13 15:04:07.170234 dbus-daemon[2642]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 15:04:07.051917 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 15:04:07.191425 extend-filesystems[2668]: resize2fs 1.47.1 (20-May-2024) Dec 13 15:04:07.059079 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 15:04:07.100263 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 15:04:07.101003 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 15:04:07.101683 systemd[1]: Starting update-engine.service - Update Engine... 
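[Editor's note] The kernel reports the root filesystem being resized online from 553472 to 233815889 blocks. With the 4 KiB ext4 block size reported later by resize2fs, that is roughly 2.1 GiB growing to about 892 GiB. A quick check of the arithmetic:

    # Quick check of the resize figures in the log (4 KiB ext4 blocks, as the
    # later "(4k) blocks" note from resize2fs indicates).
    BLOCK_SIZE = 4096
    OLD_BLOCKS = 553_472
    NEW_BLOCKS = 233_815_889

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.1f} GiB")   # ~2.1 GiB
    print(f"after:  {gib(NEW_BLOCKS):.1f} GiB")   # ~891.9 GiB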
Dec 13 15:04:07.205523 update_engine[2677]: I20241213 15:04:07.147480 2677 main.cc:92] Flatcar Update Engine starting Dec 13 15:04:07.205523 update_engine[2677]: I20241213 15:04:07.150051 2677 update_check_scheduler.cc:74] Next update check in 9m43s Dec 13 15:04:07.108639 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 15:04:07.206327 jq[2678]: true Dec 13 15:04:07.117166 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 15:04:07.130175 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 15:04:07.206661 tar[2680]: linux-arm64/helm Dec 13 15:04:07.130382 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 15:04:07.217003 jq[2681]: true Dec 13 15:04:07.130668 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 15:04:07.130842 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 15:04:07.132280 systemd-logind[2667]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 15:04:07.217530 bash[2706]: Updated "/home/core/.ssh/authorized_keys" Dec 13 15:04:07.135548 systemd-logind[2667]: New seat seat0. Dec 13 15:04:07.141005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 15:04:07.141161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 15:04:07.154693 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 15:04:07.159904 (ntainerd)[2682]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 15:04:07.179909 systemd[1]: Started update-engine.service - Update Engine. Dec 13 15:04:07.187981 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 15:04:07.188301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 15:04:07.195935 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 15:04:07.196044 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 15:04:07.216992 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 15:04:07.225828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 15:04:07.235877 systemd[1]: Starting sshkeys.service... Dec 13 15:04:07.247632 locksmithd[2707]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 15:04:07.249169 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 15:04:07.255348 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 13 15:04:07.275098 coreos-metadata[2723]: Dec 13 15:04:07.275 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 15:04:07.276141 coreos-metadata[2723]: Dec 13 15:04:07.276 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 15:04:07.302988 containerd[2682]: time="2024-12-13T15:04:07.302900000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 15:04:07.324661 containerd[2682]: time="2024-12-13T15:04:07.324625840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326019 containerd[2682]: time="2024-12-13T15:04:07.325870680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326041 containerd[2682]: time="2024-12-13T15:04:07.326021760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 15:04:07.326338 containerd[2682]: time="2024-12-13T15:04:07.326243520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 15:04:07.326670 containerd[2682]: time="2024-12-13T15:04:07.326646280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 15:04:07.326693 containerd[2682]: time="2024-12-13T15:04:07.326678320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326751 containerd[2682]: time="2024-12-13T15:04:07.326737840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326770 containerd[2682]: time="2024-12-13T15:04:07.326752000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326936 containerd[2682]: time="2024-12-13T15:04:07.326917200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326961 containerd[2682]: time="2024-12-13T15:04:07.326934440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326961 containerd[2682]: time="2024-12-13T15:04:07.326947040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:04:07.326961 containerd[2682]: time="2024-12-13T15:04:07.326956120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.327044 containerd[2682]: time="2024-12-13T15:04:07.327033120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 15:04:07.327247 containerd[2682]: time="2024-12-13T15:04:07.327234440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 15:04:07.327350 containerd[2682]: time="2024-12-13T15:04:07.327337840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 15:04:07.327373 containerd[2682]: time="2024-12-13T15:04:07.327351520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 15:04:07.327440 containerd[2682]: time="2024-12-13T15:04:07.327429600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 15:04:07.327481 containerd[2682]: time="2024-12-13T15:04:07.327472000Z" level=info msg="metadata content store policy set" policy=shared Dec 13 15:04:07.338677 containerd[2682]: time="2024-12-13T15:04:07.338649240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 15:04:07.338717 containerd[2682]: time="2024-12-13T15:04:07.338704880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 15:04:07.338743 containerd[2682]: time="2024-12-13T15:04:07.338723560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 15:04:07.338743 containerd[2682]: time="2024-12-13T15:04:07.338738280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 15:04:07.338779 containerd[2682]: time="2024-12-13T15:04:07.338754400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 15:04:07.338914 containerd[2682]: time="2024-12-13T15:04:07.338900760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 15:04:07.339193 containerd[2682]: time="2024-12-13T15:04:07.339180760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 15:04:07.339295 containerd[2682]: time="2024-12-13T15:04:07.339283400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 15:04:07.339314 containerd[2682]: time="2024-12-13T15:04:07.339300560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 15:04:07.339331 containerd[2682]: time="2024-12-13T15:04:07.339315640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 15:04:07.339353 containerd[2682]: time="2024-12-13T15:04:07.339330120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339353 containerd[2682]: time="2024-12-13T15:04:07.339342440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339388 containerd[2682]: time="2024-12-13T15:04:07.339354720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339388 containerd[2682]: time="2024-12-13T15:04:07.339367400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 15:04:07.339388 containerd[2682]: time="2024-12-13T15:04:07.339380560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339439 containerd[2682]: time="2024-12-13T15:04:07.339393160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339439 containerd[2682]: time="2024-12-13T15:04:07.339405320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339439 containerd[2682]: time="2024-12-13T15:04:07.339415920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 15:04:07.339439 containerd[2682]: time="2024-12-13T15:04:07.339436840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339507 containerd[2682]: time="2024-12-13T15:04:07.339449760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339507 containerd[2682]: time="2024-12-13T15:04:07.339462320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339507 containerd[2682]: time="2024-12-13T15:04:07.339474280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339507 containerd[2682]: time="2024-12-13T15:04:07.339485840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339507 containerd[2682]: time="2024-12-13T15:04:07.339497920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339509400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339521320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339533000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339546640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339561160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339572400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339589 containerd[2682]: time="2024-12-13T15:04:07.339587720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339707 containerd[2682]: time="2024-12-13T15:04:07.339604840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 15:04:07.339707 containerd[2682]: time="2024-12-13T15:04:07.339624680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Dec 13 15:04:07.339707 containerd[2682]: time="2024-12-13T15:04:07.339638000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339707 containerd[2682]: time="2024-12-13T15:04:07.339649160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 15:04:07.339836 containerd[2682]: time="2024-12-13T15:04:07.339823040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 15:04:07.339858 containerd[2682]: time="2024-12-13T15:04:07.339840640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 15:04:07.339858 containerd[2682]: time="2024-12-13T15:04:07.339850680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 15:04:07.339891 containerd[2682]: time="2024-12-13T15:04:07.339864000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 15:04:07.339891 containerd[2682]: time="2024-12-13T15:04:07.339873600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 15:04:07.339891 containerd[2682]: time="2024-12-13T15:04:07.339887880Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 15:04:07.339948 containerd[2682]: time="2024-12-13T15:04:07.339898920Z" level=info msg="NRI interface is disabled by configuration." Dec 13 15:04:07.339948 containerd[2682]: time="2024-12-13T15:04:07.339909040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 15:04:07.340272 containerd[2682]: time="2024-12-13T15:04:07.340232760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 15:04:07.340384 containerd[2682]: time="2024-12-13T15:04:07.340277480Z" level=info msg="Connect containerd service" Dec 13 15:04:07.340384 containerd[2682]: time="2024-12-13T15:04:07.340305800Z" level=info msg="using legacy CRI server" Dec 13 15:04:07.340384 containerd[2682]: time="2024-12-13T15:04:07.340312480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 15:04:07.340553 containerd[2682]: time="2024-12-13T15:04:07.340539080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 15:04:07.341148 containerd[2682]: time="2024-12-13T15:04:07.341125400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 15:04:07.341361 
containerd[2682]: time="2024-12-13T15:04:07.341329440Z" level=info msg="Start subscribing containerd event" Dec 13 15:04:07.341389 containerd[2682]: time="2024-12-13T15:04:07.341377320Z" level=info msg="Start recovering state" Dec 13 15:04:07.341455 containerd[2682]: time="2024-12-13T15:04:07.341444960Z" level=info msg="Start event monitor" Dec 13 15:04:07.341475 containerd[2682]: time="2024-12-13T15:04:07.341460320Z" level=info msg="Start snapshots syncer" Dec 13 15:04:07.341475 containerd[2682]: time="2024-12-13T15:04:07.341470080Z" level=info msg="Start cni network conf syncer for default" Dec 13 15:04:07.341508 containerd[2682]: time="2024-12-13T15:04:07.341479120Z" level=info msg="Start streaming server" Dec 13 15:04:07.341647 containerd[2682]: time="2024-12-13T15:04:07.341633440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 15:04:07.341684 containerd[2682]: time="2024-12-13T15:04:07.341674720Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 15:04:07.341732 containerd[2682]: time="2024-12-13T15:04:07.341721240Z" level=info msg="containerd successfully booted in 0.039646s" Dec 13 15:04:07.341770 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 15:04:07.490121 tar[2680]: linux-arm64/LICENSE Dec 13 15:04:07.490200 tar[2680]: linux-arm64/README.md Dec 13 15:04:07.506842 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 15:04:07.592774 sshd_keygen[2673]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 15:04:07.611623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 15:04:07.625038 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 15:04:07.634192 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 15:04:07.634374 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 15:04:07.640924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 15:04:07.653592 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 15:04:07.659843 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 15:04:07.665984 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 15:04:07.670835 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 15:04:08.025782 coreos-metadata[2641]: Dec 13 15:04:08.025 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 15:04:08.026321 coreos-metadata[2641]: Dec 13 15:04:08.026 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 15:04:08.252807 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Dec 13 15:04:08.269798 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Dec 13 15:04:08.270744 systemd-networkd[2590]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:9e:29.network. 
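[Editor's note] containerd reports serving on /run/containerd/containerd.sock. A minimal liveness sketch that only confirms the Unix socket accepts a connection; it does not speak the gRPC API behind it:

    # Minimal sketch: confirm the containerd socket from the log accepts
    # connections. Tests the listener only, not the gRPC API.
    import socket
    import sys

    SOCK_PATH = "/run/containerd/containerd.sock"

    def socket_is_up(path: str = SOCK_PATH, timeout: float = 1.0) -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    if __name__ == "__main__":
        ok = socket_is_up()
        print("containerd socket up" if ok else "containerd socket not reachable")
        sys.exit(0 if ok else 1)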
Dec 13 15:04:08.276258 coreos-metadata[2723]: Dec 13 15:04:08.276 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 15:04:08.276609 coreos-metadata[2723]: Dec 13 15:04:08.276 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Dec 13 15:04:08.392800 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Dec 13 15:04:08.411957 extend-filesystems[2668]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 15:04:08.411957 extend-filesystems[2668]: old_desc_blocks = 1, new_desc_blocks = 112 Dec 13 15:04:08.411957 extend-filesystems[2668]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Dec 13 15:04:08.439684 extend-filesystems[2646]: Resized filesystem in /dev/nvme0n1p9 Dec 13 15:04:08.414353 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 15:04:08.414644 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 15:04:08.881809 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Dec 13 15:04:08.898210 systemd-networkd[2590]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 15:04:08.898796 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Dec 13 15:04:08.899676 systemd-networkd[2590]: enP1p1s0f0np0: Link UP Dec 13 15:04:08.900018 systemd-networkd[2590]: enP1p1s0f0np0: Gained carrier Dec 13 15:04:08.917799 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 15:04:08.931148 systemd-networkd[2590]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:9e:28.network. Dec 13 15:04:08.931447 systemd-networkd[2590]: enP1p1s0f1np1: Link UP Dec 13 15:04:08.931748 systemd-networkd[2590]: enP1p1s0f1np1: Gained carrier Dec 13 15:04:08.947086 systemd-networkd[2590]: bond0: Link UP Dec 13 15:04:08.947381 systemd-networkd[2590]: bond0: Gained carrier Dec 13 15:04:08.947553 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:08.948106 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:08.948412 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:08.948561 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:09.021390 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Dec 13 15:04:09.021422 kernel: bond0: active interface up! Dec 13 15:04:09.145795 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 15:04:10.026420 coreos-metadata[2641]: Dec 13 15:04:10.026 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 15:04:10.276768 coreos-metadata[2723]: Dec 13 15:04:10.276 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Dec 13 15:04:10.306170 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:10.881870 systemd-networkd[2590]: bond0: Gained IPv6LL Dec 13 15:04:10.882157 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:10.884756 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 15:04:10.892436 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 15:04:10.910015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
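[Editor's note] The bond comes up above with both mlx5 ports enslaved at 25000 Mbps full duplex, after a warning about the missing 802.3ad response from the link partner. A small sketch, assuming the kernel's /proc/net/bonding interface is available, that prints the bond mode and per-slave link state so that warning can be followed up:

    # Sketch: summarize bond0 state via /proc/net/bonding, e.g. to follow up
    # on the "No 802.3ad response" warning in the log.
    BOND_PROC = "/proc/net/bonding/bond0"

    def summarize(path: str = BOND_PROC) -> None:
        current_slave = None
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("Bonding Mode:"):
                    print(line)
                elif line.startswith("Slave Interface:"):
                    current_slave = line.split(":", 1)[1].strip()
                elif line.startswith(("MII Status:", "Speed:", "Duplex:")) and current_slave:
                    print(f"{current_slave}: {line}")

    if __name__ == "__main__":
        summarize()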
Dec 13 15:04:10.916912 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 15:04:10.938155 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 15:04:11.505331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:11.511380 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 15:04:11.972682 kubelet[2786]: E1213 15:04:11.972606 2786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:04:11.975140 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:04:11.975284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:04:12.205960 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 15:04:12.224107 systemd[1]: Started sshd@0-147.28.228.225:22-147.75.109.163:51912.service - OpenSSH per-connection server daemon (147.75.109.163:51912). Dec 13 15:04:12.291896 coreos-metadata[2641]: Dec 13 15:04:12.291 INFO Fetch successful Dec 13 15:04:12.351247 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 15:04:12.358224 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Dec 13 15:04:12.526248 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Dec 13 15:04:12.526434 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Dec 13 15:04:12.662356 sshd[2806]: Accepted publickey for core from 147.75.109.163 port 51912 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:12.664173 sshd-session[2806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:12.672203 systemd-logind[2667]: New session 1 of user core. Dec 13 15:04:12.673643 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 15:04:12.688038 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 15:04:12.695310 agetty[2763]: failed to open credentials directory Dec 13 15:04:12.695349 agetty[2764]: failed to open credentials directory Dec 13 15:04:12.696932 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 15:04:12.699662 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Dec 13 15:04:12.701315 login[2763]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:12.702250 login[2764]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:12.702318 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 15:04:12.705364 systemd-logind[2667]: New session 3 of user core. Dec 13 15:04:12.707527 systemd-logind[2667]: New session 2 of user core. Dec 13 15:04:12.708388 (systemd)[2824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 15:04:12.810310 systemd[2824]: Queued start job for default target default.target. Dec 13 15:04:12.821839 systemd[2824]: Created slice app.slice - User Application Slice. Dec 13 15:04:12.821866 systemd[2824]: Reached target paths.target - Paths. Dec 13 15:04:12.821877 systemd[2824]: Reached target timers.target - Timers. 
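[Editor's note] The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a freshly provisioned node that file is normally written later during bootstrap (for example by kubeadm join), so the failure at this stage is expected rather than fatal. A small pre-flight sketch that reports whether the file is in place; the path comes from the error above, while the kubeadm remark is an assumption about how the node will be provisioned:

    # Pre-flight sketch: check for the kubelet config file whose absence
    # caused the failure in the log. The kubeadm hint is an assumption about
    # node bootstrapping, not something the log states.
    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the error above

    def main() -> int:
        if os.path.isfile(KUBELET_CONFIG):
            print(f"{KUBELET_CONFIG} present; kubelet can load its config")
            return 0
        print(f"{KUBELET_CONFIG} missing; kubelet will keep exiting until the "
              "node is bootstrapped (e.g. kubeadm writes this file during join)")
        return 1

    if __name__ == "__main__":
        sys.exit(main())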
Dec 13 15:04:12.823063 systemd[2824]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 15:04:12.831799 systemd[2824]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 15:04:12.831851 systemd[2824]: Reached target sockets.target - Sockets. Dec 13 15:04:12.831864 systemd[2824]: Reached target basic.target - Basic System. Dec 13 15:04:12.831905 systemd[2824]: Reached target default.target - Main User Target. Dec 13 15:04:12.831928 systemd[2824]: Startup finished in 119ms. Dec 13 15:04:12.832214 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 15:04:12.833746 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 15:04:12.834779 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 15:04:12.835747 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 15:04:13.153247 systemd[1]: Started sshd@1-147.28.228.225:22-147.75.109.163:51926.service - OpenSSH per-connection server daemon (147.75.109.163:51926). Dec 13 15:04:13.210326 coreos-metadata[2723]: Dec 13 15:04:13.210 INFO Fetch successful Dec 13 15:04:13.256256 unknown[2723]: wrote ssh authorized keys file for user: core Dec 13 15:04:13.289431 update-ssh-keys[2860]: Updated "/home/core/.ssh/authorized_keys" Dec 13 15:04:13.290541 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 15:04:13.292029 systemd[1]: Finished sshkeys.service. Dec 13 15:04:13.292944 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 15:04:13.293109 systemd[1]: Startup finished in 3.225s (kernel) + 20.123s (initrd) + 9.813s (userspace) = 33.162s. Dec 13 15:04:13.576007 sshd[2857]: Accepted publickey for core from 147.75.109.163 port 51926 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:13.577123 sshd-session[2857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:13.580010 systemd-logind[2667]: New session 4 of user core. Dec 13 15:04:13.591897 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 15:04:13.875560 sshd[2863]: Connection closed by 147.75.109.163 port 51926 Dec 13 15:04:13.875947 sshd-session[2857]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:13.879034 systemd[1]: sshd@1-147.28.228.225:22-147.75.109.163:51926.service: Deactivated successfully. Dec 13 15:04:13.880476 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 15:04:13.880949 systemd-logind[2667]: Session 4 logged out. Waiting for processes to exit. Dec 13 15:04:13.881479 systemd-logind[2667]: Removed session 4. Dec 13 15:04:13.946866 systemd[1]: Started sshd@2-147.28.228.225:22-147.75.109.163:51930.service - OpenSSH per-connection server daemon (147.75.109.163:51930). Dec 13 15:04:14.365321 sshd[2868]: Accepted publickey for core from 147.75.109.163 port 51930 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:14.366325 sshd-session[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:14.368974 systemd-logind[2667]: New session 5 of user core. Dec 13 15:04:14.381891 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 15:04:14.657927 sshd[2870]: Connection closed by 147.75.109.163 port 51930 Dec 13 15:04:14.658375 sshd-session[2868]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:14.661834 systemd[1]: sshd@2-147.28.228.225:22-147.75.109.163:51930.service: Deactivated successfully. 
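The "Startup finished in 3.225s (kernel) + 20.123s (initrd) + 9.813s (userspace)" line is the same breakdown that systemd-analyze prints. If boot time ever needs investigating on a host like this, the standard follow-up commands are the ones below; they are shown for illustration and are not part of this log.

systemd-analyze                                    # kernel/initrd/userspace totals, as logged above
systemd-analyze blame                              # per-unit activation time, slowest first
systemd-analyze critical-chain multi-user.target   # the dependency chain that gated multi-user.target
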
Dec 13 15:04:14.664415 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 15:04:14.664895 systemd-logind[2667]: Session 5 logged out. Waiting for processes to exit. Dec 13 15:04:14.665423 systemd-logind[2667]: Removed session 5. Dec 13 15:04:14.733828 systemd[1]: Started sshd@3-147.28.228.225:22-147.75.109.163:51944.service - OpenSSH per-connection server daemon (147.75.109.163:51944). Dec 13 15:04:15.161291 sshd[2875]: Accepted publickey for core from 147.75.109.163 port 51944 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:15.162358 sshd-session[2875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:15.165123 systemd-logind[2667]: New session 6 of user core. Dec 13 15:04:15.178899 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 15:04:15.461301 sshd[2877]: Connection closed by 147.75.109.163 port 51944 Dec 13 15:04:15.461683 sshd-session[2875]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:15.464210 systemd[1]: sshd@3-147.28.228.225:22-147.75.109.163:51944.service: Deactivated successfully. Dec 13 15:04:15.465598 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 15:04:15.466048 systemd-logind[2667]: Session 6 logged out. Waiting for processes to exit. Dec 13 15:04:15.466538 systemd-logind[2667]: Removed session 6. Dec 13 15:04:15.534773 systemd[1]: Started sshd@4-147.28.228.225:22-147.75.109.163:51948.service - OpenSSH per-connection server daemon (147.75.109.163:51948). Dec 13 15:04:15.966169 sshd[2884]: Accepted publickey for core from 147.75.109.163 port 51948 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:15.967272 sshd-session[2884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:15.970193 systemd-logind[2667]: New session 7 of user core. Dec 13 15:04:15.984951 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 15:04:16.211606 sudo[2887]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 15:04:16.211878 sudo[2887]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 15:04:16.222781 sudo[2887]: pam_unix(sudo:session): session closed for user root Dec 13 15:04:16.287365 sshd[2886]: Connection closed by 147.75.109.163 port 51948 Dec 13 15:04:16.287726 sshd-session[2884]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:16.290546 systemd[1]: sshd@4-147.28.228.225:22-147.75.109.163:51948.service: Deactivated successfully. Dec 13 15:04:16.292006 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 15:04:16.292495 systemd-logind[2667]: Session 7 logged out. Waiting for processes to exit. Dec 13 15:04:16.293083 systemd-logind[2667]: Removed session 7. Dec 13 15:04:16.364151 systemd[1]: Started sshd@5-147.28.228.225:22-147.75.109.163:51960.service - OpenSSH per-connection server daemon (147.75.109.163:51960). Dec 13 15:04:16.804240 sshd[2894]: Accepted publickey for core from 147.75.109.163 port 51960 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:16.805296 sshd-session[2894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:16.807973 systemd-logind[2667]: New session 8 of user core. Dec 13 15:04:16.820939 systemd[1]: Started session-8.scope - Session 8 of User core. 
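Session 7 above runs a single privileged command, /usr/sbin/setenforce 1, which flips SELinux into enforcing mode at runtime. A quick way to confirm the resulting state is sketched below; the commands are standard SELinux utilities assumed to be installed alongside setenforce, and they are not taken from this log.

sudo setenforce 1    # what the logged sudo session executed
getenforce           # prints "Enforcing" once the switch has taken effect
sestatus             # fuller report (mode, loaded policy), if the policy utilities are present
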
Dec 13 15:04:17.047501 sudo[2898]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 15:04:17.047760 sudo[2898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 15:04:17.050387 sudo[2898]: pam_unix(sudo:session): session closed for user root Dec 13 15:04:17.054571 sudo[2897]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 15:04:17.054830 sudo[2897]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 15:04:17.069139 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 15:04:17.090766 augenrules[2920]: No rules Dec 13 15:04:17.091872 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 15:04:17.092883 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 15:04:17.093714 sudo[2897]: pam_unix(sudo:session): session closed for user root Dec 13 15:04:17.159522 sshd[2896]: Connection closed by 147.75.109.163 port 51960 Dec 13 15:04:17.159920 sshd-session[2894]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:17.162535 systemd[1]: sshd@5-147.28.228.225:22-147.75.109.163:51960.service: Deactivated successfully. Dec 13 15:04:17.163907 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 15:04:17.164376 systemd-logind[2667]: Session 8 logged out. Waiting for processes to exit. Dec 13 15:04:17.164928 systemd-logind[2667]: Removed session 8. Dec 13 15:04:17.229853 systemd[1]: Started sshd@6-147.28.228.225:22-147.75.109.163:38688.service - OpenSSH per-connection server daemon (147.75.109.163:38688). Dec 13 15:04:17.656983 sshd[2929]: Accepted publickey for core from 147.75.109.163 port 38688 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:04:17.657957 sshd-session[2929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:04:17.660596 systemd-logind[2667]: New session 9 of user core. Dec 13 15:04:17.668892 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 15:04:17.893332 sudo[2932]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 15:04:17.893604 sudo[2932]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 15:04:18.180988 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 15:04:18.181134 (dockerd)[2962]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 15:04:18.433167 dockerd[2962]: time="2024-12-13T15:04:18.433078000Z" level=info msg="Starting up" Dec 13 15:04:18.505835 dockerd[2962]: time="2024-12-13T15:04:18.505805120Z" level=info msg="Loading containers: start." Dec 13 15:04:18.639813 kernel: Initializing XFRM netlink socket Dec 13 15:04:18.657879 systemd-timesyncd[2592]: Network configuration changed, trying to establish connection. Dec 13 15:04:18.706316 systemd-networkd[2590]: docker0: Link UP Dec 13 15:04:18.743860 dockerd[2962]: time="2024-12-13T15:04:18.743830120Z" level=info msg="Loading containers: done." 
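Session 8 above removes the two default audit rule files and restarts audit-rules.service; augenrules then reports "No rules" because /etc/audit/rules.d/ is empty, so an empty ruleset gets loaded. The same flow done by hand looks roughly like the commands below (standard auditd tooling, shown only as an illustration).

ls /etc/audit/rules.d/    # empty after the rm in the logged sudo session
augenrules --load         # concatenates rules.d/*.rules and loads the result into the kernel
auditctl -l               # prints "No rules" when the loaded ruleset is empty
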
Dec 13 15:04:18.752506 dockerd[2962]: time="2024-12-13T15:04:18.752476120Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 15:04:18.752583 dockerd[2962]: time="2024-12-13T15:04:18.752548160Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 15:04:18.753130 dockerd[2962]: time="2024-12-13T15:04:18.752710520Z" level=info msg="Daemon has completed initialization" Dec 13 15:04:18.772300 dockerd[2962]: time="2024-12-13T15:04:18.772186760Z" level=info msg="API listen on /run/docker.sock" Dec 13 15:04:18.772306 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 15:04:18.356007 systemd-resolved[2591]: Clock change detected. Flushing caches. Dec 13 15:04:18.364010 systemd-journald[2188]: Time jumped backwards, rotating. Dec 13 15:04:18.356166 systemd-timesyncd[2592]: Contacted time server [2600:3c00::f03c:93ff:fe5b:29d1]:123 (2.flatcar.pool.ntp.org). Dec 13 15:04:18.356213 systemd-timesyncd[2592]: Initial clock synchronization to Fri 2024-12-13 15:04:18.355948 UTC. Dec 13 15:04:18.625229 containerd[2682]: time="2024-12-13T15:04:18.625160432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 15:04:18.706306 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2062273424-merged.mount: Deactivated successfully. Dec 13 15:04:19.064448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263629621.mount: Deactivated successfully. Dec 13 15:04:20.985761 containerd[2682]: time="2024-12-13T15:04:20.985719912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:20.986050 containerd[2682]: time="2024-12-13T15:04:20.985760872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 15:04:20.986731 containerd[2682]: time="2024-12-13T15:04:20.986708552Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:20.989712 containerd[2682]: time="2024-12-13T15:04:20.989685552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:20.990657 containerd[2682]: time="2024-12-13T15:04:20.990626832Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.3654272s" Dec 13 15:04:20.990724 containerd[2682]: time="2024-12-13T15:04:20.990665232Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 15:04:21.010167 containerd[2682]: time="2024-12-13T15:04:21.010141392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 15:04:21.417397 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 1. Dec 13 15:04:21.431922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:21.523556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:21.527067 (kubelet)[3308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 15:04:21.563374 kubelet[3308]: E1213 15:04:21.563331 3308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:04:21.566416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:04:21.566552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:04:23.293277 containerd[2682]: time="2024-12-13T15:04:23.293238432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:23.293492 containerd[2682]: time="2024-12-13T15:04:23.293276072Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 15:04:23.294276 containerd[2682]: time="2024-12-13T15:04:23.294258272Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:23.296945 containerd[2682]: time="2024-12-13T15:04:23.296926112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:23.298115 containerd[2682]: time="2024-12-13T15:04:23.298084912Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.28790392s" Dec 13 15:04:23.298133 containerd[2682]: time="2024-12-13T15:04:23.298122472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 15:04:23.317363 containerd[2682]: time="2024-12-13T15:04:23.317332992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 15:04:24.819450 containerd[2682]: time="2024-12-13T15:04:24.819413672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:24.819708 containerd[2682]: time="2024-12-13T15:04:24.819475512Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 15:04:24.820481 containerd[2682]: time="2024-12-13T15:04:24.820461192Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:24.823135 containerd[2682]: 
time="2024-12-13T15:04:24.823109272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:24.824195 containerd[2682]: time="2024-12-13T15:04:24.824166352Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.50679544s" Dec 13 15:04:24.824218 containerd[2682]: time="2024-12-13T15:04:24.824202832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 15:04:24.842211 containerd[2682]: time="2024-12-13T15:04:24.842174152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 15:04:25.589276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823019186.mount: Deactivated successfully. Dec 13 15:04:26.237228 containerd[2682]: time="2024-12-13T15:04:26.237186552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:26.237522 containerd[2682]: time="2024-12-13T15:04:26.237213352Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 15:04:26.237927 containerd[2682]: time="2024-12-13T15:04:26.237907152Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:26.239546 containerd[2682]: time="2024-12-13T15:04:26.239525032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:26.240217 containerd[2682]: time="2024-12-13T15:04:26.240191152Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.39797456s" Dec 13 15:04:26.240243 containerd[2682]: time="2024-12-13T15:04:26.240223272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 15:04:26.259414 containerd[2682]: time="2024-12-13T15:04:26.259391672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 15:04:26.572775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397610403.mount: Deactivated successfully. 
Dec 13 15:04:27.229745 containerd[2682]: time="2024-12-13T15:04:27.229709112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.229816 containerd[2682]: time="2024-12-13T15:04:27.229784072Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 15:04:27.230844 containerd[2682]: time="2024-12-13T15:04:27.230822472Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.233593 containerd[2682]: time="2024-12-13T15:04:27.233563792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.234650 containerd[2682]: time="2024-12-13T15:04:27.234620832Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 975.19952ms" Dec 13 15:04:27.234681 containerd[2682]: time="2024-12-13T15:04:27.234654992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 15:04:27.253116 containerd[2682]: time="2024-12-13T15:04:27.253092072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 15:04:27.487595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8663017.mount: Deactivated successfully. 
Dec 13 15:04:27.488182 containerd[2682]: time="2024-12-13T15:04:27.488152392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.488274 containerd[2682]: time="2024-12-13T15:04:27.488246712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 15:04:27.488894 containerd[2682]: time="2024-12-13T15:04:27.488874512Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.490822 containerd[2682]: time="2024-12-13T15:04:27.490804192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:27.491654 containerd[2682]: time="2024-12-13T15:04:27.491631552Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 238.50716ms" Dec 13 15:04:27.491680 containerd[2682]: time="2024-12-13T15:04:27.491659872Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 15:04:27.510244 containerd[2682]: time="2024-12-13T15:04:27.510221752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 15:04:27.813512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071466465.mount: Deactivated successfully. Dec 13 15:04:30.678884 containerd[2682]: time="2024-12-13T15:04:30.678840392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:30.679176 containerd[2682]: time="2024-12-13T15:04:30.678880992Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 15:04:30.679978 containerd[2682]: time="2024-12-13T15:04:30.679954712Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:30.682774 containerd[2682]: time="2024-12-13T15:04:30.682751232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:30.683814 containerd[2682]: time="2024-12-13T15:04:30.683791312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.17354208s" Dec 13 15:04:30.683844 containerd[2682]: time="2024-12-13T15:04:30.683820752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 15:04:31.667393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 15:04:31.676809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:31.766518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:31.770032 (kubelet)[3715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 15:04:31.805670 kubelet[3715]: E1213 15:04:31.805633 3715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 15:04:31.807959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 15:04:31.808092 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 15:04:35.123091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:35.131940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:35.144395 systemd[1]: Reloading requested from client PID 3752 ('systemctl') (unit session-9.scope)... Dec 13 15:04:35.144406 systemd[1]: Reloading... Dec 13 15:04:35.202687 zram_generator::config[3794]: No configuration found. Dec 13 15:04:35.292198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:04:35.362712 systemd[1]: Reloading finished in 218 ms. Dec 13 15:04:35.417116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:35.419490 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 15:04:35.419684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:35.421192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:35.515727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:35.519371 (kubelet)[3857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 15:04:35.553297 kubelet[3857]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 15:04:35.553297 kubelet[3857]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 15:04:35.553297 kubelet[3857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
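The three deprecation warnings above say those flags should move into the kubelet config file. --pod-infra-container-image has no config-file equivalent (the message itself notes the sandbox image now comes from CRI), but the other two map onto KubeletConfiguration fields. The sketch below shows the equivalent entries; the socket path is an assumption, since the actual flag values are not printed in this log, while the volume plugin directory matches the Flexvolume path the kubelet mentions a little further down.

# Illustrative additions to /var/lib/kubelet/config.yaml (field names are the
# documented KubeletConfiguration equivalents of the deprecated flags).
cat <<'EOF' >> /var/lib/kubelet/config.yaml
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock     # assumed socket path
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
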
Dec 13 15:04:35.553471 kubelet[3857]: I1213 15:04:35.553339 3857 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 15:04:36.496107 kubelet[3857]: I1213 15:04:36.496078 3857 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 15:04:36.496107 kubelet[3857]: I1213 15:04:36.496099 3857 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 15:04:36.496270 kubelet[3857]: I1213 15:04:36.496263 3857 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 15:04:36.510733 kubelet[3857]: I1213 15:04:36.510705 3857 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 15:04:36.511411 kubelet[3857]: E1213 15:04:36.511393 3857 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.228.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.534177 kubelet[3857]: I1213 15:04:36.534152 3857 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 15:04:36.540583 kubelet[3857]: I1213 15:04:36.540560 3857 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 15:04:36.540940 kubelet[3857]: I1213 15:04:36.540925 3857 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 15:04:36.541010 kubelet[3857]: I1213 15:04:36.540949 3857 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 15:04:36.541010 kubelet[3857]: I1213 15:04:36.540958 3857 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 15:04:36.541088 kubelet[3857]: I1213 15:04:36.541078 3857 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:04:36.543444 kubelet[3857]: I1213 15:04:36.543428 3857 kubelet.go:396] "Attempting to sync node with API server" Dec 13 15:04:36.543485 
kubelet[3857]: I1213 15:04:36.543450 3857 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 15:04:36.543485 kubelet[3857]: I1213 15:04:36.543469 3857 kubelet.go:312] "Adding apiserver pod source" Dec 13 15:04:36.543485 kubelet[3857]: I1213 15:04:36.543481 3857 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 15:04:36.543951 kubelet[3857]: W1213 15:04:36.543908 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.228.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a49a1da819&limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.543975 kubelet[3857]: E1213 15:04:36.543967 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.228.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-a49a1da819&limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.544010 kubelet[3857]: W1213 15:04:36.543974 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.228.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.544032 kubelet[3857]: E1213 15:04:36.544019 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.228.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.544971 kubelet[3857]: I1213 15:04:36.544955 3857 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 15:04:36.545426 kubelet[3857]: I1213 15:04:36.545415 3857 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 15:04:36.545923 kubelet[3857]: W1213 15:04:36.545910 3857 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
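The reflector errors against https://147.28.228.225:6443 ("connection refused") are normal at this point: the API server is one of the static pods this same kubelet is about to launch, so nothing is listening on port 6443 yet. The commands below are a hedged way to watch it come up; they are standard tooling and not part of this log.

curl -ksS https://147.28.228.225:6443/healthz || echo "apiserver not listening yet"
crictl pods --name kube-apiserver    # the static pod sandbox, once RunPodSandbox succeeds below
crictl ps --name kube-apiserver      # the running container, once StartContainer succeeds below
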
Dec 13 15:04:36.546773 kubelet[3857]: I1213 15:04:36.546761 3857 server.go:1256] "Started kubelet" Dec 13 15:04:36.546835 kubelet[3857]: I1213 15:04:36.546820 3857 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 15:04:36.546855 kubelet[3857]: I1213 15:04:36.546838 3857 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 15:04:36.549955 kubelet[3857]: I1213 15:04:36.549938 3857 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 15:04:36.553340 kubelet[3857]: I1213 15:04:36.553314 3857 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 15:04:36.553559 kubelet[3857]: I1213 15:04:36.553431 3857 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 15:04:36.553559 kubelet[3857]: I1213 15:04:36.553498 3857 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 15:04:36.553559 kubelet[3857]: I1213 15:04:36.553543 3857 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 15:04:36.553710 kubelet[3857]: E1213 15:04:36.553694 3857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.228.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a49a1da819?timeout=10s\": dial tcp 147.28.228.225:6443: connect: connection refused" interval="200ms" Dec 13 15:04:36.553744 kubelet[3857]: W1213 15:04:36.553705 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.228.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.553768 kubelet[3857]: E1213 15:04:36.553749 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.228.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.553841 kubelet[3857]: E1213 15:04:36.553824 3857 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 15:04:36.553884 kubelet[3857]: I1213 15:04:36.553867 3857 server.go:461] "Adding debug handlers to kubelet server" Dec 13 15:04:36.554054 kubelet[3857]: E1213 15:04:36.554039 3857 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.228.225:6443/api/v1/namespaces/default/events\": dial tcp 147.28.228.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-a49a1da819.1810c4d535c3b018 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-a49a1da819,UID:ci-4186.0.0-a-a49a1da819,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-a49a1da819,},FirstTimestamp:2024-12-13 15:04:36.546736152 +0000 UTC m=+1.024328281,LastTimestamp:2024-12-13 15:04:36.546736152 +0000 UTC m=+1.024328281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-a49a1da819,}" Dec 13 15:04:36.554102 kubelet[3857]: I1213 15:04:36.554043 3857 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 15:04:36.554734 kubelet[3857]: I1213 15:04:36.554719 3857 factory.go:221] Registration of the containerd container factory successfully Dec 13 15:04:36.554761 kubelet[3857]: I1213 15:04:36.554736 3857 factory.go:221] Registration of the systemd container factory successfully Dec 13 15:04:36.566516 kubelet[3857]: I1213 15:04:36.566496 3857 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 15:04:36.567532 kubelet[3857]: I1213 15:04:36.567523 3857 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 15:04:36.567551 kubelet[3857]: I1213 15:04:36.567539 3857 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 15:04:36.567572 kubelet[3857]: I1213 15:04:36.567558 3857 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 15:04:36.567606 kubelet[3857]: E1213 15:04:36.567599 3857 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 15:04:36.567986 kubelet[3857]: W1213 15:04:36.567953 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.28.228.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.568009 kubelet[3857]: E1213 15:04:36.568000 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.228.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:36.569217 kubelet[3857]: I1213 15:04:36.569204 3857 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 15:04:36.569240 kubelet[3857]: I1213 15:04:36.569219 3857 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 15:04:36.569240 kubelet[3857]: I1213 15:04:36.569233 3857 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:04:36.569744 kubelet[3857]: I1213 15:04:36.569732 3857 policy_none.go:49] "None policy: Start" Dec 13 15:04:36.570090 kubelet[3857]: I1213 15:04:36.570079 3857 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 15:04:36.570119 kubelet[3857]: I1213 15:04:36.570113 3857 state_mem.go:35] "Initializing new in-memory state store" Dec 13 15:04:36.574766 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 15:04:36.593457 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 15:04:36.607951 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Dec 13 15:04:36.608772 kubelet[3857]: I1213 15:04:36.608752 3857 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 15:04:36.609011 kubelet[3857]: I1213 15:04:36.608998 3857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 15:04:36.609822 kubelet[3857]: E1213 15:04:36.609805 3857 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-a49a1da819\" not found" Dec 13 15:04:36.655305 kubelet[3857]: I1213 15:04:36.655285 3857 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.655646 kubelet[3857]: E1213 15:04:36.655631 3857 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.228.225:6443/api/v1/nodes\": dial tcp 147.28.228.225:6443: connect: connection refused" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.667794 kubelet[3857]: I1213 15:04:36.667767 3857 topology_manager.go:215] "Topology Admit Handler" podUID="836de6ce488707c895e3d898704cb80e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.669155 kubelet[3857]: I1213 15:04:36.669135 3857 topology_manager.go:215] "Topology Admit Handler" podUID="fd680b1cb849f5f4fea9238ef2117cba" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.670709 kubelet[3857]: I1213 15:04:36.670682 3857 topology_manager.go:215] "Topology Admit Handler" podUID="b7973d4f4610e213dc6e12dd15421c7c" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.674375 systemd[1]: Created slice kubepods-burstable-pod836de6ce488707c895e3d898704cb80e.slice - libcontainer container kubepods-burstable-pod836de6ce488707c895e3d898704cb80e.slice. Dec 13 15:04:36.688539 systemd[1]: Created slice kubepods-burstable-podfd680b1cb849f5f4fea9238ef2117cba.slice - libcontainer container kubepods-burstable-podfd680b1cb849f5f4fea9238ef2117cba.slice. Dec 13 15:04:36.691514 systemd[1]: Created slice kubepods-burstable-podb7973d4f4610e213dc6e12dd15421c7c.slice - libcontainer container kubepods-burstable-podb7973d4f4610e213dc6e12dd15421c7c.slice. 
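The "Topology Admit Handler" lines above are the kubelet admitting the three control-plane static pods it found under its static pod path (/etc/kubernetes/manifests, per the earlier "Adding static pod path" line); the UIDs in the slice names are derived from hashes of those manifests. Listing the manifests directly would look like this; the exact file names are an assumption, not something shown in the log.

ls /etc/kubernetes/manifests/
# typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
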
Dec 13 15:04:36.754114 kubelet[3857]: E1213 15:04:36.754053 3857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.228.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a49a1da819?timeout=10s\": dial tcp 147.28.228.225:6443: connect: connection refused" interval="400ms" Dec 13 15:04:36.854315 kubelet[3857]: I1213 15:04:36.854287 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854359 kubelet[3857]: I1213 15:04:36.854327 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854359 kubelet[3857]: I1213 15:04:36.854348 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854484 kubelet[3857]: I1213 15:04:36.854427 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854508 kubelet[3857]: I1213 15:04:36.854476 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854528 kubelet[3857]: I1213 15:04:36.854515 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854568 kubelet[3857]: I1213 15:04:36.854551 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854595 kubelet[3857]: I1213 15:04:36.854580 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7973d4f4610e213dc6e12dd15421c7c-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-a49a1da819\" (UID: \"b7973d4f4610e213dc6e12dd15421c7c\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.854616 kubelet[3857]: I1213 15:04:36.854600 3857 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.857668 kubelet[3857]: I1213 15:04:36.857649 3857 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.857922 kubelet[3857]: E1213 15:04:36.857901 3857 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.228.225:6443/api/v1/nodes\": dial tcp 147.28.228.225:6443: connect: connection refused" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:36.988053 containerd[2682]: time="2024-12-13T15:04:36.988020192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-a49a1da819,Uid:836de6ce488707c895e3d898704cb80e,Namespace:kube-system,Attempt:0,}" Dec 13 15:04:36.990409 containerd[2682]: time="2024-12-13T15:04:36.990387672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-a49a1da819,Uid:fd680b1cb849f5f4fea9238ef2117cba,Namespace:kube-system,Attempt:0,}" Dec 13 15:04:36.993884 containerd[2682]: time="2024-12-13T15:04:36.993862392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-a49a1da819,Uid:b7973d4f4610e213dc6e12dd15421c7c,Namespace:kube-system,Attempt:0,}" Dec 13 15:04:37.154942 kubelet[3857]: E1213 15:04:37.154874 3857 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.228.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-a49a1da819?timeout=10s\": dial tcp 147.28.228.225:6443: connect: connection refused" interval="800ms" Dec 13 15:04:37.250208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709363337.mount: Deactivated successfully. 
Dec 13 15:04:37.250921 containerd[2682]: time="2024-12-13T15:04:37.250891152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 15:04:37.251556 containerd[2682]: time="2024-12-13T15:04:37.251521792Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 15:04:37.251814 containerd[2682]: time="2024-12-13T15:04:37.251785432Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 15:04:37.252153 containerd[2682]: time="2024-12-13T15:04:37.252120552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 15:04:37.252471 containerd[2682]: time="2024-12-13T15:04:37.252449872Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 15:04:37.252491 containerd[2682]: time="2024-12-13T15:04:37.252469152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 15:04:37.255656 containerd[2682]: time="2024-12-13T15:04:37.255625752Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 15:04:37.256962 containerd[2682]: time="2024-12-13T15:04:37.256941872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 268.84368ms" Dec 13 15:04:37.257561 containerd[2682]: time="2024-12-13T15:04:37.257540632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 267.10264ms" Dec 13 15:04:37.258031 containerd[2682]: time="2024-12-13T15:04:37.258009072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 15:04:37.259353 containerd[2682]: time="2024-12-13T15:04:37.259328072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 265.41508ms" Dec 13 15:04:37.260061 kubelet[3857]: I1213 15:04:37.260044 3857 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:37.260321 kubelet[3857]: E1213 15:04:37.260305 3857 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://147.28.228.225:6443/api/v1/nodes\": dial tcp 147.28.228.225:6443: connect: connection refused" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:37.383942 containerd[2682]: time="2024-12-13T15:04:37.383856192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:04:37.383942 containerd[2682]: time="2024-12-13T15:04:37.383922552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:04:37.383942 containerd[2682]: time="2024-12-13T15:04:37.383936592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.384045 containerd[2682]: time="2024-12-13T15:04:37.383965192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:04:37.384045 containerd[2682]: time="2024-12-13T15:04:37.384011712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.384045 containerd[2682]: time="2024-12-13T15:04:37.384023952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:04:37.384045 containerd[2682]: time="2024-12-13T15:04:37.384036232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.384135 containerd[2682]: time="2024-12-13T15:04:37.383733352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:04:37.384135 containerd[2682]: time="2024-12-13T15:04:37.384074312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:04:37.384135 containerd[2682]: time="2024-12-13T15:04:37.384087432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.384135 containerd[2682]: time="2024-12-13T15:04:37.384111352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.384205 containerd[2682]: time="2024-12-13T15:04:37.384158072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:37.408788 systemd[1]: Started cri-containerd-4ac1192b7b46ebf34f2b748515625cd1c89e83ef520b67538374e6aa259cd48e.scope - libcontainer container 4ac1192b7b46ebf34f2b748515625cd1c89e83ef520b67538374e6aa259cd48e. Dec 13 15:04:37.410034 systemd[1]: Started cri-containerd-6fef5e09fbd590fb76cc83c30c0717fb2b9797ee5c5b7a32c19ea0f9ff2f24ca.scope - libcontainer container 6fef5e09fbd590fb76cc83c30c0717fb2b9797ee5c5b7a32c19ea0f9ff2f24ca. Dec 13 15:04:37.411283 systemd[1]: Started cri-containerd-7d76969e77a074d6b53bd7075fe15bb059f999e8e4d9b139dda3a0fcf26c4709.scope - libcontainer container 7d76969e77a074d6b53bd7075fe15bb059f999e8e4d9b139dda3a0fcf26c4709. 
Dec 13 15:04:37.426926 kubelet[3857]: W1213 15:04:37.426877 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.228.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:37.426987 kubelet[3857]: E1213 15:04:37.426934 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.228.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:37.432441 containerd[2682]: time="2024-12-13T15:04:37.432409352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-a49a1da819,Uid:836de6ce488707c895e3d898704cb80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ac1192b7b46ebf34f2b748515625cd1c89e83ef520b67538374e6aa259cd48e\"" Dec 13 15:04:37.433219 containerd[2682]: time="2024-12-13T15:04:37.433200752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-a49a1da819,Uid:b7973d4f4610e213dc6e12dd15421c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fef5e09fbd590fb76cc83c30c0717fb2b9797ee5c5b7a32c19ea0f9ff2f24ca\"" Dec 13 15:04:37.434292 containerd[2682]: time="2024-12-13T15:04:37.434271032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-a49a1da819,Uid:fd680b1cb849f5f4fea9238ef2117cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d76969e77a074d6b53bd7075fe15bb059f999e8e4d9b139dda3a0fcf26c4709\"" Dec 13 15:04:37.435308 containerd[2682]: time="2024-12-13T15:04:37.435284352Z" level=info msg="CreateContainer within sandbox \"6fef5e09fbd590fb76cc83c30c0717fb2b9797ee5c5b7a32c19ea0f9ff2f24ca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 15:04:37.435378 containerd[2682]: time="2024-12-13T15:04:37.435292872Z" level=info msg="CreateContainer within sandbox \"4ac1192b7b46ebf34f2b748515625cd1c89e83ef520b67538374e6aa259cd48e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 15:04:37.436091 containerd[2682]: time="2024-12-13T15:04:37.436065632Z" level=info msg="CreateContainer within sandbox \"7d76969e77a074d6b53bd7075fe15bb059f999e8e4d9b139dda3a0fcf26c4709\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 15:04:37.443504 containerd[2682]: time="2024-12-13T15:04:37.443465152Z" level=info msg="CreateContainer within sandbox \"4ac1192b7b46ebf34f2b748515625cd1c89e83ef520b67538374e6aa259cd48e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e1f1f7bda175504645bf4e5f4ebf89af893219a44614021326c6cb491503a4a\"" Dec 13 15:04:37.443978 containerd[2682]: time="2024-12-13T15:04:37.443950112Z" level=info msg="CreateContainer within sandbox \"6fef5e09fbd590fb76cc83c30c0717fb2b9797ee5c5b7a32c19ea0f9ff2f24ca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3800e9594e664e608b79c8bb06e330e33c5df6842996e802545e38447caddd3f\"" Dec 13 15:04:37.444063 containerd[2682]: time="2024-12-13T15:04:37.444043992Z" level=info msg="CreateContainer within sandbox \"7d76969e77a074d6b53bd7075fe15bb059f999e8e4d9b139dda3a0fcf26c4709\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6c4c22cb4d51ab0d0e2f93777d7eaca64173e728af14c14a496b1ecd6bcf6cf\"" Dec 13 15:04:37.444087 containerd[2682]: 
time="2024-12-13T15:04:37.444067592Z" level=info msg="StartContainer for \"6e1f1f7bda175504645bf4e5f4ebf89af893219a44614021326c6cb491503a4a\"" Dec 13 15:04:37.444236 containerd[2682]: time="2024-12-13T15:04:37.444219432Z" level=info msg="StartContainer for \"3800e9594e664e608b79c8bb06e330e33c5df6842996e802545e38447caddd3f\"" Dec 13 15:04:37.444319 containerd[2682]: time="2024-12-13T15:04:37.444305272Z" level=info msg="StartContainer for \"d6c4c22cb4d51ab0d0e2f93777d7eaca64173e728af14c14a496b1ecd6bcf6cf\"" Dec 13 15:04:37.477854 systemd[1]: Started cri-containerd-3800e9594e664e608b79c8bb06e330e33c5df6842996e802545e38447caddd3f.scope - libcontainer container 3800e9594e664e608b79c8bb06e330e33c5df6842996e802545e38447caddd3f. Dec 13 15:04:37.479024 systemd[1]: Started cri-containerd-6e1f1f7bda175504645bf4e5f4ebf89af893219a44614021326c6cb491503a4a.scope - libcontainer container 6e1f1f7bda175504645bf4e5f4ebf89af893219a44614021326c6cb491503a4a. Dec 13 15:04:37.480104 systemd[1]: Started cri-containerd-d6c4c22cb4d51ab0d0e2f93777d7eaca64173e728af14c14a496b1ecd6bcf6cf.scope - libcontainer container d6c4c22cb4d51ab0d0e2f93777d7eaca64173e728af14c14a496b1ecd6bcf6cf. Dec 13 15:04:37.502322 containerd[2682]: time="2024-12-13T15:04:37.502287232Z" level=info msg="StartContainer for \"3800e9594e664e608b79c8bb06e330e33c5df6842996e802545e38447caddd3f\" returns successfully" Dec 13 15:04:37.503719 containerd[2682]: time="2024-12-13T15:04:37.503691672Z" level=info msg="StartContainer for \"6e1f1f7bda175504645bf4e5f4ebf89af893219a44614021326c6cb491503a4a\" returns successfully" Dec 13 15:04:37.504633 containerd[2682]: time="2024-12-13T15:04:37.504609872Z" level=info msg="StartContainer for \"d6c4c22cb4d51ab0d0e2f93777d7eaca64173e728af14c14a496b1ecd6bcf6cf\" returns successfully" Dec 13 15:04:37.577441 kubelet[3857]: W1213 15:04:37.577390 3857 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.228.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:37.577689 kubelet[3857]: E1213 15:04:37.577448 3857 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.228.225:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.228.225:6443: connect: connection refused Dec 13 15:04:38.062952 kubelet[3857]: I1213 15:04:38.062934 3857 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:38.839556 kubelet[3857]: E1213 15:04:38.839520 3857 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.0.0-a-a49a1da819\" not found" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:38.840976 kubelet[3857]: I1213 15:04:38.840961 3857 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:39.419227 kubelet[3857]: E1213 15:04:39.419194 3857 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.0.0-a-a49a1da819\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:39.545502 kubelet[3857]: I1213 15:04:39.545476 3857 apiserver.go:52] "Watching apiserver" Dec 13 15:04:39.554038 kubelet[3857]: I1213 15:04:39.554020 3857 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 15:04:40.830604 kubelet[3857]: 
W1213 15:04:40.830581 3857 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:41.603480 systemd[1]: Reloading requested from client PID 4272 ('systemctl') (unit session-9.scope)... Dec 13 15:04:41.603491 systemd[1]: Reloading... Dec 13 15:04:41.665688 zram_generator::config[4317]: No configuration found. Dec 13 15:04:41.755415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:04:41.837350 systemd[1]: Reloading finished in 233 ms. Dec 13 15:04:41.873061 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:41.885393 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 15:04:41.885633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:41.885688 systemd[1]: kubelet.service: Consumed 1.440s CPU time, 133.7M memory peak, 0B memory swap peak. Dec 13 15:04:41.900951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 15:04:41.995139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 15:04:41.998911 (kubelet)[4377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 15:04:42.032042 kubelet[4377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 15:04:42.032042 kubelet[4377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 15:04:42.032042 kubelet[4377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 15:04:42.032274 kubelet[4377]: I1213 15:04:42.032094 4377 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 15:04:42.035597 kubelet[4377]: I1213 15:04:42.035578 4377 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 15:04:42.035623 kubelet[4377]: I1213 15:04:42.035600 4377 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 15:04:42.037148 kubelet[4377]: I1213 15:04:42.037127 4377 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 15:04:42.038601 kubelet[4377]: I1213 15:04:42.038588 4377 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 15:04:42.040385 kubelet[4377]: I1213 15:04:42.040369 4377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 15:04:42.076068 kubelet[4377]: I1213 15:04:42.076038 4377 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 15:04:42.076253 kubelet[4377]: I1213 15:04:42.076240 4377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 15:04:42.076424 kubelet[4377]: I1213 15:04:42.076412 4377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 15:04:42.076494 kubelet[4377]: I1213 15:04:42.076432 4377 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 15:04:42.076494 kubelet[4377]: I1213 15:04:42.076441 4377 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 15:04:42.076494 kubelet[4377]: I1213 15:04:42.076472 4377 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:04:42.076575 kubelet[4377]: I1213 15:04:42.076564 4377 kubelet.go:396] "Attempting to sync node with API server" Dec 13 15:04:42.076598 kubelet[4377]: I1213 15:04:42.076578 4377 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 15:04:42.076618 kubelet[4377]: I1213 15:04:42.076600 4377 kubelet.go:312] "Adding apiserver pod source" Dec 13 15:04:42.076618 kubelet[4377]: I1213 15:04:42.076613 4377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 15:04:42.077096 kubelet[4377]: I1213 15:04:42.077081 4377 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 15:04:42.077266 kubelet[4377]: I1213 15:04:42.077255 4377 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 15:04:42.077600 kubelet[4377]: I1213 15:04:42.077588 4377 server.go:1256] "Started kubelet" Dec 13 15:04:42.077702 kubelet[4377]: I1213 15:04:42.077691 4377 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 15:04:42.077728 kubelet[4377]: I1213 15:04:42.077712 4377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 15:04:42.078320 kubelet[4377]: I1213 15:04:42.078287 4377 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 15:04:42.078516 kubelet[4377]: 
I1213 15:04:42.078501 4377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 15:04:42.078585 kubelet[4377]: E1213 15:04:42.078572 4377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4186.0.0-a-a49a1da819\" not found" Dec 13 15:04:42.078609 kubelet[4377]: I1213 15:04:42.078571 4377 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 15:04:42.078609 kubelet[4377]: I1213 15:04:42.078593 4377 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 15:04:42.078761 kubelet[4377]: I1213 15:04:42.078745 4377 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 15:04:42.079036 kubelet[4377]: I1213 15:04:42.079023 4377 factory.go:221] Registration of the systemd container factory successfully Dec 13 15:04:42.079121 kubelet[4377]: I1213 15:04:42.079106 4377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 15:04:42.079273 kubelet[4377]: E1213 15:04:42.079258 4377 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 15:04:42.081040 kubelet[4377]: I1213 15:04:42.081023 4377 server.go:461] "Adding debug handlers to kubelet server" Dec 13 15:04:42.081201 kubelet[4377]: I1213 15:04:42.081183 4377 factory.go:221] Registration of the containerd container factory successfully Dec 13 15:04:42.085968 kubelet[4377]: I1213 15:04:42.085948 4377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 15:04:42.086926 kubelet[4377]: I1213 15:04:42.086908 4377 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 15:04:42.086952 kubelet[4377]: I1213 15:04:42.086932 4377 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 15:04:42.086970 kubelet[4377]: I1213 15:04:42.086954 4377 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 15:04:42.087030 kubelet[4377]: E1213 15:04:42.087010 4377 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 15:04:42.112102 kubelet[4377]: I1213 15:04:42.112083 4377 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 15:04:42.112126 kubelet[4377]: I1213 15:04:42.112104 4377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 15:04:42.112126 kubelet[4377]: I1213 15:04:42.112120 4377 state_mem.go:36] "Initialized new in-memory state store" Dec 13 15:04:42.112265 kubelet[4377]: I1213 15:04:42.112256 4377 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 15:04:42.112289 kubelet[4377]: I1213 15:04:42.112278 4377 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 15:04:42.112289 kubelet[4377]: I1213 15:04:42.112285 4377 policy_none.go:49] "None policy: Start" Dec 13 15:04:42.112802 kubelet[4377]: I1213 15:04:42.112789 4377 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 15:04:42.112825 kubelet[4377]: I1213 15:04:42.112806 4377 state_mem.go:35] "Initializing new in-memory state store" Dec 13 15:04:42.112929 kubelet[4377]: I1213 15:04:42.112922 4377 state_mem.go:75] "Updated machine memory state" Dec 13 15:04:42.115981 kubelet[4377]: I1213 15:04:42.115968 4377 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 15:04:42.116183 kubelet[4377]: I1213 15:04:42.116174 4377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 15:04:42.181931 kubelet[4377]: I1213 15:04:42.181914 4377 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.186306 kubelet[4377]: I1213 15:04:42.186287 4377 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.186361 kubelet[4377]: I1213 15:04:42.186351 4377 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.187323 kubelet[4377]: I1213 15:04:42.187163 4377 topology_manager.go:215] "Topology Admit Handler" podUID="fd680b1cb849f5f4fea9238ef2117cba" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.187323 kubelet[4377]: I1213 15:04:42.187228 4377 topology_manager.go:215] "Topology Admit Handler" podUID="b7973d4f4610e213dc6e12dd15421c7c" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.187323 kubelet[4377]: I1213 15:04:42.187277 4377 topology_manager.go:215] "Topology Admit Handler" podUID="836de6ce488707c895e3d898704cb80e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.190200 kubelet[4377]: W1213 15:04:42.190179 4377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:42.190433 kubelet[4377]: W1213 15:04:42.190416 4377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:42.190824 kubelet[4377]: 
W1213 15:04:42.190813 4377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:42.190865 kubelet[4377]: E1213 15:04:42.190853 4377 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" already exists" pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279316 kubelet[4377]: I1213 15:04:42.279294 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279403 kubelet[4377]: I1213 15:04:42.279330 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279403 kubelet[4377]: I1213 15:04:42.279352 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279507 kubelet[4377]: I1213 15:04:42.279439 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/836de6ce488707c895e3d898704cb80e-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" (UID: \"836de6ce488707c895e3d898704cb80e\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279507 kubelet[4377]: I1213 15:04:42.279475 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279583 kubelet[4377]: I1213 15:04:42.279513 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279583 kubelet[4377]: I1213 15:04:42.279548 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279699 kubelet[4377]: I1213 15:04:42.279593 4377 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd680b1cb849f5f4fea9238ef2117cba-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" (UID: \"fd680b1cb849f5f4fea9238ef2117cba\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:42.279699 kubelet[4377]: I1213 15:04:42.279623 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7973d4f4610e213dc6e12dd15421c7c-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-a49a1da819\" (UID: \"b7973d4f4610e213dc6e12dd15421c7c\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:43.077465 kubelet[4377]: I1213 15:04:43.077423 4377 apiserver.go:52] "Watching apiserver" Dec 13 15:04:43.096095 kubelet[4377]: W1213 15:04:43.096072 4377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:43.096156 kubelet[4377]: E1213 15:04:43.096123 4377 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.0.0-a-a49a1da819\" already exists" pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:43.096187 kubelet[4377]: W1213 15:04:43.096162 4377 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 15:04:43.096227 kubelet[4377]: E1213 15:04:43.096215 4377 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.0.0-a-a49a1da819\" already exists" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" Dec 13 15:04:43.112477 kubelet[4377]: I1213 15:04:43.112450 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.0.0-a-a49a1da819" podStartSLOduration=3.112403712 podStartE2EDuration="3.112403712s" podCreationTimestamp="2024-12-13 15:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:04:43.106368032 +0000 UTC m=+1.104336441" watchObservedRunningTime="2024-12-13 15:04:43.112403712 +0000 UTC m=+1.110372121" Dec 13 15:04:43.126745 kubelet[4377]: I1213 15:04:43.126722 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-a49a1da819" podStartSLOduration=1.126686112 podStartE2EDuration="1.126686112s" podCreationTimestamp="2024-12-13 15:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:04:43.112544792 +0000 UTC m=+1.110513201" watchObservedRunningTime="2024-12-13 15:04:43.126686112 +0000 UTC m=+1.124654521" Dec 13 15:04:43.126816 kubelet[4377]: I1213 15:04:43.126804 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.0.0-a-a49a1da819" podStartSLOduration=1.126788832 podStartE2EDuration="1.126788832s" podCreationTimestamp="2024-12-13 15:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:04:43.126786392 +0000 UTC m=+1.124754761" watchObservedRunningTime="2024-12-13 15:04:43.126788832 +0000 
UTC m=+1.124757241" Dec 13 15:04:43.179124 kubelet[4377]: I1213 15:04:43.179102 4377 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 15:04:45.992695 sudo[2932]: pam_unix(sudo:session): session closed for user root Dec 13 15:04:46.056506 sshd[2931]: Connection closed by 147.75.109.163 port 38688 Dec 13 15:04:46.056906 sshd-session[2929]: pam_unix(sshd:session): session closed for user core Dec 13 15:04:46.059878 systemd[1]: sshd@6-147.28.228.225:22-147.75.109.163:38688.service: Deactivated successfully. Dec 13 15:04:46.062006 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 15:04:46.062227 systemd[1]: session-9.scope: Consumed 6.973s CPU time, 217.2M memory peak, 0B memory swap peak. Dec 13 15:04:46.062581 systemd-logind[2667]: Session 9 logged out. Waiting for processes to exit. Dec 13 15:04:46.063272 systemd-logind[2667]: Removed session 9. Dec 13 15:04:51.348950 update_engine[2677]: I20241213 15:04:51.348879 2677 update_attempter.cc:509] Updating boot flags... Dec 13 15:04:51.391689 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4644) Dec 13 15:04:51.421689 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (4646) Dec 13 15:04:56.403390 kubelet[4377]: I1213 15:04:56.403356 4377 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 15:04:56.403776 containerd[2682]: time="2024-12-13T15:04:56.403672386Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 15:04:56.403939 kubelet[4377]: I1213 15:04:56.403838 4377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 15:04:56.743864 kubelet[4377]: I1213 15:04:56.743834 4377 topology_manager.go:215] "Topology Admit Handler" podUID="15d73fe4-134f-47e1-a33f-79d7969c388f" podNamespace="kube-system" podName="kube-proxy-bhpz4" Dec 13 15:04:56.748238 systemd[1]: Created slice kubepods-besteffort-pod15d73fe4_134f_47e1_a33f_79d7969c388f.slice - libcontainer container kubepods-besteffort-pod15d73fe4_134f_47e1_a33f_79d7969c388f.slice. 
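The "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" pair above is kubelet pushing the node's 192.168.0.0/24 PodCIDR to containerd over the CRI UpdateRuntimeConfig RPC; containerd answers that no CNI config template is specified and that it will wait for another component (here, Calico) to drop the CNI config. A minimal Go sketch of that single call, assuming the usual containerd CRI socket path (the path is not taken from this log):

    // update-podcidr.go: sketch of the CRI UpdateRuntimeConfig call behind the
    // "Updating runtime config through cri with podcidr" log line (assumed
    // socket path; the CIDR value is the one shown in the log).
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod CIDR pushed to the runtime")
    }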
Dec 13 15:04:56.772538 kubelet[4377]: I1213 15:04:56.772509 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15d73fe4-134f-47e1-a33f-79d7969c388f-kube-proxy\") pod \"kube-proxy-bhpz4\" (UID: \"15d73fe4-134f-47e1-a33f-79d7969c388f\") " pod="kube-system/kube-proxy-bhpz4" Dec 13 15:04:56.772602 kubelet[4377]: I1213 15:04:56.772564 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15d73fe4-134f-47e1-a33f-79d7969c388f-xtables-lock\") pod \"kube-proxy-bhpz4\" (UID: \"15d73fe4-134f-47e1-a33f-79d7969c388f\") " pod="kube-system/kube-proxy-bhpz4" Dec 13 15:04:56.772700 kubelet[4377]: I1213 15:04:56.772683 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15d73fe4-134f-47e1-a33f-79d7969c388f-lib-modules\") pod \"kube-proxy-bhpz4\" (UID: \"15d73fe4-134f-47e1-a33f-79d7969c388f\") " pod="kube-system/kube-proxy-bhpz4" Dec 13 15:04:56.772760 kubelet[4377]: I1213 15:04:56.772740 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24wx\" (UniqueName: \"kubernetes.io/projected/15d73fe4-134f-47e1-a33f-79d7969c388f-kube-api-access-l24wx\") pod \"kube-proxy-bhpz4\" (UID: \"15d73fe4-134f-47e1-a33f-79d7969c388f\") " pod="kube-system/kube-proxy-bhpz4" Dec 13 15:04:56.880826 kubelet[4377]: E1213 15:04:56.880796 4377 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 15:04:56.880863 kubelet[4377]: E1213 15:04:56.880831 4377 projected.go:200] Error preparing data for projected volume kube-api-access-l24wx for pod kube-system/kube-proxy-bhpz4: configmap "kube-root-ca.crt" not found Dec 13 15:04:56.880907 kubelet[4377]: E1213 15:04:56.880896 4377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/15d73fe4-134f-47e1-a33f-79d7969c388f-kube-api-access-l24wx podName:15d73fe4-134f-47e1-a33f-79d7969c388f nodeName:}" failed. No retries permitted until 2024-12-13 15:04:57.380875007 +0000 UTC m=+15.378843416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l24wx" (UniqueName: "kubernetes.io/projected/15d73fe4-134f-47e1-a33f-79d7969c388f-kube-api-access-l24wx") pod "kube-proxy-bhpz4" (UID: "15d73fe4-134f-47e1-a33f-79d7969c388f") : configmap "kube-root-ca.crt" not found Dec 13 15:04:56.896062 kubelet[4377]: I1213 15:04:56.896036 4377 topology_manager.go:215] "Topology Admit Handler" podUID="69615856-7942-474a-8301-489b23158304" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-pttc4" Dec 13 15:04:56.900443 systemd[1]: Created slice kubepods-besteffort-pod69615856_7942_474a_8301_489b23158304.slice - libcontainer container kubepods-besteffort-pod69615856_7942_474a_8301_489b23158304.slice. 
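The MountVolume.SetUp failure above is for the projected kube-api-access volume: besides the service-account token it bundles the kube-root-ca.crt ConfigMap, which the kube-controller-manager's root-CA publisher has not yet created in the kube-system namespace, so kubelet backs off for 500ms and retries (the tigera-operator namespace hits the same error just below). A minimal client-go sketch of the lookup that is failing, with the kubeconfig path as an illustrative assumption:

    // check-root-ca.go: sketch of the ConfigMap lookup that the projected
    // kube-api-access volume depends on (kubeconfig path is an assumption).
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").
            Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
        if err != nil {
            // Until the root-CA publisher creates the ConfigMap, this mirrors
            // the `configmap "kube-root-ca.crt" not found` errors in the log.
            fmt.Println("not available yet:", err)
            return
        }
        fmt.Println("ca.crt bytes:", len(cm.Data["ca.crt"]))
    }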
Dec 13 15:04:56.974152 kubelet[4377]: I1213 15:04:56.974127 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/69615856-7942-474a-8301-489b23158304-var-lib-calico\") pod \"tigera-operator-c7ccbd65-pttc4\" (UID: \"69615856-7942-474a-8301-489b23158304\") " pod="tigera-operator/tigera-operator-c7ccbd65-pttc4" Dec 13 15:04:56.974211 kubelet[4377]: I1213 15:04:56.974160 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlsm\" (UniqueName: \"kubernetes.io/projected/69615856-7942-474a-8301-489b23158304-kube-api-access-knlsm\") pod \"tigera-operator-c7ccbd65-pttc4\" (UID: \"69615856-7942-474a-8301-489b23158304\") " pod="tigera-operator/tigera-operator-c7ccbd65-pttc4" Dec 13 15:04:57.083495 kubelet[4377]: E1213 15:04:57.083425 4377 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 15:04:57.083495 kubelet[4377]: E1213 15:04:57.083459 4377 projected.go:200] Error preparing data for projected volume kube-api-access-knlsm for pod tigera-operator/tigera-operator-c7ccbd65-pttc4: configmap "kube-root-ca.crt" not found Dec 13 15:04:57.083579 kubelet[4377]: E1213 15:04:57.083536 4377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69615856-7942-474a-8301-489b23158304-kube-api-access-knlsm podName:69615856-7942-474a-8301-489b23158304 nodeName:}" failed. No retries permitted until 2024-12-13 15:04:57.583512972 +0000 UTC m=+15.581481381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-knlsm" (UniqueName: "kubernetes.io/projected/69615856-7942-474a-8301-489b23158304-kube-api-access-knlsm") pod "tigera-operator-c7ccbd65-pttc4" (UID: "69615856-7942-474a-8301-489b23158304") : configmap "kube-root-ca.crt" not found Dec 13 15:04:57.668841 containerd[2682]: time="2024-12-13T15:04:57.668802129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhpz4,Uid:15d73fe4-134f-47e1-a33f-79d7969c388f,Namespace:kube-system,Attempt:0,}" Dec 13 15:04:57.681359 containerd[2682]: time="2024-12-13T15:04:57.681297531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:04:57.681410 containerd[2682]: time="2024-12-13T15:04:57.681358489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:04:57.681410 containerd[2682]: time="2024-12-13T15:04:57.681370009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:57.681459 containerd[2682]: time="2024-12-13T15:04:57.681444566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:57.701793 systemd[1]: Started cri-containerd-65314e73f74ab4848f0b93ebfa7d2c7dcd2527aad3c4e44ab542724e3ebcb1e7.scope - libcontainer container 65314e73f74ab4848f0b93ebfa7d2c7dcd2527aad3c4e44ab542724e3ebcb1e7. 
Dec 13 15:04:57.717692 containerd[2682]: time="2024-12-13T15:04:57.717653729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhpz4,Uid:15d73fe4-134f-47e1-a33f-79d7969c388f,Namespace:kube-system,Attempt:0,} returns sandbox id \"65314e73f74ab4848f0b93ebfa7d2c7dcd2527aad3c4e44ab542724e3ebcb1e7\"" Dec 13 15:04:57.719770 containerd[2682]: time="2024-12-13T15:04:57.719743510Z" level=info msg="CreateContainer within sandbox \"65314e73f74ab4848f0b93ebfa7d2c7dcd2527aad3c4e44ab542724e3ebcb1e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 15:04:57.728129 containerd[2682]: time="2024-12-13T15:04:57.728091590Z" level=info msg="CreateContainer within sandbox \"65314e73f74ab4848f0b93ebfa7d2c7dcd2527aad3c4e44ab542724e3ebcb1e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66a1303a8d68dbdbf2a065002e473aad58d428a7c03f54058263bfa17d11ee5d\"" Dec 13 15:04:57.728492 containerd[2682]: time="2024-12-13T15:04:57.728466260Z" level=info msg="StartContainer for \"66a1303a8d68dbdbf2a065002e473aad58d428a7c03f54058263bfa17d11ee5d\"" Dec 13 15:04:57.753797 systemd[1]: Started cri-containerd-66a1303a8d68dbdbf2a065002e473aad58d428a7c03f54058263bfa17d11ee5d.scope - libcontainer container 66a1303a8d68dbdbf2a065002e473aad58d428a7c03f54058263bfa17d11ee5d. Dec 13 15:04:57.774348 containerd[2682]: time="2024-12-13T15:04:57.774317786Z" level=info msg="StartContainer for \"66a1303a8d68dbdbf2a065002e473aad58d428a7c03f54058263bfa17d11ee5d\" returns successfully" Dec 13 15:04:57.802450 containerd[2682]: time="2024-12-13T15:04:57.802424381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pttc4,Uid:69615856-7942-474a-8301-489b23158304,Namespace:tigera-operator,Attempt:0,}" Dec 13 15:04:57.815107 containerd[2682]: time="2024-12-13T15:04:57.815043260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:04:57.815133 containerd[2682]: time="2024-12-13T15:04:57.815108338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:04:57.815133 containerd[2682]: time="2024-12-13T15:04:57.815120218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:57.815214 containerd[2682]: time="2024-12-13T15:04:57.815195776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:04:57.832874 systemd[1]: Started cri-containerd-086e57c1c60d497e0b404eba0b65792d5e4379fa915bbc53d969547e59004f76.scope - libcontainer container 086e57c1c60d497e0b404eba0b65792d5e4379fa915bbc53d969547e59004f76. Dec 13 15:04:57.855787 containerd[2682]: time="2024-12-13T15:04:57.855742334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-pttc4,Uid:69615856-7942-474a-8301-489b23158304,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"086e57c1c60d497e0b404eba0b65792d5e4379fa915bbc53d969547e59004f76\"" Dec 13 15:04:57.856933 containerd[2682]: time="2024-12-13T15:04:57.856915301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 15:04:58.986662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336495532.mount: Deactivated successfully. 
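The containerd messages above trace the CRI round trips kubelet makes for each pod it starts: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer reports success; when an image is not yet present, an ImageService PullImage call comes first (as with quay.io/tigera/operator:v1.36.2 just above). A condensed Go sketch of that sequence against the CRI socket, with the kube-proxy pod metadata taken from the log and the socket path and image tag as illustrative assumptions:

    // cri-start.go: sketch of the PullImage -> RunPodSandbox -> CreateContainer
    // -> StartContainer sequence reflected in the containerd log lines above.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket path
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        ctx := context.TODO()
        img := runtimeapi.NewImageServiceClient(conn)
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Image reference is an assumed tag; the log only names the pod.
        const image = "registry.k8s.io/kube-proxy:v1.29.2"
        if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: image},
        }); err != nil {
            panic(err)
        }

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-bhpz4",
                Namespace: "kube-system",
                Uid:       "15d73fe4-134f-47e1-a33f-79d7969c388f",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
                Image:    &runtimeapi.ImageSpec{Image: image},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: ctr.ContainerId,
        }); err != nil {
            panic(err)
        }
        fmt.Println("started", ctr.ContainerId, "in sandbox", sb.PodSandboxId)
    }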
Dec 13 15:04:59.728559 containerd[2682]: time="2024-12-13T15:04:59.728508766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125928" Dec 13 15:04:59.728559 containerd[2682]: time="2024-12-13T15:04:59.728522046Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:59.729352 containerd[2682]: time="2024-12-13T15:04:59.729327146Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:59.731303 containerd[2682]: time="2024-12-13T15:04:59.731281536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:04:59.732064 containerd[2682]: time="2024-12-13T15:04:59.732042237Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.875101217s" Dec 13 15:04:59.732086 containerd[2682]: time="2024-12-13T15:04:59.732070316Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 15:04:59.733485 containerd[2682]: time="2024-12-13T15:04:59.733453522Z" level=info msg="CreateContainer within sandbox \"086e57c1c60d497e0b404eba0b65792d5e4379fa915bbc53d969547e59004f76\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 15:04:59.738203 containerd[2682]: time="2024-12-13T15:04:59.738180843Z" level=info msg="CreateContainer within sandbox \"086e57c1c60d497e0b404eba0b65792d5e4379fa915bbc53d969547e59004f76\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"58d89d221379ac4ad8e73c0a887dbbfbc4ecc634d70c877bb73a471fb92b91ff\"" Dec 13 15:04:59.738518 containerd[2682]: time="2024-12-13T15:04:59.738495835Z" level=info msg="StartContainer for \"58d89d221379ac4ad8e73c0a887dbbfbc4ecc634d70c877bb73a471fb92b91ff\"" Dec 13 15:04:59.773843 systemd[1]: Started cri-containerd-58d89d221379ac4ad8e73c0a887dbbfbc4ecc634d70c877bb73a471fb92b91ff.scope - libcontainer container 58d89d221379ac4ad8e73c0a887dbbfbc4ecc634d70c877bb73a471fb92b91ff. 
Dec 13 15:04:59.790503 containerd[2682]: time="2024-12-13T15:04:59.790471766Z" level=info msg="StartContainer for \"58d89d221379ac4ad8e73c0a887dbbfbc4ecc634d70c877bb73a471fb92b91ff\" returns successfully" Dec 13 15:05:00.128723 kubelet[4377]: I1213 15:05:00.128650 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bhpz4" podStartSLOduration=4.128612295 podStartE2EDuration="4.128612295s" podCreationTimestamp="2024-12-13 15:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:04:58.125835181 +0000 UTC m=+16.123803590" watchObservedRunningTime="2024-12-13 15:05:00.128612295 +0000 UTC m=+18.126580704" Dec 13 15:05:00.129011 kubelet[4377]: I1213 15:05:00.128739 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-pttc4" podStartSLOduration=2.252990973 podStartE2EDuration="4.128721573s" podCreationTimestamp="2024-12-13 15:04:56 +0000 UTC" firstStartedPulling="2024-12-13 15:04:57.856561951 +0000 UTC m=+15.854530320" lastFinishedPulling="2024-12-13 15:04:59.732292471 +0000 UTC m=+17.730260920" observedRunningTime="2024-12-13 15:05:00.128506138 +0000 UTC m=+18.126474587" watchObservedRunningTime="2024-12-13 15:05:00.128721573 +0000 UTC m=+18.126689982" Dec 13 15:05:03.360698 kubelet[4377]: I1213 15:05:03.360639 4377 topology_manager.go:215] "Topology Admit Handler" podUID="84a1ea1f-37fa-4760-b722-83cd399d3310" podNamespace="calico-system" podName="calico-typha-d9948d456-bj2xj" Dec 13 15:05:03.365602 systemd[1]: Created slice kubepods-besteffort-pod84a1ea1f_37fa_4760_b722_83cd399d3310.slice - libcontainer container kubepods-besteffort-pod84a1ea1f_37fa_4760_b722_83cd399d3310.slice. Dec 13 15:05:03.392796 kubelet[4377]: I1213 15:05:03.392760 4377 topology_manager.go:215] "Topology Admit Handler" podUID="783045f4-fac8-4ee4-9970-c2bcc3c7e312" podNamespace="calico-system" podName="calico-node-swnk4" Dec 13 15:05:03.398133 systemd[1]: Created slice kubepods-besteffort-pod783045f4_fac8_4ee4_9970_c2bcc3c7e312.slice - libcontainer container kubepods-besteffort-pod783045f4_fac8_4ee4_9970_c2bcc3c7e312.slice. 
Dec 13 15:05:03.414869 kubelet[4377]: I1213 15:05:03.414839 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-lib-modules\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.414941 kubelet[4377]: I1213 15:05:03.414881 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-var-run-calico\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.414985 kubelet[4377]: I1213 15:05:03.414954 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-cni-net-dir\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415040 kubelet[4377]: I1213 15:05:03.414995 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/783045f4-fac8-4ee4-9970-c2bcc3c7e312-tigera-ca-bundle\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415040 kubelet[4377]: I1213 15:05:03.415017 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-cni-bin-dir\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415105 kubelet[4377]: I1213 15:05:03.415046 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/783045f4-fac8-4ee4-9970-c2bcc3c7e312-node-certs\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415105 kubelet[4377]: I1213 15:05:03.415086 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-var-lib-calico\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415147 kubelet[4377]: I1213 15:05:03.415120 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-flexvol-driver-host\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415168 kubelet[4377]: I1213 15:05:03.415148 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/84a1ea1f-37fa-4760-b722-83cd399d3310-tigera-ca-bundle\") pod \"calico-typha-d9948d456-bj2xj\" (UID: \"84a1ea1f-37fa-4760-b722-83cd399d3310\") " pod="calico-system/calico-typha-d9948d456-bj2xj" Dec 13 15:05:03.415189 kubelet[4377]: I1213 15:05:03.415172 
4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w42cw\" (UniqueName: \"kubernetes.io/projected/84a1ea1f-37fa-4760-b722-83cd399d3310-kube-api-access-w42cw\") pod \"calico-typha-d9948d456-bj2xj\" (UID: \"84a1ea1f-37fa-4760-b722-83cd399d3310\") " pod="calico-system/calico-typha-d9948d456-bj2xj" Dec 13 15:05:03.415226 kubelet[4377]: I1213 15:05:03.415210 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-xtables-lock\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415256 kubelet[4377]: I1213 15:05:03.415248 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll96\" (UniqueName: \"kubernetes.io/projected/783045f4-fac8-4ee4-9970-c2bcc3c7e312-kube-api-access-xll96\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415291 kubelet[4377]: I1213 15:05:03.415283 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-policysync\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415342 kubelet[4377]: I1213 15:05:03.415331 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/783045f4-fac8-4ee4-9970-c2bcc3c7e312-cni-log-dir\") pod \"calico-node-swnk4\" (UID: \"783045f4-fac8-4ee4-9970-c2bcc3c7e312\") " pod="calico-system/calico-node-swnk4" Dec 13 15:05:03.415368 kubelet[4377]: I1213 15:05:03.415359 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/84a1ea1f-37fa-4760-b722-83cd399d3310-typha-certs\") pod \"calico-typha-d9948d456-bj2xj\" (UID: \"84a1ea1f-37fa-4760-b722-83cd399d3310\") " pod="calico-system/calico-typha-d9948d456-bj2xj" Dec 13 15:05:03.486479 kubelet[4377]: I1213 15:05:03.486456 4377 topology_manager.go:215] "Topology Admit Handler" podUID="7add470f-d95a-4158-a5d4-494b47592652" podNamespace="calico-system" podName="csi-node-driver-lh26w" Dec 13 15:05:03.486721 kubelet[4377]: E1213 15:05:03.486706 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:03.515953 kubelet[4377]: I1213 15:05:03.515920 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7add470f-d95a-4158-a5d4-494b47592652-socket-dir\") pod \"csi-node-driver-lh26w\" (UID: \"7add470f-d95a-4158-a5d4-494b47592652\") " pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:03.515953 kubelet[4377]: I1213 15:05:03.515963 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvc54\" (UniqueName: 
\"kubernetes.io/projected/7add470f-d95a-4158-a5d4-494b47592652-kube-api-access-bvc54\") pod \"csi-node-driver-lh26w\" (UID: \"7add470f-d95a-4158-a5d4-494b47592652\") " pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:03.516145 kubelet[4377]: I1213 15:05:03.516117 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7add470f-d95a-4158-a5d4-494b47592652-registration-dir\") pod \"csi-node-driver-lh26w\" (UID: \"7add470f-d95a-4158-a5d4-494b47592652\") " pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:03.516245 kubelet[4377]: I1213 15:05:03.516230 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7add470f-d95a-4158-a5d4-494b47592652-varrun\") pod \"csi-node-driver-lh26w\" (UID: \"7add470f-d95a-4158-a5d4-494b47592652\") " pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:03.516490 kubelet[4377]: I1213 15:05:03.516471 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7add470f-d95a-4158-a5d4-494b47592652-kubelet-dir\") pod \"csi-node-driver-lh26w\" (UID: \"7add470f-d95a-4158-a5d4-494b47592652\") " pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:03.517415 kubelet[4377]: E1213 15:05:03.517398 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.517443 kubelet[4377]: W1213 15:05:03.517414 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.517443 kubelet[4377]: E1213 15:05:03.517437 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.517687 kubelet[4377]: E1213 15:05:03.517660 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.517687 kubelet[4377]: W1213 15:05:03.517670 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.517737 kubelet[4377]: E1213 15:05:03.517691 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.518719 kubelet[4377]: E1213 15:05:03.518700 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.518719 kubelet[4377]: W1213 15:05:03.518717 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.518761 kubelet[4377]: E1213 15:05:03.518736 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 15:05:03.518987 kubelet[4377]: E1213 15:05:03.518976 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.519017 kubelet[4377]: W1213 15:05:03.518984 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.519017 kubelet[4377]: E1213 15:05:03.518999 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.524287 kubelet[4377]: E1213 15:05:03.524271 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.524287 kubelet[4377]: W1213 15:05:03.524285 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.524335 kubelet[4377]: E1213 15:05:03.524300 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.524837 kubelet[4377]: E1213 15:05:03.524821 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.524862 kubelet[4377]: W1213 15:05:03.524835 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.524862 kubelet[4377]: E1213 15:05:03.524851 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.617603 kubelet[4377]: E1213 15:05:03.617542 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.617603 kubelet[4377]: W1213 15:05:03.617556 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.617603 kubelet[4377]: E1213 15:05:03.617570 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.617812 kubelet[4377]: E1213 15:05:03.617794 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.617812 kubelet[4377]: W1213 15:05:03.617803 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.617894 kubelet[4377]: E1213 15:05:03.617816 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 15:05:03.618096 kubelet[4377]: E1213 15:05:03.618081 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.618120 kubelet[4377]: W1213 15:05:03.618095 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.618120 kubelet[4377]: E1213 15:05:03.618114 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.618330 kubelet[4377]: E1213 15:05:03.618319 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.618330 kubelet[4377]: W1213 15:05:03.618327 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.618376 kubelet[4377]: E1213 15:05:03.618340 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.618497 kubelet[4377]: E1213 15:05:03.618483 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.618497 kubelet[4377]: W1213 15:05:03.618492 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.618551 kubelet[4377]: E1213 15:05:03.618504 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.618726 kubelet[4377]: E1213 15:05:03.618708 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.618726 kubelet[4377]: W1213 15:05:03.618723 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.618776 kubelet[4377]: E1213 15:05:03.618742 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.618938 kubelet[4377]: E1213 15:05:03.618926 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.618960 kubelet[4377]: W1213 15:05:03.618935 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.618960 kubelet[4377]: E1213 15:05:03.618952 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 15:05:03.622144 kubelet[4377]: E1213 15:05:03.622133 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.622144 kubelet[4377]: W1213 15:05:03.622140 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.622185 kubelet[4377]: E1213 15:05:03.622153 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.622383 kubelet[4377]: E1213 15:05:03.622375 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.622404 kubelet[4377]: W1213 15:05:03.622383 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.622404 kubelet[4377]: E1213 15:05:03.622396 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.622622 kubelet[4377]: E1213 15:05:03.622611 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.622622 kubelet[4377]: W1213 15:05:03.622619 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.622658 kubelet[4377]: E1213 15:05:03.622630 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.629551 kubelet[4377]: E1213 15:05:03.629537 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:03.629551 kubelet[4377]: W1213 15:05:03.629549 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:03.629596 kubelet[4377]: E1213 15:05:03.629562 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:03.668990 containerd[2682]: time="2024-12-13T15:05:03.668953893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d9948d456-bj2xj,Uid:84a1ea1f-37fa-4760-b722-83cd399d3310,Namespace:calico-system,Attempt:0,}" Dec 13 15:05:03.681677 containerd[2682]: time="2024-12-13T15:05:03.681612487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:03.681709 containerd[2682]: time="2024-12-13T15:05:03.681667365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:03.681709 containerd[2682]: time="2024-12-13T15:05:03.681684365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:03.681771 containerd[2682]: time="2024-12-13T15:05:03.681757124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:03.696866 systemd[1]: Started cri-containerd-629ecda96dea46e42324f04c8bf2963697b1f08fb7ed9896ca92fe7df0f156e8.scope - libcontainer container 629ecda96dea46e42324f04c8bf2963697b1f08fb7ed9896ca92fe7df0f156e8. Dec 13 15:05:03.700757 containerd[2682]: time="2024-12-13T15:05:03.700725515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-swnk4,Uid:783045f4-fac8-4ee4-9970-c2bcc3c7e312,Namespace:calico-system,Attempt:0,}" Dec 13 15:05:03.712715 containerd[2682]: time="2024-12-13T15:05:03.712654443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:03.712737 containerd[2682]: time="2024-12-13T15:05:03.712711482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:03.712737 containerd[2682]: time="2024-12-13T15:05:03.712723802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:03.712813 containerd[2682]: time="2024-12-13T15:05:03.712794840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:03.719758 containerd[2682]: time="2024-12-13T15:05:03.719463031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d9948d456-bj2xj,Uid:84a1ea1f-37fa-4760-b722-83cd399d3310,Namespace:calico-system,Attempt:0,} returns sandbox id \"629ecda96dea46e42324f04c8bf2963697b1f08fb7ed9896ca92fe7df0f156e8\"" Dec 13 15:05:03.721826 systemd[1]: Started cri-containerd-0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f.scope - libcontainer container 0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f. 
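
The FlexVolume errors above all describe one condition: the kubelet's dynamic plugin probe execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not installed ("executable file not found in $PATH"), so the call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". For orientation only, a minimal Python sketch of the exec contract the kubelet expects from such a driver; the real nodeagent~uds binary is something else entirely, and the reply shape below follows the generic FlexVolume convention rather than anything in this log.

#!/usr/bin/env python3
import json
import sys

# Illustrative FlexVolume driver stub (not the missing nodeagent~uds binary).
# The kubelet runs "<driver> init" and parses a JSON status object from stdout;
# an empty stdout is what produces the "unexpected end of JSON input" errors above.
def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Minimal success reply; "attach": False (no controller-side
        # attach/detach step) is an assumption of this sketch.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Report anything the stub does not implement instead of staying silent.
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Until an executable answering that contract exists at the probed path, the kubelet keeps retrying and logging the same triplet of messages.
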
Dec 13 15:05:03.722194 containerd[2682]: time="2024-12-13T15:05:03.722169298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 15:05:03.738307 containerd[2682]: time="2024-12-13T15:05:03.738267745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-swnk4,Uid:783045f4-fac8-4ee4-9970-c2bcc3c7e312,Namespace:calico-system,Attempt:0,} returns sandbox id \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\"" Dec 13 15:05:04.691621 containerd[2682]: time="2024-12-13T15:05:04.691584007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:04.692045 containerd[2682]: time="2024-12-13T15:05:04.692009679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 15:05:04.692452 containerd[2682]: time="2024-12-13T15:05:04.692438391Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:04.694020 containerd[2682]: time="2024-12-13T15:05:04.693997923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:04.694640 containerd[2682]: time="2024-12-13T15:05:04.694617511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 972.412935ms" Dec 13 15:05:04.694668 containerd[2682]: time="2024-12-13T15:05:04.694646751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 15:05:04.695047 containerd[2682]: time="2024-12-13T15:05:04.695029104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 15:05:04.699771 containerd[2682]: time="2024-12-13T15:05:04.699750658Z" level=info msg="CreateContainer within sandbox \"629ecda96dea46e42324f04c8bf2963697b1f08fb7ed9896ca92fe7df0f156e8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 15:05:04.704582 containerd[2682]: time="2024-12-13T15:05:04.704559690Z" level=info msg="CreateContainer within sandbox \"629ecda96dea46e42324f04c8bf2963697b1f08fb7ed9896ca92fe7df0f156e8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d4a4a607ff4efd4fa4adebe8607bbbc4b44420db7100f7fe699e67d0f34412a5\"" Dec 13 15:05:04.704961 containerd[2682]: time="2024-12-13T15:05:04.704939203Z" level=info msg="StartContainer for \"d4a4a607ff4efd4fa4adebe8607bbbc4b44420db7100f7fe699e67d0f34412a5\"" Dec 13 15:05:04.731842 systemd[1]: Started cri-containerd-d4a4a607ff4efd4fa4adebe8607bbbc4b44420db7100f7fe699e67d0f34412a5.scope - libcontainer container d4a4a607ff4efd4fa4adebe8607bbbc4b44420db7100f7fe699e67d0f34412a5. 
Dec 13 15:05:04.756375 containerd[2682]: time="2024-12-13T15:05:04.756343706Z" level=info msg="StartContainer for \"d4a4a607ff4efd4fa4adebe8607bbbc4b44420db7100f7fe699e67d0f34412a5\" returns successfully" Dec 13 15:05:05.088199 kubelet[4377]: E1213 15:05:05.088121 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:05.136169 kubelet[4377]: I1213 15:05:05.136144 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-d9948d456-bj2xj" podStartSLOduration=1.163150253 podStartE2EDuration="2.136106857s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:03.721905743 +0000 UTC m=+21.719874152" lastFinishedPulling="2024-12-13 15:05:04.694862307 +0000 UTC m=+22.692830756" observedRunningTime="2024-12-13 15:05:05.135858381 +0000 UTC m=+23.133826790" watchObservedRunningTime="2024-12-13 15:05:05.136106857 +0000 UTC m=+23.134075266" Dec 13 15:05:05.156577 containerd[2682]: time="2024-12-13T15:05:05.156545908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:05.156648 containerd[2682]: time="2024-12-13T15:05:05.156598987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 15:05:05.157302 containerd[2682]: time="2024-12-13T15:05:05.157284255Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:05.159018 containerd[2682]: time="2024-12-13T15:05:05.158999026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:05.159686 containerd[2682]: time="2024-12-13T15:05:05.159654175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 464.599111ms" Dec 13 15:05:05.159712 containerd[2682]: time="2024-12-13T15:05:05.159690974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 15:05:05.161108 containerd[2682]: time="2024-12-13T15:05:05.161090670Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 15:05:05.166527 containerd[2682]: time="2024-12-13T15:05:05.166501258Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6\"" Dec 13 
15:05:05.166886 containerd[2682]: time="2024-12-13T15:05:05.166857972Z" level=info msg="StartContainer for \"49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6\"" Dec 13 15:05:05.192860 systemd[1]: Started cri-containerd-49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6.scope - libcontainer container 49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6. Dec 13 15:05:05.203799 kubelet[4377]: E1213 15:05:05.203780 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:05.203840 kubelet[4377]: W1213 15:05:05.203799 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:05.203840 kubelet[4377]: E1213 15:05:05.203818 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:05.204104 kubelet[4377]: E1213 15:05:05.204095 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:05.204140 kubelet[4377]: W1213 15:05:05.204104 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:05.204140 kubelet[4377]: E1213 15:05:05.204115 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:05.204360 kubelet[4377]: E1213 15:05:05.204351 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:05.204360 kubelet[4377]: W1213 15:05:05.204360 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:05.204410 kubelet[4377]: E1213 15:05:05.204370 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:05.204574 kubelet[4377]: E1213 15:05:05.204565 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:05.204574 kubelet[4377]: W1213 15:05:05.204573 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:05.204626 kubelet[4377]: E1213 15:05:05.204583 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 15:05:05.206939 kubelet[4377]: E1213 15:05:05.206931 4377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 15:05:05.206961 kubelet[4377]: W1213 15:05:05.206940 4377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 15:05:05.206961 kubelet[4377]: E1213 15:05:05.206949 4377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 15:05:05.212382 containerd[2682]: time="2024-12-13T15:05:05.212352194Z" level=info msg="StartContainer for \"49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6\" returns successfully" Dec 13 15:05:05.225234 systemd[1]: cri-containerd-49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6.scope: Deactivated successfully. Dec 13 15:05:05.359060 containerd[2682]: time="2024-12-13T15:05:05.358954809Z" level=info msg="shim disconnected" id=49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6 namespace=k8s.io Dec 13 15:05:05.359060 containerd[2682]: time="2024-12-13T15:05:05.359001008Z" level=warning msg="cleaning up after shim disconnected" id=49d0679c838b8c19cc9284104ac2378d82904615beac563a1d4fa6ee927e20c6 namespace=k8s.io Dec 13 15:05:05.359060 containerd[2682]: time="2024-12-13T15:05:05.359009168Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 15:05:06.130454 kubelet[4377]: I1213 15:05:06.130429 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:06.131203 containerd[2682]: time="2024-12-13T15:05:06.131179190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 15:05:07.087985 kubelet[4377]: E1213 15:05:07.087957 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:08.310698 containerd[2682]: time="2024-12-13T15:05:08.310653911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:08.311085 containerd[2682]: time="2024-12-13T15:05:08.310682831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 15:05:08.312899 containerd[2682]: time="2024-12-13T15:05:08.312877280Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:08.313698 containerd[2682]: time="2024-12-13T15:05:08.313620589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.182409s" Dec 13 15:05:08.313698 containerd[2682]: time="2024-12-13T15:05:08.313650029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 15:05:08.314207 containerd[2682]: time="2024-12-13T15:05:08.314186181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:08.315476 containerd[2682]: time="2024-12-13T15:05:08.315451763Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 15:05:08.321261 containerd[2682]: time="2024-12-13T15:05:08.321233682Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e\"" Dec 13 15:05:08.321567 containerd[2682]: time="2024-12-13T15:05:08.321541598Z" level=info msg="StartContainer for \"4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e\"" Dec 13 15:05:08.353852 systemd[1]: Started cri-containerd-4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e.scope - libcontainer container 4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e. Dec 13 15:05:08.374539 containerd[2682]: time="2024-12-13T15:05:08.374510092Z" level=info msg="StartContainer for \"4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e\" returns successfully" Dec 13 15:05:08.722552 containerd[2682]: time="2024-12-13T15:05:08.722518831Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 15:05:08.724062 systemd[1]: cri-containerd-4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e.scope: Deactivated successfully. Dec 13 15:05:08.795356 kubelet[4377]: I1213 15:05:08.795330 4377 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 15:05:08.809987 kubelet[4377]: I1213 15:05:08.809960 4377 topology_manager.go:215] "Topology Admit Handler" podUID="f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b" podNamespace="kube-system" podName="coredns-76f75df574-l2n54" Dec 13 15:05:08.810262 kubelet[4377]: I1213 15:05:08.810168 4377 topology_manager.go:215] "Topology Admit Handler" podUID="69da9d02-8a41-4e0e-ae25-f567a3c8af61" podNamespace="calico-system" podName="calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:08.810430 kubelet[4377]: I1213 15:05:08.810409 4377 topology_manager.go:215] "Topology Admit Handler" podUID="bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4" podNamespace="kube-system" podName="coredns-76f75df574-f56qp" Dec 13 15:05:08.810661 kubelet[4377]: I1213 15:05:08.810646 4377 topology_manager.go:215] "Topology Admit Handler" podUID="9d479dc0-c58f-4c21-b54a-803d52ba2263" podNamespace="calico-apiserver" podName="calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:08.810911 kubelet[4377]: I1213 15:05:08.810893 4377 topology_manager.go:215] "Topology Admit Handler" podUID="4488c13d-4962-45a7-8c16-59ac8004d33d" podNamespace="calico-apiserver" podName="calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:08.814549 systemd[1]: Created slice kubepods-burstable-podf8ddf960_bc9e_4ff9_9f45_1e5c6b99c17b.slice - libcontainer container kubepods-burstable-podf8ddf960_bc9e_4ff9_9f45_1e5c6b99c17b.slice. 
Dec 13 15:05:08.818819 systemd[1]: Created slice kubepods-besteffort-pod69da9d02_8a41_4e0e_ae25_f567a3c8af61.slice - libcontainer container kubepods-besteffort-pod69da9d02_8a41_4e0e_ae25_f567a3c8af61.slice. Dec 13 15:05:08.822190 systemd[1]: Created slice kubepods-burstable-podbd2476e6_6bf2_4c4f_98bf_e85082f2b5c4.slice - libcontainer container kubepods-burstable-podbd2476e6_6bf2_4c4f_98bf_e85082f2b5c4.slice. Dec 13 15:05:08.825969 systemd[1]: Created slice kubepods-besteffort-pod9d479dc0_c58f_4c21_b54a_803d52ba2263.slice - libcontainer container kubepods-besteffort-pod9d479dc0_c58f_4c21_b54a_803d52ba2263.slice. Dec 13 15:05:08.829455 systemd[1]: Created slice kubepods-besteffort-pod4488c13d_4962_45a7_8c16_59ac8004d33d.slice - libcontainer container kubepods-besteffort-pod4488c13d_4962_45a7_8c16_59ac8004d33d.slice. Dec 13 15:05:08.851002 kubelet[4377]: I1213 15:05:08.850977 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69da9d02-8a41-4e0e-ae25-f567a3c8af61-tigera-ca-bundle\") pod \"calico-kube-controllers-677fb55ddc-c2ww4\" (UID: \"69da9d02-8a41-4e0e-ae25-f567a3c8af61\") " pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:08.851112 kubelet[4377]: I1213 15:05:08.851016 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4488c13d-4962-45a7-8c16-59ac8004d33d-calico-apiserver-certs\") pod \"calico-apiserver-7c9f6c48-9zlz5\" (UID: \"4488c13d-4962-45a7-8c16-59ac8004d33d\") " pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:08.851112 kubelet[4377]: I1213 15:05:08.851056 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9dlh\" (UniqueName: \"kubernetes.io/projected/f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b-kube-api-access-f9dlh\") pod \"coredns-76f75df574-l2n54\" (UID: \"f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b\") " pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:08.851112 kubelet[4377]: I1213 15:05:08.851081 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6m4c\" (UniqueName: \"kubernetes.io/projected/9d479dc0-c58f-4c21-b54a-803d52ba2263-kube-api-access-q6m4c\") pod \"calico-apiserver-7c9f6c48-q86wb\" (UID: \"9d479dc0-c58f-4c21-b54a-803d52ba2263\") " pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:08.851112 kubelet[4377]: I1213 15:05:08.851105 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxlk\" (UniqueName: \"kubernetes.io/projected/4488c13d-4962-45a7-8c16-59ac8004d33d-kube-api-access-prxlk\") pod \"calico-apiserver-7c9f6c48-9zlz5\" (UID: \"4488c13d-4962-45a7-8c16-59ac8004d33d\") " pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:08.851197 kubelet[4377]: I1213 15:05:08.851149 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk55m\" (UniqueName: \"kubernetes.io/projected/69da9d02-8a41-4e0e-ae25-f567a3c8af61-kube-api-access-gk55m\") pod \"calico-kube-controllers-677fb55ddc-c2ww4\" (UID: \"69da9d02-8a41-4e0e-ae25-f567a3c8af61\") " pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:08.851197 kubelet[4377]: I1213 15:05:08.851186 4377 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9d479dc0-c58f-4c21-b54a-803d52ba2263-calico-apiserver-certs\") pod \"calico-apiserver-7c9f6c48-q86wb\" (UID: \"9d479dc0-c58f-4c21-b54a-803d52ba2263\") " pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:08.851244 kubelet[4377]: I1213 15:05:08.851212 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4-config-volume\") pod \"coredns-76f75df574-f56qp\" (UID: \"bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4\") " pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:08.851326 kubelet[4377]: I1213 15:05:08.851306 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b-config-volume\") pod \"coredns-76f75df574-l2n54\" (UID: \"f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b\") " pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:08.851356 kubelet[4377]: I1213 15:05:08.851349 4377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zbft\" (UniqueName: \"kubernetes.io/projected/bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4-kube-api-access-9zbft\") pod \"coredns-76f75df574-f56qp\" (UID: \"bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4\") " pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:08.895727 containerd[2682]: time="2024-12-13T15:05:08.895656233Z" level=info msg="shim disconnected" id=4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e namespace=k8s.io Dec 13 15:05:08.895727 containerd[2682]: time="2024-12-13T15:05:08.895724472Z" level=warning msg="cleaning up after shim disconnected" id=4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e namespace=k8s.io Dec 13 15:05:08.895819 containerd[2682]: time="2024-12-13T15:05:08.895733912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 15:05:09.091662 systemd[1]: Created slice kubepods-besteffort-pod7add470f_d95a_4158_a5d4_494b47592652.slice - libcontainer container kubepods-besteffort-pod7add470f_d95a_4158_a5d4_494b47592652.slice. 
Dec 13 15:05:09.093497 containerd[2682]: time="2024-12-13T15:05:09.093460409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:0,}" Dec 13 15:05:09.116953 containerd[2682]: time="2024-12-13T15:05:09.116922059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:0,}" Dec 13 15:05:09.121441 containerd[2682]: time="2024-12-13T15:05:09.121411080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:0,}" Dec 13 15:05:09.124924 containerd[2682]: time="2024-12-13T15:05:09.124896634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:0,}" Dec 13 15:05:09.128427 containerd[2682]: time="2024-12-13T15:05:09.128401508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:0,}" Dec 13 15:05:09.131964 containerd[2682]: time="2024-12-13T15:05:09.131934621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:0,}" Dec 13 15:05:09.136679 containerd[2682]: time="2024-12-13T15:05:09.136653119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 15:05:09.153397 containerd[2682]: time="2024-12-13T15:05:09.153356018Z" level=error msg="Failed to destroy network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.153977 containerd[2682]: time="2024-12-13T15:05:09.153951850Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.154032 containerd[2682]: time="2024-12-13T15:05:09.154016729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.154255 kubelet[4377]: E1213 15:05:09.154199 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.154255 kubelet[4377]: E1213 15:05:09.154254 4377 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:09.154375 kubelet[4377]: E1213 15:05:09.154275 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:09.154375 kubelet[4377]: E1213 15:05:09.154324 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:09.161464 containerd[2682]: time="2024-12-13T15:05:09.161424872Z" level=error msg="Failed to destroy network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.161884 containerd[2682]: time="2024-12-13T15:05:09.161762587Z" level=error msg="encountered an error cleaning up failed sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.161884 containerd[2682]: time="2024-12-13T15:05:09.161820546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.161994 kubelet[4377]: E1213 15:05:09.161977 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.162028 kubelet[4377]: E1213 
15:05:09.162014 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:09.162051 kubelet[4377]: E1213 15:05:09.162031 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:09.162167 kubelet[4377]: E1213 15:05:09.162072 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2n54" podUID="f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b" Dec 13 15:05:09.165345 containerd[2682]: time="2024-12-13T15:05:09.165312140Z" level=error msg="Failed to destroy network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.165913 containerd[2682]: time="2024-12-13T15:05:09.165645696Z" level=error msg="encountered an error cleaning up failed sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.165913 containerd[2682]: time="2024-12-13T15:05:09.165698815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.166032 kubelet[4377]: E1213 15:05:09.165849 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 13 15:05:09.166032 kubelet[4377]: E1213 15:05:09.165887 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:09.166032 kubelet[4377]: E1213 15:05:09.165907 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:09.166108 kubelet[4377]: E1213 15:05:09.165953 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" podUID="69da9d02-8a41-4e0e-ae25-f567a3c8af61" Dec 13 15:05:09.173645 containerd[2682]: time="2024-12-13T15:05:09.173599951Z" level=error msg="Failed to destroy network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.173995 containerd[2682]: time="2024-12-13T15:05:09.173971426Z" level=error msg="encountered an error cleaning up failed sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.174042 containerd[2682]: time="2024-12-13T15:05:09.174026265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.174193 kubelet[4377]: E1213 15:05:09.174177 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.174229 kubelet[4377]: E1213 15:05:09.174222 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:09.174253 kubelet[4377]: E1213 15:05:09.174241 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:09.174305 kubelet[4377]: E1213 15:05:09.174297 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" podUID="9d479dc0-c58f-4c21-b54a-803d52ba2263" Dec 13 15:05:09.183784 containerd[2682]: time="2024-12-13T15:05:09.183739697Z" level=error msg="Failed to destroy network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184106 containerd[2682]: time="2024-12-13T15:05:09.184085892Z" level=error msg="encountered an error cleaning up failed sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184149 containerd[2682]: time="2024-12-13T15:05:09.184132772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184199 containerd[2682]: time="2024-12-13T15:05:09.184090412Z" 
level=error msg="Failed to destroy network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184316 kubelet[4377]: E1213 15:05:09.184299 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184358 kubelet[4377]: E1213 15:05:09.184347 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:09.184389 kubelet[4377]: E1213 15:05:09.184381 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:09.184445 containerd[2682]: time="2024-12-13T15:05:09.184427048Z" level=error msg="encountered an error cleaning up failed sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184469 kubelet[4377]: E1213 15:05:09.184447 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f56qp" podUID="bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4" Dec 13 15:05:09.184512 containerd[2682]: time="2024-12-13T15:05:09.184468447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
15:05:09.184582 kubelet[4377]: E1213 15:05:09.184572 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:09.184604 kubelet[4377]: E1213 15:05:09.184596 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:09.184628 kubelet[4377]: E1213 15:05:09.184612 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:09.184655 kubelet[4377]: E1213 15:05:09.184647 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" podUID="4488c13d-4962-45a7-8c16-59ac8004d33d" Dec 13 15:05:09.329127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cee2fc6e6d4e6a4927b5b89bce3abc0bfee875b68b705566bb85f29a0d5c44e-rootfs.mount: Deactivated successfully. 
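Every CNI add and delete in the entries above fails on the same precondition: /var/lib/calico/nodename does not exist yet, because the calico/node container that writes it has not started and mounted /var/lib/calico/. The Go program below is a minimal, illustrative reconstruction of that check, not Calico's actual source; the path and the error wording are taken from the log text itself. Until that file exists, every ADD/DEL attempt aborts with the error repeated throughout this section.

// nodenamecheck.go - illustrative sketch of the precondition the calico CNI
// plugin is failing on above: read the node name that calico/node is expected
// to have written, and fail with the same style of message if it is absent.
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the path referenced by the log entries; the real plugin
// derives it from its configuration.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, os.ErrNotExist) {
		// Mirrors the wording in the log: the file only appears once the
		// calico/node container is running and has mounted /var/lib/calico/.
		fmt.Fprintf(os.Stderr,
			"stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			nodenameFile)
		os.Exit(1)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("node name: %s\n", data)
}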
Dec 13 15:05:10.138213 kubelet[4377]: I1213 15:05:10.138192 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467" Dec 13 15:05:10.138660 containerd[2682]: time="2024-12-13T15:05:10.138633164Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:10.138842 containerd[2682]: time="2024-12-13T15:05:10.138803762Z" level=info msg="Ensure that sandbox 4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467 in task-service has been cleanup successfully" Dec 13 15:05:10.138868 kubelet[4377]: I1213 15:05:10.138820 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342" Dec 13 15:05:10.139079 containerd[2682]: time="2024-12-13T15:05:10.139061919Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:10.139105 containerd[2682]: time="2024-12-13T15:05:10.139078319Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:10.139165 containerd[2682]: time="2024-12-13T15:05:10.139143678Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:10.139294 containerd[2682]: time="2024-12-13T15:05:10.139281756Z" level=info msg="Ensure that sandbox 6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342 in task-service has been cleanup successfully" Dec 13 15:05:10.139440 containerd[2682]: time="2024-12-13T15:05:10.139427234Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:10.139463 containerd[2682]: time="2024-12-13T15:05:10.139440594Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:10.139593 containerd[2682]: time="2024-12-13T15:05:10.139572352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:1,}" Dec 13 15:05:10.139647 kubelet[4377]: I1213 15:05:10.139635 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260" Dec 13 15:05:10.139758 containerd[2682]: time="2024-12-13T15:05:10.139742710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:1,}" Dec 13 15:05:10.139976 containerd[2682]: time="2024-12-13T15:05:10.139958028Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:10.140106 containerd[2682]: time="2024-12-13T15:05:10.140093546Z" level=info msg="Ensure that sandbox 9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260 in task-service has been cleanup successfully" Dec 13 15:05:10.140297 kubelet[4377]: I1213 15:05:10.140284 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c" Dec 13 15:05:10.140562 systemd[1]: run-netns-cni\x2d7d433fc5\x2dc68b\x2d4a70\x2d57f6\x2d1f6ae5e6d409.mount: Deactivated successfully. 
Dec 13 15:05:10.140726 containerd[2682]: time="2024-12-13T15:05:10.140598220Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:10.140726 containerd[2682]: time="2024-12-13T15:05:10.140618219Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:10.140726 containerd[2682]: time="2024-12-13T15:05:10.140638139Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:10.140840 containerd[2682]: time="2024-12-13T15:05:10.140823257Z" level=info msg="Ensure that sandbox ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c in task-service has been cleanup successfully" Dec 13 15:05:10.140963 containerd[2682]: time="2024-12-13T15:05:10.140942535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:1,}" Dec 13 15:05:10.140984 containerd[2682]: time="2024-12-13T15:05:10.140971215Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:10.141002 containerd[2682]: time="2024-12-13T15:05:10.140983655Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" returns successfully" Dec 13 15:05:10.141112 kubelet[4377]: I1213 15:05:10.141099 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93" Dec 13 15:05:10.141320 containerd[2682]: time="2024-12-13T15:05:10.141301451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:1,}" Dec 13 15:05:10.141427 containerd[2682]: time="2024-12-13T15:05:10.141410970Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:10.141547 containerd[2682]: time="2024-12-13T15:05:10.141534768Z" level=info msg="Ensure that sandbox d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93 in task-service has been cleanup successfully" Dec 13 15:05:10.141696 containerd[2682]: time="2024-12-13T15:05:10.141680966Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:10.141725 containerd[2682]: time="2024-12-13T15:05:10.141696806Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 15:05:10.141825 kubelet[4377]: I1213 15:05:10.141809 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb" Dec 13 15:05:10.142054 containerd[2682]: time="2024-12-13T15:05:10.142034802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:1,}" Dec 13 15:05:10.142152 containerd[2682]: time="2024-12-13T15:05:10.142130841Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:10.142359 containerd[2682]: time="2024-12-13T15:05:10.142345358Z" level=info msg="Ensure that sandbox 
3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb in task-service has been cleanup successfully" Dec 13 15:05:10.142442 systemd[1]: run-netns-cni\x2d39b7f0a1\x2d920e\x2ddb59\x2da9b5\x2dd945332f1ba7.mount: Deactivated successfully. Dec 13 15:05:10.142497 containerd[2682]: time="2024-12-13T15:05:10.142485036Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:10.142517 containerd[2682]: time="2024-12-13T15:05:10.142497036Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:10.142522 systemd[1]: run-netns-cni\x2defc5aa29\x2d4e38\x2d168d\x2d1c30\x2d2906aebe1c11.mount: Deactivated successfully. Dec 13 15:05:10.142573 systemd[1]: run-netns-cni\x2d16e76eb4\x2d4a11\x2d0635\x2dad5c\x2d27019b11f0e1.mount: Deactivated successfully. Dec 13 15:05:10.142802 containerd[2682]: time="2024-12-13T15:05:10.142781633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:1,}" Dec 13 15:05:10.145577 systemd[1]: run-netns-cni\x2dcb228304\x2df360\x2d1820\x2d1c86\x2d70e034f3e74d.mount: Deactivated successfully. Dec 13 15:05:10.145645 systemd[1]: run-netns-cni\x2d350ab707\x2d541f\x2d89ca\x2df5b3\x2dd75d49f2a1a0.mount: Deactivated successfully. Dec 13 15:05:10.185956 containerd[2682]: time="2024-12-13T15:05:10.185911019Z" level=error msg="Failed to destroy network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.186347 containerd[2682]: time="2024-12-13T15:05:10.186322214Z" level=error msg="encountered an error cleaning up failed sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.186412 containerd[2682]: time="2024-12-13T15:05:10.186391573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.186624 kubelet[4377]: E1213 15:05:10.186599 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.186671 kubelet[4377]: E1213 15:05:10.186658 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:10.186697 kubelet[4377]: E1213 15:05:10.186687 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:10.186753 kubelet[4377]: E1213 15:05:10.186742 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" podUID="4488c13d-4962-45a7-8c16-59ac8004d33d" Dec 13 15:05:10.186861 containerd[2682]: time="2024-12-13T15:05:10.186826968Z" level=error msg="Failed to destroy network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.187149 containerd[2682]: time="2024-12-13T15:05:10.187125844Z" level=error msg="encountered an error cleaning up failed sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.187195 containerd[2682]: time="2024-12-13T15:05:10.187177243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.187320 kubelet[4377]: E1213 15:05:10.187302 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.187357 kubelet[4377]: E1213 15:05:10.187344 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:10.187379 kubelet[4377]: E1213 15:05:10.187363 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:10.187414 kubelet[4377]: E1213 15:05:10.187407 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" podUID="9d479dc0-c58f-4c21-b54a-803d52ba2263" Dec 13 15:05:10.188741 containerd[2682]: time="2024-12-13T15:05:10.188709024Z" level=error msg="Failed to destroy network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.189054 containerd[2682]: time="2024-12-13T15:05:10.189030180Z" level=error msg="encountered an error cleaning up failed sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.189098 containerd[2682]: time="2024-12-13T15:05:10.189081300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.189245 kubelet[4377]: E1213 15:05:10.189228 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.189279 
kubelet[4377]: E1213 15:05:10.189271 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:10.189301 kubelet[4377]: E1213 15:05:10.189291 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:10.189342 kubelet[4377]: E1213 15:05:10.189334 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f56qp" podUID="bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4" Dec 13 15:05:10.189727 containerd[2682]: time="2024-12-13T15:05:10.189700412Z" level=error msg="Failed to destroy network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190010 containerd[2682]: time="2024-12-13T15:05:10.189988728Z" level=error msg="encountered an error cleaning up failed sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190048 containerd[2682]: time="2024-12-13T15:05:10.190032608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190176 kubelet[4377]: E1213 15:05:10.190162 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Dec 13 15:05:10.190203 kubelet[4377]: E1213 15:05:10.190196 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:10.190226 kubelet[4377]: E1213 15:05:10.190215 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:10.190263 kubelet[4377]: E1213 15:05:10.190253 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:10.190298 containerd[2682]: time="2024-12-13T15:05:10.190264765Z" level=error msg="Failed to destroy network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190563 containerd[2682]: time="2024-12-13T15:05:10.190543922Z" level=error msg="encountered an error cleaning up failed sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190596 containerd[2682]: time="2024-12-13T15:05:10.190581721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190702 kubelet[4377]: E1213 15:05:10.190691 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.190761 kubelet[4377]: E1213 15:05:10.190753 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:10.190782 kubelet[4377]: E1213 15:05:10.190773 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:10.190815 kubelet[4377]: E1213 15:05:10.190808 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2n54" podUID="f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b" Dec 13 15:05:10.193912 containerd[2682]: time="2024-12-13T15:05:10.193892640Z" level=error msg="Failed to destroy network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.194182 containerd[2682]: time="2024-12-13T15:05:10.194163797Z" level=error msg="encountered an error cleaning up failed sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.194217 containerd[2682]: time="2024-12-13T15:05:10.194202836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.194327 kubelet[4377]: E1213 15:05:10.194314 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:10.194354 kubelet[4377]: E1213 15:05:10.194347 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:10.194378 kubelet[4377]: E1213 15:05:10.194365 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:10.194413 kubelet[4377]: E1213 15:05:10.194405 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" podUID="69da9d02-8a41-4e0e-ae25-f567a3c8af61" Dec 13 15:05:10.322945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e-shm.mount: Deactivated successfully. Dec 13 15:05:10.323030 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab-shm.mount: Deactivated successfully. Dec 13 15:05:10.323078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5-shm.mount: Deactivated successfully. 
Dec 13 15:05:11.144148 kubelet[4377]: I1213 15:05:11.144121 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab" Dec 13 15:05:11.144579 containerd[2682]: time="2024-12-13T15:05:11.144549745Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:11.144768 containerd[2682]: time="2024-12-13T15:05:11.144729903Z" level=info msg="Ensure that sandbox c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab in task-service has been cleanup successfully" Dec 13 15:05:11.144908 containerd[2682]: time="2024-12-13T15:05:11.144894221Z" level=info msg="TearDown network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" successfully" Dec 13 15:05:11.144938 containerd[2682]: time="2024-12-13T15:05:11.144908341Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" returns successfully" Dec 13 15:05:11.146126 kubelet[4377]: I1213 15:05:11.146108 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e" Dec 13 15:05:11.146199 containerd[2682]: time="2024-12-13T15:05:11.146181246Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:11.146275 containerd[2682]: time="2024-12-13T15:05:11.146264725Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:11.146301 containerd[2682]: time="2024-12-13T15:05:11.146275645Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:11.146472 containerd[2682]: time="2024-12-13T15:05:11.146455923Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:11.146572 systemd[1]: run-netns-cni\x2d79dbec8e\x2d5c57\x2d0333\x2d7d6d\x2da5a580929d55.mount: Deactivated successfully. 
Dec 13 15:05:11.146739 containerd[2682]: time="2024-12-13T15:05:11.146589201Z" level=info msg="Ensure that sandbox fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e in task-service has been cleanup successfully" Dec 13 15:05:11.146739 containerd[2682]: time="2024-12-13T15:05:11.146631881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:2,}" Dec 13 15:05:11.146739 containerd[2682]: time="2024-12-13T15:05:11.146734840Z" level=info msg="TearDown network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" successfully" Dec 13 15:05:11.146798 containerd[2682]: time="2024-12-13T15:05:11.146747320Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" returns successfully" Dec 13 15:05:11.146969 containerd[2682]: time="2024-12-13T15:05:11.146951597Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:11.147029 containerd[2682]: time="2024-12-13T15:05:11.147018636Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:11.147052 containerd[2682]: time="2024-12-13T15:05:11.147028916Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:11.147071 kubelet[4377]: I1213 15:05:11.147016 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f" Dec 13 15:05:11.147399 containerd[2682]: time="2024-12-13T15:05:11.147381392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:2,}" Dec 13 15:05:11.147499 containerd[2682]: time="2024-12-13T15:05:11.147389192Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:11.147630 containerd[2682]: time="2024-12-13T15:05:11.147617509Z" level=info msg="Ensure that sandbox 246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f in task-service has been cleanup successfully" Dec 13 15:05:11.147800 containerd[2682]: time="2024-12-13T15:05:11.147785947Z" level=info msg="TearDown network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" successfully" Dec 13 15:05:11.147820 containerd[2682]: time="2024-12-13T15:05:11.147800147Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" returns successfully" Dec 13 15:05:11.147910 kubelet[4377]: I1213 15:05:11.147898 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae" Dec 13 15:05:11.148027 containerd[2682]: time="2024-12-13T15:05:11.148011865Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:11.148096 containerd[2682]: time="2024-12-13T15:05:11.148086624Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:11.148116 containerd[2682]: time="2024-12-13T15:05:11.148096864Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" 
returns successfully" Dec 13 15:05:11.148327 containerd[2682]: time="2024-12-13T15:05:11.148309741Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:11.148473 containerd[2682]: time="2024-12-13T15:05:11.148459740Z" level=info msg="Ensure that sandbox 27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae in task-service has been cleanup successfully" Dec 13 15:05:11.148623 containerd[2682]: time="2024-12-13T15:05:11.148610338Z" level=info msg="TearDown network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" successfully" Dec 13 15:05:11.148642 containerd[2682]: time="2024-12-13T15:05:11.148623178Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" returns successfully" Dec 13 15:05:11.148643 systemd[1]: run-netns-cni\x2d99ff1727\x2db22f\x2d82f0\x2dd70d\x2d9cd0c3285f97.mount: Deactivated successfully. Dec 13 15:05:11.149088 containerd[2682]: time="2024-12-13T15:05:11.149067933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:2,}" Dec 13 15:05:11.149190 containerd[2682]: time="2024-12-13T15:05:11.149174291Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:11.149255 containerd[2682]: time="2024-12-13T15:05:11.149245331Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:11.149277 containerd[2682]: time="2024-12-13T15:05:11.149255690Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 15:05:11.149922 containerd[2682]: time="2024-12-13T15:05:11.149902243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:2,}" Dec 13 15:05:11.150601 kubelet[4377]: I1213 15:05:11.150582 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2" Dec 13 15:05:11.150668 systemd[1]: run-netns-cni\x2d1b32d056\x2d3d77\x2d0983\x2dc4d9\x2d010ef254ba09.mount: Deactivated successfully. Dec 13 15:05:11.150748 systemd[1]: run-netns-cni\x2dcbaf3538\x2d1fa9\x2d89a6\x2dcbd5\x2da5d396f06ccd.mount: Deactivated successfully. 
Dec 13 15:05:11.151734 containerd[2682]: time="2024-12-13T15:05:11.151704822Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:11.152019 containerd[2682]: time="2024-12-13T15:05:11.151993579Z" level=info msg="Ensure that sandbox 5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2 in task-service has been cleanup successfully" Dec 13 15:05:11.152251 containerd[2682]: time="2024-12-13T15:05:11.152224936Z" level=info msg="TearDown network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" successfully" Dec 13 15:05:11.152270 containerd[2682]: time="2024-12-13T15:05:11.152252576Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" returns successfully" Dec 13 15:05:11.153397 containerd[2682]: time="2024-12-13T15:05:11.153374963Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:11.153499 containerd[2682]: time="2024-12-13T15:05:11.153486321Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:11.153523 containerd[2682]: time="2024-12-13T15:05:11.153499121Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:11.153542 kubelet[4377]: I1213 15:05:11.153484 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5" Dec 13 15:05:11.153876 containerd[2682]: time="2024-12-13T15:05:11.153857157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:2,}" Dec 13 15:05:11.153922 containerd[2682]: time="2024-12-13T15:05:11.153902476Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:11.154081 containerd[2682]: time="2024-12-13T15:05:11.154068435Z" level=info msg="Ensure that sandbox a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5 in task-service has been cleanup successfully" Dec 13 15:05:11.154221 containerd[2682]: time="2024-12-13T15:05:11.154208273Z" level=info msg="TearDown network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" successfully" Dec 13 15:05:11.154241 containerd[2682]: time="2024-12-13T15:05:11.154221313Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" returns successfully" Dec 13 15:05:11.154521 containerd[2682]: time="2024-12-13T15:05:11.154502270Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:11.154593 containerd[2682]: time="2024-12-13T15:05:11.154582349Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:11.154611 containerd[2682]: time="2024-12-13T15:05:11.154593228Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:11.154981 containerd[2682]: time="2024-12-13T15:05:11.154963704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:2,}" Dec 13 
15:05:11.205808 containerd[2682]: time="2024-12-13T15:05:11.205764275Z" level=error msg="Failed to destroy network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206118 containerd[2682]: time="2024-12-13T15:05:11.206082631Z" level=error msg="Failed to destroy network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206147 containerd[2682]: time="2024-12-13T15:05:11.206123271Z" level=error msg="encountered an error cleaning up failed sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206202 containerd[2682]: time="2024-12-13T15:05:11.206181990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206420 kubelet[4377]: E1213 15:05:11.206397 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206463 kubelet[4377]: E1213 15:05:11.206458 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:11.206485 containerd[2682]: time="2024-12-13T15:05:11.206429347Z" level=error msg="encountered an error cleaning up failed sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206485 containerd[2682]: time="2024-12-13T15:05:11.206453947Z" level=error msg="Failed to destroy network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
15:05:11.206527 kubelet[4377]: E1213 15:05:11.206479 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:11.206551 containerd[2682]: time="2024-12-13T15:05:11.206477986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206598 kubelet[4377]: E1213 15:05:11.206534 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f56qp" podUID="bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4" Dec 13 15:05:11.206653 kubelet[4377]: E1213 15:05:11.206620 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.206685 kubelet[4377]: E1213 15:05:11.206663 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:11.206707 kubelet[4377]: E1213 15:05:11.206686 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:11.206741 kubelet[4377]: E1213 15:05:11.206727 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" podUID="69da9d02-8a41-4e0e-ae25-f567a3c8af61" Dec 13 15:05:11.207040 containerd[2682]: time="2024-12-13T15:05:11.207018580Z" level=error msg="encountered an error cleaning up failed sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.207082 containerd[2682]: time="2024-12-13T15:05:11.207062660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.207206 kubelet[4377]: E1213 15:05:11.207193 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.207233 kubelet[4377]: E1213 15:05:11.207225 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:11.207258 kubelet[4377]: E1213 15:05:11.207243 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:11.207294 kubelet[4377]: E1213 15:05:11.207285 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" podUID="9d479dc0-c58f-4c21-b54a-803d52ba2263" Dec 13 15:05:11.207973 containerd[2682]: time="2024-12-13T15:05:11.207945969Z" level=error msg="Failed to destroy network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208077 containerd[2682]: time="2024-12-13T15:05:11.208055208Z" level=error msg="Failed to destroy network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208307 containerd[2682]: time="2024-12-13T15:05:11.208283685Z" level=error msg="encountered an error cleaning up failed sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208349 containerd[2682]: time="2024-12-13T15:05:11.208332605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208409 containerd[2682]: time="2024-12-13T15:05:11.208335885Z" level=error msg="Failed to destroy network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208462 kubelet[4377]: E1213 15:05:11.208446 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208486 containerd[2682]: time="2024-12-13T15:05:11.208444204Z" level=error msg="encountered an error cleaning up failed sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 15:05:11.208506 containerd[2682]: time="2024-12-13T15:05:11.208486843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208526 kubelet[4377]: E1213 15:05:11.208503 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:11.208526 kubelet[4377]: E1213 15:05:11.208522 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:11.208575 kubelet[4377]: E1213 15:05:11.208564 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2n54" podUID="f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b" Dec 13 15:05:11.208610 kubelet[4377]: E1213 15:05:11.208583 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208631 kubelet[4377]: E1213 15:05:11.208610 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:11.208631 kubelet[4377]: E1213 15:05:11.208627 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:11.208671 kubelet[4377]: E1213 15:05:11.208663 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:11.208797 containerd[2682]: time="2024-12-13T15:05:11.208773240Z" level=error msg="encountered an error cleaning up failed sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208840 containerd[2682]: time="2024-12-13T15:05:11.208825439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208937 kubelet[4377]: E1213 15:05:11.208928 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:11.208958 kubelet[4377]: E1213 15:05:11.208953 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:11.208985 kubelet[4377]: E1213 15:05:11.208971 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 
15:05:11.209015 kubelet[4377]: E1213 15:05:11.209005 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" podUID="4488c13d-4962-45a7-8c16-59ac8004d33d" Dec 13 15:05:11.323589 systemd[1]: run-netns-cni\x2d93515dc1\x2d86e2\x2d2222\x2da98a\x2dde644313cdaa.mount: Deactivated successfully. Dec 13 15:05:11.323666 systemd[1]: run-netns-cni\x2d2a0fbce5\x2d1b76\x2d7b35\x2d9d3b\x2d0ba33c3339a4.mount: Deactivated successfully. Dec 13 15:05:12.156047 kubelet[4377]: I1213 15:05:12.156018 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68" Dec 13 15:05:12.156420 containerd[2682]: time="2024-12-13T15:05:12.156390157Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" Dec 13 15:05:12.156585 containerd[2682]: time="2024-12-13T15:05:12.156570715Z" level=info msg="Ensure that sandbox 41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68 in task-service has been cleanup successfully" Dec 13 15:05:12.156752 containerd[2682]: time="2024-12-13T15:05:12.156738633Z" level=info msg="TearDown network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" successfully" Dec 13 15:05:12.156771 containerd[2682]: time="2024-12-13T15:05:12.156752753Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" returns successfully" Dec 13 15:05:12.156971 containerd[2682]: time="2024-12-13T15:05:12.156956831Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:12.157040 containerd[2682]: time="2024-12-13T15:05:12.157029670Z" level=info msg="TearDown network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" successfully" Dec 13 15:05:12.157062 containerd[2682]: time="2024-12-13T15:05:12.157045270Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" returns successfully" Dec 13 15:05:12.157214 kubelet[4377]: I1213 15:05:12.157199 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b" Dec 13 15:05:12.157279 containerd[2682]: time="2024-12-13T15:05:12.157256548Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:12.157356 containerd[2682]: time="2024-12-13T15:05:12.157343747Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:12.157379 containerd[2682]: time="2024-12-13T15:05:12.157356427Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 
15:05:12.157561 containerd[2682]: time="2024-12-13T15:05:12.157547424Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" Dec 13 15:05:12.158474 systemd[1]: run-netns-cni\x2d0cae0fc2\x2d6532\x2dde10\x2d00aa\x2db00a3ff371cd.mount: Deactivated successfully. Dec 13 15:05:12.158630 containerd[2682]: time="2024-12-13T15:05:12.158597293Z" level=info msg="Ensure that sandbox fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b in task-service has been cleanup successfully" Dec 13 15:05:12.158844 containerd[2682]: time="2024-12-13T15:05:12.158828290Z" level=info msg="TearDown network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" successfully" Dec 13 15:05:12.158865 containerd[2682]: time="2024-12-13T15:05:12.158844450Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" returns successfully" Dec 13 15:05:12.158967 containerd[2682]: time="2024-12-13T15:05:12.158950449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:3,}" Dec 13 15:05:12.159580 containerd[2682]: time="2024-12-13T15:05:12.159559483Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:12.159655 containerd[2682]: time="2024-12-13T15:05:12.159643042Z" level=info msg="TearDown network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" successfully" Dec 13 15:05:12.159689 containerd[2682]: time="2024-12-13T15:05:12.159654522Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" returns successfully" Dec 13 15:05:12.160030 containerd[2682]: time="2024-12-13T15:05:12.159885599Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:12.160030 containerd[2682]: time="2024-12-13T15:05:12.159966638Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:12.160030 containerd[2682]: time="2024-12-13T15:05:12.159976878Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:12.160126 kubelet[4377]: I1213 15:05:12.159925 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47" Dec 13 15:05:12.160392 containerd[2682]: time="2024-12-13T15:05:12.160369474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:3,}" Dec 13 15:05:12.160484 containerd[2682]: time="2024-12-13T15:05:12.160371634Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" Dec 13 15:05:12.160526 systemd[1]: run-netns-cni\x2d1c9d42e9\x2d3638\x2db94e\x2dc486\x2d938222477ea0.mount: Deactivated successfully. 
Dec 13 15:05:12.160758 containerd[2682]: time="2024-12-13T15:05:12.160609831Z" level=info msg="Ensure that sandbox bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47 in task-service has been cleanup successfully" Dec 13 15:05:12.160817 containerd[2682]: time="2024-12-13T15:05:12.160785069Z" level=info msg="TearDown network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" successfully" Dec 13 15:05:12.160817 containerd[2682]: time="2024-12-13T15:05:12.160802629Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" returns successfully" Dec 13 15:05:12.160990 containerd[2682]: time="2024-12-13T15:05:12.160969507Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:12.161109 containerd[2682]: time="2024-12-13T15:05:12.161092306Z" level=info msg="TearDown network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" successfully" Dec 13 15:05:12.161156 containerd[2682]: time="2024-12-13T15:05:12.161145065Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" returns successfully" Dec 13 15:05:12.161461 kubelet[4377]: I1213 15:05:12.161445 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b" Dec 13 15:05:12.161501 containerd[2682]: time="2024-12-13T15:05:12.161465862Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:12.161557 containerd[2682]: time="2024-12-13T15:05:12.161544181Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:12.161557 containerd[2682]: time="2024-12-13T15:05:12.161555261Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:12.161819 containerd[2682]: time="2024-12-13T15:05:12.161803258Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" Dec 13 15:05:12.161948 containerd[2682]: time="2024-12-13T15:05:12.161929457Z" level=info msg="Ensure that sandbox 695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b in task-service has been cleanup successfully" Dec 13 15:05:12.161980 containerd[2682]: time="2024-12-13T15:05:12.161935897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:3,}" Dec 13 15:05:12.162120 containerd[2682]: time="2024-12-13T15:05:12.162072655Z" level=info msg="TearDown network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" successfully" Dec 13 15:05:12.162120 containerd[2682]: time="2024-12-13T15:05:12.162088095Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" returns successfully" Dec 13 15:05:12.162326 containerd[2682]: time="2024-12-13T15:05:12.162306573Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:12.162436 containerd[2682]: time="2024-12-13T15:05:12.162422651Z" level=info msg="TearDown network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" successfully" Dec 13 15:05:12.162484 containerd[2682]: 
time="2024-12-13T15:05:12.162472251Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" returns successfully" Dec 13 15:05:12.162503 systemd[1]: run-netns-cni\x2d7524163d\x2d282a\x2dfbb2\x2d2f99\x2d61ede9ed5e35.mount: Deactivated successfully. Dec 13 15:05:12.162755 containerd[2682]: time="2024-12-13T15:05:12.162732368Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:12.162832 kubelet[4377]: I1213 15:05:12.162814 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9" Dec 13 15:05:12.162871 containerd[2682]: time="2024-12-13T15:05:12.162819287Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:12.162871 containerd[2682]: time="2024-12-13T15:05:12.162832087Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:12.163266 containerd[2682]: time="2024-12-13T15:05:12.163170443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:3,}" Dec 13 15:05:12.163266 containerd[2682]: time="2024-12-13T15:05:12.163204483Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" Dec 13 15:05:12.163372 containerd[2682]: time="2024-12-13T15:05:12.163357481Z" level=info msg="Ensure that sandbox 5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9 in task-service has been cleanup successfully" Dec 13 15:05:12.163526 containerd[2682]: time="2024-12-13T15:05:12.163510440Z" level=info msg="TearDown network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" successfully" Dec 13 15:05:12.163526 containerd[2682]: time="2024-12-13T15:05:12.163523679Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" returns successfully" Dec 13 15:05:12.163761 containerd[2682]: time="2024-12-13T15:05:12.163742517Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:12.163845 containerd[2682]: time="2024-12-13T15:05:12.163833156Z" level=info msg="TearDown network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" successfully" Dec 13 15:05:12.163878 containerd[2682]: time="2024-12-13T15:05:12.163844436Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" returns successfully" Dec 13 15:05:12.164320 containerd[2682]: time="2024-12-13T15:05:12.164022634Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:12.164320 containerd[2682]: time="2024-12-13T15:05:12.164107233Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:12.164320 containerd[2682]: time="2024-12-13T15:05:12.164116913Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:12.164400 kubelet[4377]: I1213 15:05:12.164059 4377 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d" Dec 13 15:05:12.164435 containerd[2682]: time="2024-12-13T15:05:12.164414350Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" Dec 13 15:05:12.164462 systemd[1]: run-netns-cni\x2d55427ab6\x2d53c8\x2d4c09\x2d3423\x2d602ea44912b2.mount: Deactivated successfully. Dec 13 15:05:12.164513 containerd[2682]: time="2024-12-13T15:05:12.164422870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:3,}" Dec 13 15:05:12.164654 containerd[2682]: time="2024-12-13T15:05:12.164631667Z" level=info msg="Ensure that sandbox a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d in task-service has been cleanup successfully" Dec 13 15:05:12.164813 containerd[2682]: time="2024-12-13T15:05:12.164798426Z" level=info msg="TearDown network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" successfully" Dec 13 15:05:12.164833 containerd[2682]: time="2024-12-13T15:05:12.164813625Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" returns successfully" Dec 13 15:05:12.165045 containerd[2682]: time="2024-12-13T15:05:12.165030863Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:12.165113 containerd[2682]: time="2024-12-13T15:05:12.165103302Z" level=info msg="TearDown network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" successfully" Dec 13 15:05:12.165133 containerd[2682]: time="2024-12-13T15:05:12.165113542Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" returns successfully" Dec 13 15:05:12.165348 containerd[2682]: time="2024-12-13T15:05:12.165327780Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:12.165424 containerd[2682]: time="2024-12-13T15:05:12.165412659Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:12.165443 containerd[2682]: time="2024-12-13T15:05:12.165424139Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" returns successfully" Dec 13 15:05:12.165764 containerd[2682]: time="2024-12-13T15:05:12.165744815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:3,}" Dec 13 15:05:12.211172 containerd[2682]: time="2024-12-13T15:05:12.211122002Z" level=error msg="Failed to destroy network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211389 containerd[2682]: time="2024-12-13T15:05:12.211359839Z" level=error msg="Failed to destroy network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211435 containerd[2682]: 
time="2024-12-13T15:05:12.211405519Z" level=error msg="Failed to destroy network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211515 containerd[2682]: time="2024-12-13T15:05:12.211493558Z" level=error msg="encountered an error cleaning up failed sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211566 containerd[2682]: time="2024-12-13T15:05:12.211543637Z" level=error msg="Failed to destroy network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211659 containerd[2682]: time="2024-12-13T15:05:12.211553437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211751 containerd[2682]: time="2024-12-13T15:05:12.211690355Z" level=error msg="encountered an error cleaning up failed sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211773 containerd[2682]: time="2024-12-13T15:05:12.211750395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211874 containerd[2682]: time="2024-12-13T15:05:12.211840834Z" level=error msg="encountered an error cleaning up failed sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211919 containerd[2682]: time="2024-12-13T15:05:12.211896633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211942 kubelet[4377]: E1213 15:05:12.211859 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211942 kubelet[4377]: E1213 15:05:12.211903 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.211992 kubelet[4377]: E1213 15:05:12.211949 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:12.211992 kubelet[4377]: E1213 15:05:12.211970 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lh26w" Dec 13 15:05:12.211992 kubelet[4377]: E1213 15:05:12.211915 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:12.212055 containerd[2682]: time="2024-12-13T15:05:12.211864274Z" level=error msg="encountered an error cleaning up failed sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212055 containerd[2682]: time="2024-12-13T15:05:12.211977192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 15:05:12.212114 kubelet[4377]: E1213 15:05:12.212004 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" Dec 13 15:05:12.212114 kubelet[4377]: E1213 15:05:12.212026 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lh26w_calico-system(7add470f-d95a-4158-a5d4-494b47592652)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lh26w" podUID="7add470f-d95a-4158-a5d4-494b47592652" Dec 13 15:05:12.212114 kubelet[4377]: E1213 15:05:12.212048 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-677fb55ddc-c2ww4_calico-system(69da9d02-8a41-4e0e-ae25-f567a3c8af61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" podUID="69da9d02-8a41-4e0e-ae25-f567a3c8af61" Dec 13 15:05:12.212212 containerd[2682]: time="2024-12-13T15:05:12.212017192Z" level=error msg="Failed to destroy network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212241 kubelet[4377]: E1213 15:05:12.212048 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212241 kubelet[4377]: E1213 15:05:12.212083 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:12.212241 kubelet[4377]: E1213 
15:05:12.212100 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-f56qp" Dec 13 15:05:12.212299 kubelet[4377]: E1213 15:05:12.212132 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-f56qp_kube-system(bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-f56qp" podUID="bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4" Dec 13 15:05:12.212299 kubelet[4377]: E1213 15:05:12.212084 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212299 kubelet[4377]: E1213 15:05:12.212166 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:12.212369 kubelet[4377]: E1213 15:05:12.212182 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" Dec 13 15:05:12.212369 kubelet[4377]: E1213 15:05:12.212211 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-q86wb_calico-apiserver(9d479dc0-c58f-4c21-b54a-803d52ba2263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" podUID="9d479dc0-c58f-4c21-b54a-803d52ba2263" Dec 13 
15:05:12.212458 containerd[2682]: time="2024-12-13T15:05:12.212430547Z" level=error msg="encountered an error cleaning up failed sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212508 containerd[2682]: time="2024-12-13T15:05:12.212489387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212646 kubelet[4377]: E1213 15:05:12.212628 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.212679 kubelet[4377]: E1213 15:05:12.212668 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:12.212702 kubelet[4377]: E1213 15:05:12.212693 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" Dec 13 15:05:12.212742 kubelet[4377]: E1213 15:05:12.212732 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c9f6c48-9zlz5_calico-apiserver(4488c13d-4962-45a7-8c16-59ac8004d33d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" podUID="4488c13d-4962-45a7-8c16-59ac8004d33d" Dec 13 15:05:12.213405 containerd[2682]: time="2024-12-13T15:05:12.213378057Z" level=error msg="Failed to destroy network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.213701 containerd[2682]: time="2024-12-13T15:05:12.213678854Z" level=error msg="encountered an error cleaning up failed sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.213729 containerd[2682]: time="2024-12-13T15:05:12.213717293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.213832 kubelet[4377]: E1213 15:05:12.213822 4377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 15:05:12.213863 kubelet[4377]: E1213 15:05:12.213853 4377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:12.213890 kubelet[4377]: E1213 15:05:12.213873 4377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2n54" Dec 13 15:05:12.213924 kubelet[4377]: E1213 15:05:12.213913 4377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l2n54_kube-system(f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2n54" podUID="f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b" Dec 13 15:05:12.319639 containerd[2682]: time="2024-12-13T15:05:12.319558062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 
15:05:12.319701 containerd[2682]: time="2024-12-13T15:05:12.319581942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:12.320304 containerd[2682]: time="2024-12-13T15:05:12.320283294Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:12.321915 containerd[2682]: time="2024-12-13T15:05:12.321887797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:12.322495 containerd[2682]: time="2024-12-13T15:05:12.322470270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.185779952s" Dec 13 15:05:12.322537 containerd[2682]: time="2024-12-13T15:05:12.322494350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 15:05:12.323554 systemd[1]: run-netns-cni\x2df05b6449\x2dfd45\x2da4d5\x2d11fa\x2d35c618e6ba8b.mount: Deactivated successfully. Dec 13 15:05:12.323628 systemd[1]: run-netns-cni\x2d7425f5a8\x2d7c3c\x2dc3d5\x2d91bc\x2de659f2cb2b3a.mount: Deactivated successfully. Dec 13 15:05:12.323680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359560952.mount: Deactivated successfully. Dec 13 15:05:12.327886 containerd[2682]: time="2024-12-13T15:05:12.327864092Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 15:05:12.343320 containerd[2682]: time="2024-12-13T15:05:12.343288644Z" level=info msg="CreateContainer within sandbox \"0ee47ba19a78f893c6b72a8a3c1e28f76a2bce9ffdbbaaf26580e73d8d81e14f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc31d0115d2b19d2f057939ee72df92286aaeca9fc0f89050022aa4f4c1c9740\"" Dec 13 15:05:12.343631 containerd[2682]: time="2024-12-13T15:05:12.343609040Z" level=info msg="StartContainer for \"cc31d0115d2b19d2f057939ee72df92286aaeca9fc0f89050022aa4f4c1c9740\"" Dec 13 15:05:12.376845 systemd[1]: Started cri-containerd-cc31d0115d2b19d2f057939ee72df92286aaeca9fc0f89050022aa4f4c1c9740.scope - libcontainer container cc31d0115d2b19d2f057939ee72df92286aaeca9fc0f89050022aa4f4c1c9740. Dec 13 15:05:12.399282 containerd[2682]: time="2024-12-13T15:05:12.399254355Z" level=info msg="StartContainer for \"cc31d0115d2b19d2f057939ee72df92286aaeca9fc0f89050022aa4f4c1c9740\" returns successfully" Dec 13 15:05:12.517750 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 15:05:12.517804 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
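Every RunPodSandbox failure logged above reports the same underlying condition: the Calico CNI plugin cannot stat /var/lib/calico/nodename because the calico-node container has not started yet. The entries just above show that condition clearing: containerd finishes pulling ghcr.io/flatcar/calico/node:v3.29.1, creates and starts the calico-node container (cc31d0115d2b...), and the WireGuard module loads. A minimal sketch, assuming shell access to the affected node, of how one might confirm the node is past this state; only the path comes from the log, the check itself is illustrative:

# Sketch: confirm that calico-node has written the nodename file the CNI plugin
# was failing to stat in the entries above. Only NODENAME_PATH is taken from the
# log; the rest of this script is an assumption-laden illustration.
import os
import sys

NODENAME_PATH = "/var/lib/calico/nodename"

def main() -> int:
    if not os.path.isfile(NODENAME_PATH):
        # calico-node creates this file once it has registered the node.
        print(f"{NODENAME_PATH} is still missing; calico-node has not initialized yet")
        return 1
    with open(NODENAME_PATH) as f:
        print(f"calico nodename present: {f.read().strip()}")
    return 0

if __name__ == "__main__":
    sys.exit(main())

Once the file exists, the retried sandbox attempts (Attempt:3 and Attempt:4 below) would be expected to stop failing with this particular error.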
Dec 13 15:05:12.827156 kubelet[4377]: I1213 15:05:12.827047 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:13.168326 kubelet[4377]: I1213 15:05:13.168306 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8" Dec 13 15:05:13.168716 containerd[2682]: time="2024-12-13T15:05:13.168692378Z" level=info msg="StopPodSandbox for \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\"" Dec 13 15:05:13.168881 containerd[2682]: time="2024-12-13T15:05:13.168846857Z" level=info msg="Ensure that sandbox 67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8 in task-service has been cleanup successfully" Dec 13 15:05:13.169037 containerd[2682]: time="2024-12-13T15:05:13.169021695Z" level=info msg="TearDown network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" successfully" Dec 13 15:05:13.169057 containerd[2682]: time="2024-12-13T15:05:13.169036215Z" level=info msg="StopPodSandbox for \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" returns successfully" Dec 13 15:05:13.169301 containerd[2682]: time="2024-12-13T15:05:13.169287612Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" Dec 13 15:05:13.169364 containerd[2682]: time="2024-12-13T15:05:13.169352532Z" level=info msg="TearDown network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" successfully" Dec 13 15:05:13.169397 containerd[2682]: time="2024-12-13T15:05:13.169363411Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" returns successfully" Dec 13 15:05:13.169570 containerd[2682]: time="2024-12-13T15:05:13.169548970Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:13.169624 kubelet[4377]: I1213 15:05:13.169611 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb" Dec 13 15:05:13.169647 containerd[2682]: time="2024-12-13T15:05:13.169628849Z" level=info msg="TearDown network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" successfully" Dec 13 15:05:13.169647 containerd[2682]: time="2024-12-13T15:05:13.169640009Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" returns successfully" Dec 13 15:05:13.169826 containerd[2682]: time="2024-12-13T15:05:13.169810127Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:13.169892 containerd[2682]: time="2024-12-13T15:05:13.169881286Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:13.169918 containerd[2682]: time="2024-12-13T15:05:13.169893766Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:13.169986 containerd[2682]: time="2024-12-13T15:05:13.169971125Z" level=info msg="StopPodSandbox for \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\"" Dec 13 15:05:13.170130 containerd[2682]: time="2024-12-13T15:05:13.170118284Z" level=info msg="Ensure that sandbox 7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb in 
task-service has been cleanup successfully" Dec 13 15:05:13.170244 containerd[2682]: time="2024-12-13T15:05:13.170217723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:4,}" Dec 13 15:05:13.170279 containerd[2682]: time="2024-12-13T15:05:13.170265842Z" level=info msg="TearDown network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" successfully" Dec 13 15:05:13.170298 containerd[2682]: time="2024-12-13T15:05:13.170279322Z" level=info msg="StopPodSandbox for \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" returns successfully" Dec 13 15:05:13.170474 containerd[2682]: time="2024-12-13T15:05:13.170459400Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" Dec 13 15:05:13.170553 containerd[2682]: time="2024-12-13T15:05:13.170542719Z" level=info msg="TearDown network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" successfully" Dec 13 15:05:13.170573 containerd[2682]: time="2024-12-13T15:05:13.170553279Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" returns successfully" Dec 13 15:05:13.170791 containerd[2682]: time="2024-12-13T15:05:13.170777277Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:13.170853 containerd[2682]: time="2024-12-13T15:05:13.170843796Z" level=info msg="TearDown network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" successfully" Dec 13 15:05:13.170872 containerd[2682]: time="2024-12-13T15:05:13.170854156Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" returns successfully" Dec 13 15:05:13.170960 kubelet[4377]: I1213 15:05:13.170945 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69" Dec 13 15:05:13.171017 containerd[2682]: time="2024-12-13T15:05:13.171003635Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:13.171078 containerd[2682]: time="2024-12-13T15:05:13.171067234Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:13.171097 containerd[2682]: time="2024-12-13T15:05:13.171078714Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:13.171369 containerd[2682]: time="2024-12-13T15:05:13.171354111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:4,}" Dec 13 15:05:13.171445 containerd[2682]: time="2024-12-13T15:05:13.171363791Z" level=info msg="StopPodSandbox for \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\"" Dec 13 15:05:13.171588 containerd[2682]: time="2024-12-13T15:05:13.171574709Z" level=info msg="Ensure that sandbox d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69 in task-service has been cleanup successfully" Dec 13 15:05:13.171747 containerd[2682]: time="2024-12-13T15:05:13.171729187Z" level=info msg="TearDown network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" 
successfully" Dec 13 15:05:13.171766 containerd[2682]: time="2024-12-13T15:05:13.171747227Z" level=info msg="StopPodSandbox for \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" returns successfully" Dec 13 15:05:13.172037 containerd[2682]: time="2024-12-13T15:05:13.172021384Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" Dec 13 15:05:13.172101 containerd[2682]: time="2024-12-13T15:05:13.172091104Z" level=info msg="TearDown network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" successfully" Dec 13 15:05:13.172120 containerd[2682]: time="2024-12-13T15:05:13.172101344Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" returns successfully" Dec 13 15:05:13.172264 containerd[2682]: time="2024-12-13T15:05:13.172248942Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:13.172330 containerd[2682]: time="2024-12-13T15:05:13.172320381Z" level=info msg="TearDown network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" successfully" Dec 13 15:05:13.172350 containerd[2682]: time="2024-12-13T15:05:13.172330701Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" returns successfully" Dec 13 15:05:13.172462 kubelet[4377]: I1213 15:05:13.172448 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286" Dec 13 15:05:13.172518 containerd[2682]: time="2024-12-13T15:05:13.172507459Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:13.172575 containerd[2682]: time="2024-12-13T15:05:13.172565419Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:13.172598 containerd[2682]: time="2024-12-13T15:05:13.172576499Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:13.172844 containerd[2682]: time="2024-12-13T15:05:13.172830896Z" level=info msg="StopPodSandbox for \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\"" Dec 13 15:05:13.172872 containerd[2682]: time="2024-12-13T15:05:13.172851776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:4,}" Dec 13 15:05:13.172974 containerd[2682]: time="2024-12-13T15:05:13.172961655Z" level=info msg="Ensure that sandbox a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286 in task-service has been cleanup successfully" Dec 13 15:05:13.173106 containerd[2682]: time="2024-12-13T15:05:13.173094213Z" level=info msg="TearDown network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" successfully" Dec 13 15:05:13.173129 containerd[2682]: time="2024-12-13T15:05:13.173106573Z" level=info msg="StopPodSandbox for \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" returns successfully" Dec 13 15:05:13.173304 containerd[2682]: time="2024-12-13T15:05:13.173287091Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" Dec 13 15:05:13.173370 containerd[2682]: 
time="2024-12-13T15:05:13.173360651Z" level=info msg="TearDown network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" successfully" Dec 13 15:05:13.173392 containerd[2682]: time="2024-12-13T15:05:13.173370371Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" returns successfully" Dec 13 15:05:13.173578 containerd[2682]: time="2024-12-13T15:05:13.173563769Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:13.173641 containerd[2682]: time="2024-12-13T15:05:13.173631288Z" level=info msg="TearDown network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" successfully" Dec 13 15:05:13.173689 containerd[2682]: time="2024-12-13T15:05:13.173641048Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" returns successfully" Dec 13 15:05:13.173719 kubelet[4377]: I1213 15:05:13.173696 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9" Dec 13 15:05:13.173892 containerd[2682]: time="2024-12-13T15:05:13.173874485Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:13.173961 containerd[2682]: time="2024-12-13T15:05:13.173951685Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:13.173981 containerd[2682]: time="2024-12-13T15:05:13.173961645Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:13.174782 containerd[2682]: time="2024-12-13T15:05:13.174070483Z" level=info msg="StopPodSandbox for \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\"" Dec 13 15:05:13.175018 containerd[2682]: time="2024-12-13T15:05:13.175002874Z" level=info msg="Ensure that sandbox 170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9 in task-service has been cleanup successfully" Dec 13 15:05:13.175090 containerd[2682]: time="2024-12-13T15:05:13.175059033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:4,}" Dec 13 15:05:13.175242 containerd[2682]: time="2024-12-13T15:05:13.175222232Z" level=info msg="TearDown network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" successfully" Dec 13 15:05:13.175262 containerd[2682]: time="2024-12-13T15:05:13.175243672Z" level=info msg="StopPodSandbox for \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" returns successfully" Dec 13 15:05:13.176330 containerd[2682]: time="2024-12-13T15:05:13.176302981Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" Dec 13 15:05:13.176435 containerd[2682]: time="2024-12-13T15:05:13.176416300Z" level=info msg="TearDown network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" successfully" Dec 13 15:05:13.176435 containerd[2682]: time="2024-12-13T15:05:13.176429419Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" returns successfully" Dec 13 15:05:13.176738 containerd[2682]: time="2024-12-13T15:05:13.176713657Z" level=info msg="StopPodSandbox for 
\"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:13.176791 kubelet[4377]: I1213 15:05:13.176746 4377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c" Dec 13 15:05:13.176817 containerd[2682]: time="2024-12-13T15:05:13.176803296Z" level=info msg="TearDown network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" successfully" Dec 13 15:05:13.176817 containerd[2682]: time="2024-12-13T15:05:13.176813016Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" returns successfully" Dec 13 15:05:13.177153 kubelet[4377]: I1213 15:05:13.177130 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-swnk4" podStartSLOduration=1.593481434 podStartE2EDuration="10.177096933s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:03.739068609 +0000 UTC m=+21.737037018" lastFinishedPulling="2024-12-13 15:05:12.322684068 +0000 UTC m=+30.320652517" observedRunningTime="2024-12-13 15:05:13.176969814 +0000 UTC m=+31.174938183" watchObservedRunningTime="2024-12-13 15:05:13.177096933 +0000 UTC m=+31.175065382" Dec 13 15:05:13.177217 containerd[2682]: time="2024-12-13T15:05:13.177143932Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:13.177280 containerd[2682]: time="2024-12-13T15:05:13.177229011Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:13.177280 containerd[2682]: time="2024-12-13T15:05:13.177243931Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" returns successfully" Dec 13 15:05:13.177280 containerd[2682]: time="2024-12-13T15:05:13.177230051Z" level=info msg="StopPodSandbox for \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\"" Dec 13 15:05:13.177418 containerd[2682]: time="2024-12-13T15:05:13.177403769Z" level=info msg="Ensure that sandbox feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c in task-service has been cleanup successfully" Dec 13 15:05:13.177586 containerd[2682]: time="2024-12-13T15:05:13.177569928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:4,}" Dec 13 15:05:13.177880 containerd[2682]: time="2024-12-13T15:05:13.177861525Z" level=info msg="TearDown network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" successfully" Dec 13 15:05:13.177880 containerd[2682]: time="2024-12-13T15:05:13.177877405Z" level=info msg="StopPodSandbox for \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" returns successfully" Dec 13 15:05:13.178183 containerd[2682]: time="2024-12-13T15:05:13.178159882Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" Dec 13 15:05:13.178260 containerd[2682]: time="2024-12-13T15:05:13.178249881Z" level=info msg="TearDown network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" successfully" Dec 13 15:05:13.178286 containerd[2682]: time="2024-12-13T15:05:13.178260361Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" returns 
successfully" Dec 13 15:05:13.178512 containerd[2682]: time="2024-12-13T15:05:13.178493598Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:13.178584 containerd[2682]: time="2024-12-13T15:05:13.178573158Z" level=info msg="TearDown network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" successfully" Dec 13 15:05:13.178604 containerd[2682]: time="2024-12-13T15:05:13.178585397Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" returns successfully" Dec 13 15:05:13.179115 containerd[2682]: time="2024-12-13T15:05:13.179091392Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:13.179191 containerd[2682]: time="2024-12-13T15:05:13.179179951Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:13.179211 containerd[2682]: time="2024-12-13T15:05:13.179191471Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 15:05:13.179555 containerd[2682]: time="2024-12-13T15:05:13.179533948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:4,}" Dec 13 15:05:13.274379 systemd-networkd[2590]: calif3355432327: Link UP Dec 13 15:05:13.274567 systemd-networkd[2590]: calif3355432327: Gained carrier Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.201 [INFO][7031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.211 [INFO][7031] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0 calico-kube-controllers-677fb55ddc- calico-system 69da9d02-8a41-4e0e-ae25-f567a3c8af61 654 0 2024-12-13 15:05:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:677fb55ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 calico-kube-controllers-677fb55ddc-c2ww4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif3355432327 [] []}} ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.211 [INFO][7031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" 
HandleID="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.250 [INFO][7148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" HandleID="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000910960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-a49a1da819", "pod":"calico-kube-controllers-677fb55ddc-c2ww4", "timestamp":"2024-12-13 15:05:13.241027121 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.250 [INFO][7148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.250 [INFO][7148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.250 [INFO][7148] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.251 [INFO][7148] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.255 [INFO][7148] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.257 [INFO][7148] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.259 [INFO][7148] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.260 [INFO][7148] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.260 [INFO][7148] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.261 [INFO][7148] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.264 [INFO][7148] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7148] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.129/26] block=192.168.53.128/26 handle="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" host="ci-4186.0.0-a-a49a1da819" Dec 13 
15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7148] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.129/26] handle="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 15:05:13.281300 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.129/26] IPv6=[] ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" HandleID="k8s-pod-network.86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281771 containerd[2682]: 2024-12-13 15:05:13.269 [INFO][7031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0", GenerateName:"calico-kube-controllers-677fb55ddc-", Namespace:"calico-system", SelfLink:"", UID:"69da9d02-8a41-4e0e-ae25-f567a3c8af61", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677fb55ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"calico-kube-controllers-677fb55ddc-c2ww4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3355432327", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.281771 containerd[2682]: 2024-12-13 15:05:13.269 [INFO][7031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.129/32] ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281771 containerd[2682]: 2024-12-13 15:05:13.269 [INFO][7031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3355432327 ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281771 
containerd[2682]: 2024-12-13 15:05:13.274 [INFO][7031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.281771 containerd[2682]: 2024-12-13 15:05:13.275 [INFO][7031] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0", GenerateName:"calico-kube-controllers-677fb55ddc-", Namespace:"calico-system", SelfLink:"", UID:"69da9d02-8a41-4e0e-ae25-f567a3c8af61", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"677fb55ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a", Pod:"calico-kube-controllers-677fb55ddc-c2ww4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3355432327", MAC:"ae:98:80:5a:52:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.281771 containerd[2682]: 2024-12-13 15:05:13.280 [INFO][7031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a" Namespace="calico-system" Pod="calico-kube-controllers-677fb55ddc-c2ww4" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--kube--controllers--677fb55ddc--c2ww4-eth0" Dec 13 15:05:13.287744 systemd-networkd[2590]: calief8311527dd: Link UP Dec 13 15:05:13.287909 systemd-networkd[2590]: calief8311527dd: Gained carrier Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.198 [INFO][6990] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.208 [INFO][6990] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0 coredns-76f75df574- kube-system bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4 655 0 2024-12-13 15:04:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 coredns-76f75df574-f56qp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief8311527dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.208 [INFO][6990] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7126] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" HandleID="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7126] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" HandleID="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000a0c760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-a49a1da819", "pod":"coredns-76f75df574-f56qp", "timestamp":"2024-12-13 15:05:13.241023481 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.267 [INFO][7126] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.268 [INFO][7126] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.271 [INFO][7126] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.274 [INFO][7126] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.275 [INFO][7126] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.277 [INFO][7126] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.277 [INFO][7126] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.278 [INFO][7126] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.281 [INFO][7126] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.284 [INFO][7126] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.130/26] block=192.168.53.128/26 handle="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.284 [INFO][7126] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.130/26] handle="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.285 [INFO][7126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
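The ipam/ipam.go entries above trace one auto-assignment pass: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.53.128/26 block, claim the next free address from it, then release the lock. The sketch below mirrors that sequence with hypothetical types and a plain mutex standing in for the host-wide lock; it is illustrative only and is not Calico's libcalico-go IPAM API.

// Illustrative only: a toy allocator mirroring the sequence the ipam/ipam.go
// lines above log (acquire host-wide lock -> confirm block affinity ->
// assign the next free address from the /26 -> release lock). The types here
// are hypothetical, not Calico's actual API.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu       sync.Mutex          // stands in for the "host-wide IPAM lock"
	block    netip.Prefix        // e.g. 192.168.53.128/26, the host's affine block
	assigned map[netip.Addr]bool // addresses already handed out
}

func newBlockAllocator(cidr string) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &blockAllocator{block: p, assigned: map[netip.Addr]bool{}}, nil
}

// autoAssign hands out one address, skipping the block's network address,
// the way the log shows .129, .130, .131, ... being claimed in order.
func (b *blockAllocator) autoAssign() (netip.Addr, error) {
	b.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."

	for a := b.block.Addr().Next(); b.block.Contains(a); a = a.Next() {
		if !b.assigned[a] {
			b.assigned[a] = true
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.block)
}

func main() {
	alloc, err := newBlockAllocator("192.168.53.128/26")
	if err != nil {
		panic(err)
	}
	for i := 0; i < 4; i++ {
		a, _ := alloc.autoAssign()
		fmt.Println(a) // 192.168.53.129 ... 192.168.53.132, matching the log
	}
}

Run against the block from the log, this hands out .129 through .132 in the same order the handles above claim them.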
Dec 13 15:05:13.294212 containerd[2682]: 2024-12-13 15:05:13.285 [INFO][7126] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.130/26] IPv6=[] ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" HandleID="k8s-pod-network.00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.286 [INFO][6990] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4", ResourceVersion:"655", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"coredns-76f75df574-f56qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief8311527dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.286 [INFO][6990] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.130/32] ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.286 [INFO][6990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief8311527dd ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.287 [INFO][6990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" 
WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.288 [INFO][6990] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4", ResourceVersion:"655", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f", Pod:"coredns-76f75df574-f56qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief8311527dd", MAC:"26:be:61:4e:28:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.294610 containerd[2682]: 2024-12-13 15:05:13.293 [INFO][6990] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f" Namespace="kube-system" Pod="coredns-76f75df574-f56qp" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--f56qp-eth0" Dec 13 15:05:13.301245 containerd[2682]: time="2024-12-13T15:05:13.301172707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.301245 containerd[2682]: time="2024-12-13T15:05:13.301236387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.301293 containerd[2682]: time="2024-12-13T15:05:13.301246746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.301337 containerd[2682]: time="2024-12-13T15:05:13.301320386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.305049 systemd-networkd[2590]: calib383fc38d42: Link UP Dec 13 15:05:13.305221 systemd-networkd[2590]: calib383fc38d42: Gained carrier Dec 13 15:05:13.307659 containerd[2682]: time="2024-12-13T15:05:13.307556442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.307659 containerd[2682]: time="2024-12-13T15:05:13.307610202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.307659 containerd[2682]: time="2024-12-13T15:05:13.307621521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.307800 containerd[2682]: time="2024-12-13T15:05:13.307703241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.192 [INFO][6971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.207 [INFO][6971] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0 calico-apiserver-7c9f6c48- calico-apiserver 4488c13d-4962-45a7-8c16-59ac8004d33d 657 0 2024-12-13 15:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9f6c48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 calico-apiserver-7c9f6c48-9zlz5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib383fc38d42 [] []}} ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.207 [INFO][6971] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" HandleID="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" HandleID="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004bbeb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-a49a1da819", 
"pod":"calico-apiserver-7c9f6c48-9zlz5", "timestamp":"2024-12-13 15:05:13.241020201 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.285 [INFO][7124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.285 [INFO][7124] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.286 [INFO][7124] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.289 [INFO][7124] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.292 [INFO][7124] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.293 [INFO][7124] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.295 [INFO][7124] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.295 [INFO][7124] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.296 [INFO][7124] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33 Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.299 [INFO][7124] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7124] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.131/26] block=192.168.53.128/26 handle="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7124] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.131/26] handle="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 15:05:13.311957 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.131/26] IPv6=[] ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" HandleID="k8s-pod-network.55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.303 [INFO][6971] cni-plugin/k8s.go 386: Populated endpoint ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0", GenerateName:"calico-apiserver-7c9f6c48-", Namespace:"calico-apiserver", SelfLink:"", UID:"4488c13d-4962-45a7-8c16-59ac8004d33d", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9f6c48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"calico-apiserver-7c9f6c48-9zlz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib383fc38d42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.303 [INFO][6971] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.131/32] ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.304 [INFO][6971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib383fc38d42 ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.305 [INFO][6971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.305 [INFO][6971] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0", GenerateName:"calico-apiserver-7c9f6c48-", Namespace:"calico-apiserver", SelfLink:"", UID:"4488c13d-4962-45a7-8c16-59ac8004d33d", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9f6c48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33", Pod:"calico-apiserver-7c9f6c48-9zlz5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib383fc38d42", MAC:"e6:3c:e1:20:64:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.312368 containerd[2682]: 2024-12-13 15:05:13.310 [INFO][6971] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-9zlz5" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--9zlz5-eth0" Dec 13 15:05:13.322815 systemd[1]: Started cri-containerd-86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a.scope - libcontainer container 86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a. Dec 13 15:05:13.325711 containerd[2682]: time="2024-12-13T15:05:13.325459940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.325761 containerd[2682]: time="2024-12-13T15:05:13.325713777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.325761 containerd[2682]: time="2024-12-13T15:05:13.325726417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.325819 containerd[2682]: time="2024-12-13T15:05:13.325803616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.332217 systemd[1]: run-netns-cni\x2dc34f07e9\x2d37b5\x2dcf17\x2d0db1\x2de203221b5332.mount: Deactivated successfully. Dec 13 15:05:13.332293 systemd[1]: run-netns-cni\x2dc608d572\x2def3a\x2d7b4d\x2d5841\x2dd31672205f93.mount: Deactivated successfully. 
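The run-netns-cni\x2d... units above are systemd's escaped names for bind mounts under /run/netns/cni-<id>, so "Deactivated successfully" records each stopped sandbox's network namespace being unmounted after teardown. A read-only way to list whichever CNI namespaces remain mounted, assuming the conventional /run/netns location:

// Lists pod network namespaces still mounted under /run/netns. Purely a
// diagnostic sketch; the directory location is the common convention and an
// assumption here, not something stated by the log itself.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const netnsDir = "/run/netns" // conventional location; an assumption

	entries, err := os.ReadDir(netnsDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read", netnsDir+":", err)
		return
	}
	for _, e := range entries {
		// CNI-created namespaces on this kind of node show up as cni-<uuid>,
		// matching the escaped run-netns-cni\x2d... mount unit names above.
		if strings.HasPrefix(e.Name(), "cni-") {
			fmt.Println(filepath.Join(netnsDir, e.Name()))
		}
	}
}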
Dec 13 15:05:13.332350 systemd[1]: run-netns-cni\x2dd7b04ec1\x2df551\x2ddf64\x2d9693\x2d9b283e58034e.mount: Deactivated successfully. Dec 13 15:05:13.332396 systemd[1]: run-netns-cni\x2d81ea56d6\x2d348b\x2d2767\x2d88dd\x2da75670191fbc.mount: Deactivated successfully. Dec 13 15:05:13.332441 systemd[1]: run-netns-cni\x2d38d8ebd0\x2de7d5\x2d4d3e\x2d7e13\x2dc263926aa099.mount: Deactivated successfully. Dec 13 15:05:13.332494 systemd[1]: run-netns-cni\x2d3de82e38\x2d1390\x2deec3\x2d1d10\x2ddbe60bef6f47.mount: Deactivated successfully. Dec 13 15:05:13.333042 systemd-networkd[2590]: cali3d0702ae608: Link UP Dec 13 15:05:13.333198 systemd-networkd[2590]: cali3d0702ae608: Gained carrier Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.198 [INFO][6987] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.208 [INFO][6987] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0 calico-apiserver-7c9f6c48- calico-apiserver 9d479dc0-c58f-4c21-b54a-803d52ba2263 656 0 2024-12-13 15:05:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9f6c48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 calico-apiserver-7c9f6c48-q86wb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3d0702ae608 [] []}} ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.208 [INFO][6987] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" HandleID="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" HandleID="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.0.0-a-a49a1da819", "pod":"calico-apiserver-7c9f6c48-q86wb", "timestamp":"2024-12-13 15:05:13.241022441 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.340855 
containerd[2682]: 2024-12-13 15:05:13.254 [INFO][7123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.302 [INFO][7123] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.304 [INFO][7123] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.307 [INFO][7123] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.310 [INFO][7123] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.311 [INFO][7123] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.313 [INFO][7123] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.313 [INFO][7123] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.314 [INFO][7123] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.326 [INFO][7123] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7123] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.132/26] block=192.168.53.128/26 handle="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7123] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.132/26] handle="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
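The systemd-networkd "Link UP" / "Gained carrier" messages and the plugin's "Setting the host side veth name to cali..." lines describe the host end of each pod's veth pair. A minimal check that one of those links (name taken from the log) exists and is up; this is a diagnostic sketch run on the node, not part of the CNI plugin:

// Confirms a host-side Calico veth from the log is present and up.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Host-side veth name logged for calico-apiserver-7c9f6c48-q86wb.
	ifc, err := net.InterfaceByName("cali3d0702ae608")
	if err != nil {
		fmt.Println("interface not found (expected anywhere but this node):", err)
		return
	}
	up := ifc.Flags&net.FlagUp != 0
	fmt.Printf("%s: index=%d mac=%s up=%v mtu=%d\n",
		ifc.Name, ifc.Index, ifc.HardwareAddr, up, ifc.MTU)
}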
Dec 13 15:05:13.340855 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.132/26] IPv6=[] ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" HandleID="k8s-pod-network.7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Workload="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.331 [INFO][6987] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0", GenerateName:"calico-apiserver-7c9f6c48-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d479dc0-c58f-4c21-b54a-803d52ba2263", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9f6c48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"calico-apiserver-7c9f6c48-q86wb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d0702ae608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.331 [INFO][6987] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.132/32] ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.331 [INFO][6987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d0702ae608 ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.333 [INFO][6987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.334 [INFO][6987] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0", GenerateName:"calico-apiserver-7c9f6c48-", Namespace:"calico-apiserver", SelfLink:"", UID:"9d479dc0-c58f-4c21-b54a-803d52ba2263", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9f6c48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c", Pod:"calico-apiserver-7c9f6c48-q86wb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3d0702ae608", MAC:"d6:d6:8b:ba:91:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.341266 containerd[2682]: 2024-12-13 15:05:13.339 [INFO][6987] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c" Namespace="calico-apiserver" Pod="calico-apiserver-7c9f6c48-q86wb" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-calico--apiserver--7c9f6c48--q86wb-eth0" Dec 13 15:05:13.343041 systemd[1]: Started cri-containerd-00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f.scope - libcontainer container 00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f. Dec 13 15:05:13.344136 systemd[1]: Started cri-containerd-55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33.scope - libcontainer container 55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33. Dec 13 15:05:13.351287 systemd-networkd[2590]: cali78442a4ddca: Link UP Dec 13 15:05:13.351469 systemd-networkd[2590]: cali78442a4ddca: Gained carrier Dec 13 15:05:13.356181 containerd[2682]: time="2024-12-13T15:05:13.356122627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.356181 containerd[2682]: time="2024-12-13T15:05:13.356174386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.356241 containerd[2682]: time="2024-12-13T15:05:13.356186546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.356283 containerd[2682]: time="2024-12-13T15:05:13.356259025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.357584 containerd[2682]: time="2024-12-13T15:05:13.357555172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-677fb55ddc-c2ww4,Uid:69da9d02-8a41-4e0e-ae25-f567a3c8af61,Namespace:calico-system,Attempt:4,} returns sandbox id \"86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a\"" Dec 13 15:05:13.359780 containerd[2682]: time="2024-12-13T15:05:13.359760830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.192 [INFO][6965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.207 [INFO][6965] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0 csi-node-driver- calico-system 7add470f-d95a-4158-a5d4-494b47592652 595 0 2024-12-13 15:05:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 csi-node-driver-lh26w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali78442a4ddca [] []}} ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.207 [INFO][6965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" HandleID="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Workload="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.255 [INFO][7125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" HandleID="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Workload="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032c970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.0.0-a-a49a1da819", "pod":"csi-node-driver-lh26w", "timestamp":"2024-12-13 15:05:13.241021441 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.360519 
containerd[2682]: 2024-12-13 15:05:13.255 [INFO][7125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.330 [INFO][7125] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.332 [INFO][7125] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.335 [INFO][7125] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.338 [INFO][7125] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.339 [INFO][7125] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.341 [INFO][7125] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.341 [INFO][7125] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.342 [INFO][7125] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099 Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.344 [INFO][7125] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7125] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.133/26] block=192.168.53.128/26 handle="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7125] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.133/26] handle="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 15:05:13.360519 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.133/26] IPv6=[] ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" HandleID="k8s-pod-network.9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Workload="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.349 [INFO][6965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7add470f-d95a-4158-a5d4-494b47592652", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"csi-node-driver-lh26w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78442a4ddca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.350 [INFO][6965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.133/32] ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.350 [INFO][6965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78442a4ddca ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.351 [INFO][6965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.351 [INFO][6965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7add470f-d95a-4158-a5d4-494b47592652", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 5, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099", Pod:"csi-node-driver-lh26w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78442a4ddca", MAC:"c2:cf:bd:cb:b6:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.360925 containerd[2682]: 2024-12-13 15:05:13.357 [INFO][6965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099" Namespace="calico-system" Pod="csi-node-driver-lh26w" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-csi--node--driver--lh26w-eth0" Dec 13 15:05:13.370647 systemd-networkd[2590]: cali427c53c2d1d: Link UP Dec 13 15:05:13.371300 systemd-networkd[2590]: cali427c53c2d1d: Gained carrier Dec 13 15:05:13.374415 containerd[2682]: time="2024-12-13T15:05:13.374062164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.374463 containerd[2682]: time="2024-12-13T15:05:13.374414440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.374463 containerd[2682]: time="2024-12-13T15:05:13.374428160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.374516 containerd[2682]: time="2024-12-13T15:05:13.374500279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.199 [INFO][7002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.209 [INFO][7002] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0 coredns-76f75df574- kube-system f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b 651 0 2024-12-13 15:04:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.0.0-a-a49a1da819 coredns-76f75df574-l2n54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali427c53c2d1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.209 [INFO][7002] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.241 [INFO][7127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" HandleID="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.255 [INFO][7127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" HandleID="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059b4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.0.0-a-a49a1da819", "pod":"coredns-76f75df574-l2n54", "timestamp":"2024-12-13 15:05:13.241026641 +0000 UTC"}, Hostname:"ci-4186.0.0-a-a49a1da819", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.255 [INFO][7127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.348 [INFO][7127] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.0.0-a-a49a1da819' Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.350 [INFO][7127] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.353 [INFO][7127] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.356 [INFO][7127] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.358 [INFO][7127] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.359 [INFO][7127] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.360 [INFO][7127] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.361 [INFO][7127] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.363 [INFO][7127] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.367 [INFO][7127] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.134/26] block=192.168.53.128/26 handle="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.367 [INFO][7127] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.134/26] handle="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" host="ci-4186.0.0-a-a49a1da819" Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.367 [INFO][7127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 15:05:13.378877 containerd[2682]: 2024-12-13 15:05:13.367 [INFO][7127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.134/26] IPv6=[] ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" HandleID="k8s-pod-network.16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Workload="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.369 [INFO][7002] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"", Pod:"coredns-76f75df574-l2n54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali427c53c2d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.369 [INFO][7002] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.134/32] ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.369 [INFO][7002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali427c53c2d1d ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.370 [INFO][7002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" 
WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.371 [INFO][7002] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b", ResourceVersion:"651", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 15, 4, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.0.0-a-a49a1da819", ContainerID:"16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae", Pod:"coredns-76f75df574-l2n54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali427c53c2d1d", MAC:"32:da:bc:87:76:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 15:05:13.379299 containerd[2682]: 2024-12-13 15:05:13.377 [INFO][7002] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae" Namespace="kube-system" Pod="coredns-76f75df574-l2n54" WorkloadEndpoint="ci--4186.0.0--a--a49a1da819-k8s-coredns--76f75df574--l2n54-eth0" Dec 13 15:05:13.388835 systemd[1]: Started cri-containerd-7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c.scope - libcontainer container 7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c. 
Dec 13 15:05:13.389296 containerd[2682]: time="2024-12-13T15:05:13.389268329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-9zlz5,Uid:4488c13d-4962-45a7-8c16-59ac8004d33d,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33\"" Dec 13 15:05:13.389342 containerd[2682]: time="2024-12-13T15:05:13.389280369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-f56qp,Uid:bd2476e6-6bf2-4c4f-98bf-e85082f2b5c4,Namespace:kube-system,Attempt:4,} returns sandbox id \"00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f\"" Dec 13 15:05:13.391358 containerd[2682]: time="2024-12-13T15:05:13.391332388Z" level=info msg="CreateContainer within sandbox \"00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 15:05:13.392400 containerd[2682]: time="2024-12-13T15:05:13.392328138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:05:13.392449 containerd[2682]: time="2024-12-13T15:05:13.392395017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:05:13.392449 containerd[2682]: time="2024-12-13T15:05:13.392411857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.392523 containerd[2682]: time="2024-12-13T15:05:13.392495536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:05:13.395342 systemd[1]: Started cri-containerd-9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099.scope - libcontainer container 9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099. Dec 13 15:05:13.397629 containerd[2682]: time="2024-12-13T15:05:13.397562564Z" level=info msg="CreateContainer within sandbox \"00f49841025fe78c1121e4ce7a2396162c6598c2bd03be3d918c1a4bd5c1592f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7388222a06c2a7573084fa5cb5b3cf8b1b74129482232af1bd9920b36640b7fa\"" Dec 13 15:05:13.399819 containerd[2682]: time="2024-12-13T15:05:13.399793421Z" level=info msg="StartContainer for \"7388222a06c2a7573084fa5cb5b3cf8b1b74129482232af1bd9920b36640b7fa\"" Dec 13 15:05:13.402836 systemd[1]: Started cri-containerd-16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae.scope - libcontainer container 16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae. 
Dec 13 15:05:13.413107 containerd[2682]: time="2024-12-13T15:05:13.413079766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lh26w,Uid:7add470f-d95a-4158-a5d4-494b47592652,Namespace:calico-system,Attempt:4,} returns sandbox id \"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099\"" Dec 13 15:05:13.413161 containerd[2682]: time="2024-12-13T15:05:13.413083246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9f6c48-q86wb,Uid:9d479dc0-c58f-4c21-b54a-803d52ba2263,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c\"" Dec 13 15:05:13.414605 systemd[1]: Started cri-containerd-7388222a06c2a7573084fa5cb5b3cf8b1b74129482232af1bd9920b36640b7fa.scope - libcontainer container 7388222a06c2a7573084fa5cb5b3cf8b1b74129482232af1bd9920b36640b7fa. Dec 13 15:05:13.426458 containerd[2682]: time="2024-12-13T15:05:13.426386230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2n54,Uid:f8ddf960-bc9e-4ff9-9f45-1e5c6b99c17b,Namespace:kube-system,Attempt:4,} returns sandbox id \"16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae\"" Dec 13 15:05:13.428530 containerd[2682]: time="2024-12-13T15:05:13.428507809Z" level=info msg="CreateContainer within sandbox \"16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 15:05:13.431769 containerd[2682]: time="2024-12-13T15:05:13.431740456Z" level=info msg="StartContainer for \"7388222a06c2a7573084fa5cb5b3cf8b1b74129482232af1bd9920b36640b7fa\" returns successfully" Dec 13 15:05:13.434091 containerd[2682]: time="2024-12-13T15:05:13.434064872Z" level=info msg="CreateContainer within sandbox \"16ca2383421a3d0c1518dd86310c1f978318269c71282bd769e7452a567ca9ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"116ae13353d863105bae0b56f3f30d009723a555fed170d28fab8662f9d360b5\"" Dec 13 15:05:13.434429 containerd[2682]: time="2024-12-13T15:05:13.434409908Z" level=info msg="StartContainer for \"116ae13353d863105bae0b56f3f30d009723a555fed170d28fab8662f9d360b5\"" Dec 13 15:05:13.462808 systemd[1]: Started cri-containerd-116ae13353d863105bae0b56f3f30d009723a555fed170d28fab8662f9d360b5.scope - libcontainer container 116ae13353d863105bae0b56f3f30d009723a555fed170d28fab8662f9d360b5. 
Dec 13 15:05:13.480919 containerd[2682]: time="2024-12-13T15:05:13.480884954Z" level=info msg="StartContainer for \"116ae13353d863105bae0b56f3f30d009723a555fed170d28fab8662f9d360b5\" returns successfully" Dec 13 15:05:13.783694 kernel: bpftool[7827]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 15:05:13.939184 systemd-networkd[2590]: vxlan.calico: Link UP Dec 13 15:05:13.939189 systemd-networkd[2590]: vxlan.calico: Gained carrier Dec 13 15:05:14.032223 containerd[2682]: time="2024-12-13T15:05:14.032173872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 15:05:14.032223 containerd[2682]: time="2024-12-13T15:05:14.032205832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.032935 containerd[2682]: time="2024-12-13T15:05:14.032909745Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.034630 containerd[2682]: time="2024-12-13T15:05:14.034601089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.035268 containerd[2682]: time="2024-12-13T15:05:14.035243243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 675.460173ms" Dec 13 15:05:14.035297 containerd[2682]: time="2024-12-13T15:05:14.035271403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 15:05:14.035875 containerd[2682]: time="2024-12-13T15:05:14.035853517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 15:05:14.040304 containerd[2682]: time="2024-12-13T15:05:14.040278715Z" level=info msg="CreateContainer within sandbox \"86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 15:05:14.047197 containerd[2682]: time="2024-12-13T15:05:14.047164489Z" level=info msg="CreateContainer within sandbox \"86874a0c3df258ff4bf9d07bb4a1cf60938b4722bfbc5f892cb4706ddfc1687a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ef34c3a440f425c9f83153c9a7be5b9fe40f44caa2a191fa0b4b859c798ed9ec\"" Dec 13 15:05:14.047546 containerd[2682]: time="2024-12-13T15:05:14.047523886Z" level=info msg="StartContainer for \"ef34c3a440f425c9f83153c9a7be5b9fe40f44caa2a191fa0b4b859c798ed9ec\"" Dec 13 15:05:14.074789 systemd[1]: Started cri-containerd-ef34c3a440f425c9f83153c9a7be5b9fe40f44caa2a191fa0b4b859c798ed9ec.scope - libcontainer container ef34c3a440f425c9f83153c9a7be5b9fe40f44caa2a191fa0b4b859c798ed9ec. 
Dec 13 15:05:14.099561 containerd[2682]: time="2024-12-13T15:05:14.099527428Z" level=info msg="StartContainer for \"ef34c3a440f425c9f83153c9a7be5b9fe40f44caa2a191fa0b4b859c798ed9ec\" returns successfully" Dec 13 15:05:14.187535 kubelet[4377]: I1213 15:05:14.187508 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-f56qp" podStartSLOduration=18.187468588 podStartE2EDuration="18.187468588s" podCreationTimestamp="2024-12-13 15:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:05:14.18725235 +0000 UTC m=+32.185220759" watchObservedRunningTime="2024-12-13 15:05:14.187468588 +0000 UTC m=+32.185436957" Dec 13 15:05:14.188900 kubelet[4377]: I1213 15:05:14.188881 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:14.194234 kubelet[4377]: I1213 15:05:14.194163 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-677fb55ddc-c2ww4" podStartSLOduration=10.518130356 podStartE2EDuration="11.194128644s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:13.359531232 +0000 UTC m=+31.357499641" lastFinishedPulling="2024-12-13 15:05:14.03552952 +0000 UTC m=+32.033497929" observedRunningTime="2024-12-13 15:05:14.193831567 +0000 UTC m=+32.191799976" watchObservedRunningTime="2024-12-13 15:05:14.194128644 +0000 UTC m=+32.192097013" Dec 13 15:05:14.541893 systemd-networkd[2590]: cali427c53c2d1d: Gained IPv6LL Dec 13 15:05:14.605758 systemd-networkd[2590]: cali78442a4ddca: Gained IPv6LL Dec 13 15:05:14.939758 containerd[2682]: time="2024-12-13T15:05:14.939689716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 15:05:14.939758 containerd[2682]: time="2024-12-13T15:05:14.939704955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.940453 containerd[2682]: time="2024-12-13T15:05:14.940434548Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.942228 containerd[2682]: time="2024-12-13T15:05:14.942203091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:14.942944 containerd[2682]: time="2024-12-13T15:05:14.942928165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 907.047488ms" Dec 13 15:05:14.942992 containerd[2682]: time="2024-12-13T15:05:14.942948004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 15:05:14.943403 containerd[2682]: time="2024-12-13T15:05:14.943385200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 15:05:14.944379 containerd[2682]: 
time="2024-12-13T15:05:14.944361311Z" level=info msg="CreateContainer within sandbox \"55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 15:05:14.949261 containerd[2682]: time="2024-12-13T15:05:14.949233464Z" level=info msg="CreateContainer within sandbox \"55af3a8e20d793509611456dbbb0ff127a650b833b86dbae5c69c0f9f74e0f33\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5985df3b5eeb7b3d05c55447411b3e2c26fa0a70bdb57b2b224d1ed94c6606c\"" Dec 13 15:05:14.949559 containerd[2682]: time="2024-12-13T15:05:14.949527581Z" level=info msg="StartContainer for \"d5985df3b5eeb7b3d05c55447411b3e2c26fa0a70bdb57b2b224d1ed94c6606c\"" Dec 13 15:05:14.980793 systemd[1]: Started cri-containerd-d5985df3b5eeb7b3d05c55447411b3e2c26fa0a70bdb57b2b224d1ed94c6606c.scope - libcontainer container d5985df3b5eeb7b3d05c55447411b3e2c26fa0a70bdb57b2b224d1ed94c6606c. Dec 13 15:05:15.005611 containerd[2682]: time="2024-12-13T15:05:15.005580888Z" level=info msg="StartContainer for \"d5985df3b5eeb7b3d05c55447411b3e2c26fa0a70bdb57b2b224d1ed94c6606c\" returns successfully" Dec 13 15:05:15.192373 kubelet[4377]: I1213 15:05:15.192292 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:15.200243 kubelet[4377]: I1213 15:05:15.200221 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l2n54" podStartSLOduration=19.200184504 podStartE2EDuration="19.200184504s" podCreationTimestamp="2024-12-13 15:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:05:14.213766816 +0000 UTC m=+32.211735225" watchObservedRunningTime="2024-12-13 15:05:15.200184504 +0000 UTC m=+33.198152913" Dec 13 15:05:15.200322 kubelet[4377]: I1213 15:05:15.200316 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c9f6c48-9zlz5" podStartSLOduration=10.647290581 podStartE2EDuration="12.200299703s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:13.39017852 +0000 UTC m=+31.388146929" lastFinishedPulling="2024-12-13 15:05:14.943187642 +0000 UTC m=+32.941156051" observedRunningTime="2024-12-13 15:05:15.200036425 +0000 UTC m=+33.198004834" watchObservedRunningTime="2024-12-13 15:05:15.200299703 +0000 UTC m=+33.198268112" Dec 13 15:05:15.245774 systemd-networkd[2590]: calib383fc38d42: Gained IPv6LL Dec 13 15:05:15.309747 systemd-networkd[2590]: cali3d0702ae608: Gained IPv6LL Dec 13 15:05:15.309999 systemd-networkd[2590]: calif3355432327: Gained IPv6LL Dec 13 15:05:15.373739 systemd-networkd[2590]: calief8311527dd: Gained IPv6LL Dec 13 15:05:15.559628 containerd[2682]: time="2024-12-13T15:05:15.559514643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:15.559756 containerd[2682]: time="2024-12-13T15:05:15.559547243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 15:05:15.560240 containerd[2682]: time="2024-12-13T15:05:15.560208117Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:15.562026 containerd[2682]: time="2024-12-13T15:05:15.562002101Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:15.562786 containerd[2682]: time="2024-12-13T15:05:15.562756654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 619.341454ms" Dec 13 15:05:15.562844 containerd[2682]: time="2024-12-13T15:05:15.562785654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 15:05:15.563290 containerd[2682]: time="2024-12-13T15:05:15.563272130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 15:05:15.564193 containerd[2682]: time="2024-12-13T15:05:15.564172362Z" level=info msg="CreateContainer within sandbox \"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 15:05:15.570621 containerd[2682]: time="2024-12-13T15:05:15.570592784Z" level=info msg="CreateContainer within sandbox \"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ae8ebcab4acc158c4e249daccaeeece19ea31942438628a6f5dae251c0347b51\"" Dec 13 15:05:15.570966 containerd[2682]: time="2024-12-13T15:05:15.570942781Z" level=info msg="StartContainer for \"ae8ebcab4acc158c4e249daccaeeece19ea31942438628a6f5dae251c0347b51\"" Dec 13 15:05:15.597807 systemd[1]: Started cri-containerd-ae8ebcab4acc158c4e249daccaeeece19ea31942438628a6f5dae251c0347b51.scope - libcontainer container ae8ebcab4acc158c4e249daccaeeece19ea31942438628a6f5dae251c0347b51. 
Dec 13 15:05:15.618326 containerd[2682]: time="2024-12-13T15:05:15.618292076Z" level=info msg="StartContainer for \"ae8ebcab4acc158c4e249daccaeeece19ea31942438628a6f5dae251c0347b51\" returns successfully" Dec 13 15:05:15.637886 containerd[2682]: time="2024-12-13T15:05:15.637855901Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:15.637949 containerd[2682]: time="2024-12-13T15:05:15.637912741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 15:05:15.640290 containerd[2682]: time="2024-12-13T15:05:15.640266880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 76.966591ms" Dec 13 15:05:15.640319 containerd[2682]: time="2024-12-13T15:05:15.640293359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 15:05:15.640695 containerd[2682]: time="2024-12-13T15:05:15.640669356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 15:05:15.641676 containerd[2682]: time="2024-12-13T15:05:15.641655027Z" level=info msg="CreateContainer within sandbox \"7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 15:05:15.646310 containerd[2682]: time="2024-12-13T15:05:15.646284746Z" level=info msg="CreateContainer within sandbox \"7345f385585547a63c18dad0454e503c61bcfa4a26ff368ca14b8beb38b7cc4c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d8ef26f0f42acc1bdc254a864986f9fd9fedc758aff657de7c9107d09c92420\"" Dec 13 15:05:15.646628 containerd[2682]: time="2024-12-13T15:05:15.646606583Z" level=info msg="StartContainer for \"5d8ef26f0f42acc1bdc254a864986f9fd9fedc758aff657de7c9107d09c92420\"" Dec 13 15:05:15.675857 systemd[1]: Started cri-containerd-5d8ef26f0f42acc1bdc254a864986f9fd9fedc758aff657de7c9107d09c92420.scope - libcontainer container 5d8ef26f0f42acc1bdc254a864986f9fd9fedc758aff657de7c9107d09c92420. 
Dec 13 15:05:15.700663 containerd[2682]: time="2024-12-13T15:05:15.700627458Z" level=info msg="StartContainer for \"5d8ef26f0f42acc1bdc254a864986f9fd9fedc758aff657de7c9107d09c92420\" returns successfully" Dec 13 15:05:15.885779 systemd-networkd[2590]: vxlan.calico: Gained IPv6LL Dec 13 15:05:16.199665 kubelet[4377]: I1213 15:05:16.199644 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:16.207942 kubelet[4377]: I1213 15:05:16.207920 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c9f6c48-q86wb" podStartSLOduration=10.981852223 podStartE2EDuration="13.207882188s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:13.414469632 +0000 UTC m=+31.412438041" lastFinishedPulling="2024-12-13 15:05:15.640499517 +0000 UTC m=+33.638468006" observedRunningTime="2024-12-13 15:05:16.20760459 +0000 UTC m=+34.205572999" watchObservedRunningTime="2024-12-13 15:05:16.207882188 +0000 UTC m=+34.205850597" Dec 13 15:05:16.224102 containerd[2682]: time="2024-12-13T15:05:16.224057972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:16.224421 containerd[2682]: time="2024-12-13T15:05:16.224077292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 15:05:16.224814 containerd[2682]: time="2024-12-13T15:05:16.224795846Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:16.226543 containerd[2682]: time="2024-12-13T15:05:16.226520431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 15:05:16.227177 containerd[2682]: time="2024-12-13T15:05:16.227162266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 586.46075ms" Dec 13 15:05:16.227199 containerd[2682]: time="2024-12-13T15:05:16.227184586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 15:05:16.228574 containerd[2682]: time="2024-12-13T15:05:16.228558374Z" level=info msg="CreateContainer within sandbox \"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 15:05:16.234200 containerd[2682]: time="2024-12-13T15:05:16.234171727Z" level=info msg="CreateContainer within sandbox \"9cc2eb95f1a1c2580a7e0e831823927313bc9fb7d8d0d942dbb58b5e7fefb099\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0973be168231d85fbda3f109ee2d462a2f64d1d28bb2a0fde7ecfec96bbee5aa\"" Dec 13 15:05:16.234501 containerd[2682]: time="2024-12-13T15:05:16.234480924Z" level=info msg="StartContainer for 
\"0973be168231d85fbda3f109ee2d462a2f64d1d28bb2a0fde7ecfec96bbee5aa\"" Dec 13 15:05:16.262852 systemd[1]: Started cri-containerd-0973be168231d85fbda3f109ee2d462a2f64d1d28bb2a0fde7ecfec96bbee5aa.scope - libcontainer container 0973be168231d85fbda3f109ee2d462a2f64d1d28bb2a0fde7ecfec96bbee5aa. Dec 13 15:05:16.294398 containerd[2682]: time="2024-12-13T15:05:16.294363661Z" level=info msg="StartContainer for \"0973be168231d85fbda3f109ee2d462a2f64d1d28bb2a0fde7ecfec96bbee5aa\" returns successfully" Dec 13 15:05:17.136623 kubelet[4377]: I1213 15:05:17.136601 4377 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 15:05:17.136623 kubelet[4377]: I1213 15:05:17.136628 4377 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 15:05:17.203069 kubelet[4377]: I1213 15:05:17.203044 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:17.211080 kubelet[4377]: I1213 15:05:17.211061 4377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-lh26w" podStartSLOduration=11.398144957 podStartE2EDuration="14.211026469s" podCreationTimestamp="2024-12-13 15:05:03 +0000 UTC" firstStartedPulling="2024-12-13 15:05:13.414451192 +0000 UTC m=+31.412419601" lastFinishedPulling="2024-12-13 15:05:16.227332704 +0000 UTC m=+34.225301113" observedRunningTime="2024-12-13 15:05:17.210568672 +0000 UTC m=+35.208537081" watchObservedRunningTime="2024-12-13 15:05:17.211026469 +0000 UTC m=+35.208994878" Dec 13 15:05:18.672929 kubelet[4377]: I1213 15:05:18.672892 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:23.098622 kubelet[4377]: I1213 15:05:23.098582 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:23.203017 kubelet[4377]: I1213 15:05:23.202979 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:32.831834 kubelet[4377]: I1213 15:05:32.831793 4377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 15:05:42.081937 containerd[2682]: time="2024-12-13T15:05:42.081853218Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:42.082270 containerd[2682]: time="2024-12-13T15:05:42.081960418Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:42.082270 containerd[2682]: time="2024-12-13T15:05:42.081973578Z" level=info msg="StopPodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:42.082314 containerd[2682]: time="2024-12-13T15:05:42.082289657Z" level=info msg="RemovePodSandbox for \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:42.082336 containerd[2682]: time="2024-12-13T15:05:42.082316537Z" level=info msg="Forcibly stopping sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\"" Dec 13 15:05:42.082396 containerd[2682]: time="2024-12-13T15:05:42.082382937Z" level=info msg="TearDown network for sandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" successfully" Dec 13 15:05:42.083791 containerd[2682]: time="2024-12-13T15:05:42.083770095Z" level=warning msg="Failed to get podSandbox 
status for container event for sandboxID \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.083828 containerd[2682]: time="2024-12-13T15:05:42.083818215Z" level=info msg="RemovePodSandbox \"6c1ef51d0500b5ba7d56830c6b46820dc22c3709b7c70b18f9663ebc1c656342\" returns successfully" Dec 13 15:05:42.084178 containerd[2682]: time="2024-12-13T15:05:42.084160774Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:42.084244 containerd[2682]: time="2024-12-13T15:05:42.084232894Z" level=info msg="TearDown network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" successfully" Dec 13 15:05:42.084268 containerd[2682]: time="2024-12-13T15:05:42.084244214Z" level=info msg="StopPodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" returns successfully" Dec 13 15:05:42.084450 containerd[2682]: time="2024-12-13T15:05:42.084436054Z" level=info msg="RemovePodSandbox for \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:42.084476 containerd[2682]: time="2024-12-13T15:05:42.084457254Z" level=info msg="Forcibly stopping sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\"" Dec 13 15:05:42.084528 containerd[2682]: time="2024-12-13T15:05:42.084517934Z" level=info msg="TearDown network for sandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" successfully" Dec 13 15:05:42.085961 containerd[2682]: time="2024-12-13T15:05:42.085938812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.085995 containerd[2682]: time="2024-12-13T15:05:42.085984692Z" level=info msg="RemovePodSandbox \"c08640d1f37025fff0d4c01addcfa8c2117c72cf2602e04bf4c959ce8e9e80ab\" returns successfully" Dec 13 15:05:42.086201 containerd[2682]: time="2024-12-13T15:05:42.086185171Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" Dec 13 15:05:42.086272 containerd[2682]: time="2024-12-13T15:05:42.086262531Z" level=info msg="TearDown network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" successfully" Dec 13 15:05:42.086297 containerd[2682]: time="2024-12-13T15:05:42.086273731Z" level=info msg="StopPodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" returns successfully" Dec 13 15:05:42.086470 containerd[2682]: time="2024-12-13T15:05:42.086456651Z" level=info msg="RemovePodSandbox for \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" Dec 13 15:05:42.086490 containerd[2682]: time="2024-12-13T15:05:42.086476531Z" level=info msg="Forcibly stopping sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\"" Dec 13 15:05:42.086543 containerd[2682]: time="2024-12-13T15:05:42.086534251Z" level=info msg="TearDown network for sandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" successfully" Dec 13 15:05:42.087684 containerd[2682]: time="2024-12-13T15:05:42.087659289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.087730 containerd[2682]: time="2024-12-13T15:05:42.087702849Z" level=info msg="RemovePodSandbox \"695a65b3d8d1d0d847e743652dac1fedd2906d52e7b5fd560016c6a0e34edc4b\" returns successfully" Dec 13 15:05:42.087941 containerd[2682]: time="2024-12-13T15:05:42.087924689Z" level=info msg="StopPodSandbox for \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\"" Dec 13 15:05:42.088013 containerd[2682]: time="2024-12-13T15:05:42.088003528Z" level=info msg="TearDown network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" successfully" Dec 13 15:05:42.088037 containerd[2682]: time="2024-12-13T15:05:42.088013808Z" level=info msg="StopPodSandbox for \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" returns successfully" Dec 13 15:05:42.088240 containerd[2682]: time="2024-12-13T15:05:42.088226168Z" level=info msg="RemovePodSandbox for \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\"" Dec 13 15:05:42.088263 containerd[2682]: time="2024-12-13T15:05:42.088245768Z" level=info msg="Forcibly stopping sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\"" Dec 13 15:05:42.088322 containerd[2682]: time="2024-12-13T15:05:42.088312128Z" level=info msg="TearDown network for sandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" successfully" Dec 13 15:05:42.089464 containerd[2682]: time="2024-12-13T15:05:42.089446406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.089499 containerd[2682]: time="2024-12-13T15:05:42.089485206Z" level=info msg="RemovePodSandbox \"d5f8ff7a17cdc6f9221d5ce8a4c1d65baec4f49dc51a1404018d191014a2ab69\" returns successfully" Dec 13 15:05:42.089721 containerd[2682]: time="2024-12-13T15:05:42.089690606Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:42.089786 containerd[2682]: time="2024-12-13T15:05:42.089776006Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:42.089808 containerd[2682]: time="2024-12-13T15:05:42.089786446Z" level=info msg="StopPodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 15:05:42.089985 containerd[2682]: time="2024-12-13T15:05:42.089968565Z" level=info msg="RemovePodSandbox for \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:42.090009 containerd[2682]: time="2024-12-13T15:05:42.089991605Z" level=info msg="Forcibly stopping sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\"" Dec 13 15:05:42.090065 containerd[2682]: time="2024-12-13T15:05:42.090055645Z" level=info msg="TearDown network for sandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" successfully" Dec 13 15:05:42.091266 containerd[2682]: time="2024-12-13T15:05:42.091245683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.091301 containerd[2682]: time="2024-12-13T15:05:42.091290923Z" level=info msg="RemovePodSandbox \"d90ef3f9ef0cbbf4e376b9d77e9c4860ad7758ed64ebff59b76874a34211aa93\" returns successfully" Dec 13 15:05:42.091531 containerd[2682]: time="2024-12-13T15:05:42.091510803Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:42.091608 containerd[2682]: time="2024-12-13T15:05:42.091596683Z" level=info msg="TearDown network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" successfully" Dec 13 15:05:42.091633 containerd[2682]: time="2024-12-13T15:05:42.091609043Z" level=info msg="StopPodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" returns successfully" Dec 13 15:05:42.091803 containerd[2682]: time="2024-12-13T15:05:42.091788682Z" level=info msg="RemovePodSandbox for \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:42.091827 containerd[2682]: time="2024-12-13T15:05:42.091809842Z" level=info msg="Forcibly stopping sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\"" Dec 13 15:05:42.091881 containerd[2682]: time="2024-12-13T15:05:42.091872002Z" level=info msg="TearDown network for sandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" successfully" Dec 13 15:05:42.093019 containerd[2682]: time="2024-12-13T15:05:42.092999081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.093052 containerd[2682]: time="2024-12-13T15:05:42.093042761Z" level=info msg="RemovePodSandbox \"27b779a5134e03186c8c94d85a0ad62d3edaac0a72843656920e49f58527fdae\" returns successfully" Dec 13 15:05:42.093281 containerd[2682]: time="2024-12-13T15:05:42.093267840Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" Dec 13 15:05:42.093355 containerd[2682]: time="2024-12-13T15:05:42.093345080Z" level=info msg="TearDown network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" successfully" Dec 13 15:05:42.093376 containerd[2682]: time="2024-12-13T15:05:42.093355160Z" level=info msg="StopPodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" returns successfully" Dec 13 15:05:42.093591 containerd[2682]: time="2024-12-13T15:05:42.093576320Z" level=info msg="RemovePodSandbox for \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" Dec 13 15:05:42.093610 containerd[2682]: time="2024-12-13T15:05:42.093597600Z" level=info msg="Forcibly stopping sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\"" Dec 13 15:05:42.093668 containerd[2682]: time="2024-12-13T15:05:42.093658600Z" level=info msg="TearDown network for sandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" successfully" Dec 13 15:05:42.094861 containerd[2682]: time="2024-12-13T15:05:42.094836518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.094895 containerd[2682]: time="2024-12-13T15:05:42.094880798Z" level=info msg="RemovePodSandbox \"41ed443b0c4038a13536bb7807d1ccfed3c1927bb4c59d57ec8bb41e64704d68\" returns successfully" Dec 13 15:05:42.095103 containerd[2682]: time="2024-12-13T15:05:42.095086477Z" level=info msg="StopPodSandbox for \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\"" Dec 13 15:05:42.095168 containerd[2682]: time="2024-12-13T15:05:42.095156477Z" level=info msg="TearDown network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" successfully" Dec 13 15:05:42.095168 containerd[2682]: time="2024-12-13T15:05:42.095166037Z" level=info msg="StopPodSandbox for \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" returns successfully" Dec 13 15:05:42.095409 containerd[2682]: time="2024-12-13T15:05:42.095392437Z" level=info msg="RemovePodSandbox for \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\"" Dec 13 15:05:42.095430 containerd[2682]: time="2024-12-13T15:05:42.095412317Z" level=info msg="Forcibly stopping sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\"" Dec 13 15:05:42.095476 containerd[2682]: time="2024-12-13T15:05:42.095463917Z" level=info msg="TearDown network for sandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" successfully" Dec 13 15:05:42.096615 containerd[2682]: time="2024-12-13T15:05:42.096593435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.096649 containerd[2682]: time="2024-12-13T15:05:42.096637635Z" level=info msg="RemovePodSandbox \"feaab4ac2af5381429a8bfc4576bd06eaf4053a7a6402f4c48d1d19305ee7e6c\" returns successfully" Dec 13 15:05:42.096884 containerd[2682]: time="2024-12-13T15:05:42.096864235Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:42.096954 containerd[2682]: time="2024-12-13T15:05:42.096942314Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:42.096954 containerd[2682]: time="2024-12-13T15:05:42.096952074Z" level=info msg="StopPodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:42.097188 containerd[2682]: time="2024-12-13T15:05:42.097169794Z" level=info msg="RemovePodSandbox for \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:42.097216 containerd[2682]: time="2024-12-13T15:05:42.097190514Z" level=info msg="Forcibly stopping sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\"" Dec 13 15:05:42.097258 containerd[2682]: time="2024-12-13T15:05:42.097246834Z" level=info msg="TearDown network for sandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" successfully" Dec 13 15:05:42.098491 containerd[2682]: time="2024-12-13T15:05:42.098466752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.098525 containerd[2682]: time="2024-12-13T15:05:42.098513752Z" level=info msg="RemovePodSandbox \"4f3bb74f423f5290c5e97c4e9de51618d45d433ea299085998fd942746ef9467\" returns successfully" Dec 13 15:05:42.098764 containerd[2682]: time="2024-12-13T15:05:42.098744072Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:42.098834 containerd[2682]: time="2024-12-13T15:05:42.098817191Z" level=info msg="TearDown network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" successfully" Dec 13 15:05:42.098834 containerd[2682]: time="2024-12-13T15:05:42.098827991Z" level=info msg="StopPodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" returns successfully" Dec 13 15:05:42.099017 containerd[2682]: time="2024-12-13T15:05:42.099000151Z" level=info msg="RemovePodSandbox for \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:42.099041 containerd[2682]: time="2024-12-13T15:05:42.099020231Z" level=info msg="Forcibly stopping sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\"" Dec 13 15:05:42.099088 containerd[2682]: time="2024-12-13T15:05:42.099076991Z" level=info msg="TearDown network for sandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" successfully" Dec 13 15:05:42.100252 containerd[2682]: time="2024-12-13T15:05:42.100229229Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.100287 containerd[2682]: time="2024-12-13T15:05:42.100274309Z" level=info msg="RemovePodSandbox \"a74788ffb1826fad92b59bc7fa2738e1cfff03275e237f9ba8a73580cb0900b5\" returns successfully" Dec 13 15:05:42.100519 containerd[2682]: time="2024-12-13T15:05:42.100502349Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" Dec 13 15:05:42.100591 containerd[2682]: time="2024-12-13T15:05:42.100578789Z" level=info msg="TearDown network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" successfully" Dec 13 15:05:42.100615 containerd[2682]: time="2024-12-13T15:05:42.100589709Z" level=info msg="StopPodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" returns successfully" Dec 13 15:05:42.100783 containerd[2682]: time="2024-12-13T15:05:42.100765028Z" level=info msg="RemovePodSandbox for \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" Dec 13 15:05:42.100808 containerd[2682]: time="2024-12-13T15:05:42.100787228Z" level=info msg="Forcibly stopping sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\"" Dec 13 15:05:42.100859 containerd[2682]: time="2024-12-13T15:05:42.100846188Z" level=info msg="TearDown network for sandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" successfully" Dec 13 15:05:42.114353 containerd[2682]: time="2024-12-13T15:05:42.114324407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.114398 containerd[2682]: time="2024-12-13T15:05:42.114372247Z" level=info msg="RemovePodSandbox \"bdd9bf5fcb0e8731f90d88b392e02c471429e46db9b421e0ae748f0bbba41b47\" returns successfully" Dec 13 15:05:42.114657 containerd[2682]: time="2024-12-13T15:05:42.114637967Z" level=info msg="StopPodSandbox for \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\"" Dec 13 15:05:42.114725 containerd[2682]: time="2024-12-13T15:05:42.114712367Z" level=info msg="TearDown network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" successfully" Dec 13 15:05:42.114750 containerd[2682]: time="2024-12-13T15:05:42.114723486Z" level=info msg="StopPodSandbox for \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" returns successfully" Dec 13 15:05:42.114915 containerd[2682]: time="2024-12-13T15:05:42.114896126Z" level=info msg="RemovePodSandbox for \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\"" Dec 13 15:05:42.114941 containerd[2682]: time="2024-12-13T15:05:42.114917686Z" level=info msg="Forcibly stopping sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\"" Dec 13 15:05:42.114991 containerd[2682]: time="2024-12-13T15:05:42.114979126Z" level=info msg="TearDown network for sandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" successfully" Dec 13 15:05:42.116163 containerd[2682]: time="2024-12-13T15:05:42.116141564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.116197 containerd[2682]: time="2024-12-13T15:05:42.116184604Z" level=info msg="RemovePodSandbox \"7337577e62fb573617c57f0a689b5cc36c4ef377c1aaa0187fe0c24f87c5bdbb\" returns successfully" Dec 13 15:05:42.116454 containerd[2682]: time="2024-12-13T15:05:42.116435284Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:42.116524 containerd[2682]: time="2024-12-13T15:05:42.116513804Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:42.116549 containerd[2682]: time="2024-12-13T15:05:42.116524124Z" level=info msg="StopPodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:42.116739 containerd[2682]: time="2024-12-13T15:05:42.116720563Z" level=info msg="RemovePodSandbox for \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:42.116764 containerd[2682]: time="2024-12-13T15:05:42.116743523Z" level=info msg="Forcibly stopping sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\"" Dec 13 15:05:42.116819 containerd[2682]: time="2024-12-13T15:05:42.116808523Z" level=info msg="TearDown network for sandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" successfully" Dec 13 15:05:42.118036 containerd[2682]: time="2024-12-13T15:05:42.118011921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.118067 containerd[2682]: time="2024-12-13T15:05:42.118057401Z" level=info msg="RemovePodSandbox \"3dc2ee0895015c933cd6e51989b5ad3c0d4ffbcb37af53141e406b38ed8c1cfb\" returns successfully" Dec 13 15:05:42.118263 containerd[2682]: time="2024-12-13T15:05:42.118247801Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:42.118329 containerd[2682]: time="2024-12-13T15:05:42.118319041Z" level=info msg="TearDown network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" successfully" Dec 13 15:05:42.118349 containerd[2682]: time="2024-12-13T15:05:42.118329001Z" level=info msg="StopPodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" returns successfully" Dec 13 15:05:42.118564 containerd[2682]: time="2024-12-13T15:05:42.118549240Z" level=info msg="RemovePodSandbox for \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:42.118586 containerd[2682]: time="2024-12-13T15:05:42.118569560Z" level=info msg="Forcibly stopping sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\"" Dec 13 15:05:42.118640 containerd[2682]: time="2024-12-13T15:05:42.118630120Z" level=info msg="TearDown network for sandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" successfully" Dec 13 15:05:42.119854 containerd[2682]: time="2024-12-13T15:05:42.119833558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.119886 containerd[2682]: time="2024-12-13T15:05:42.119875478Z" level=info msg="RemovePodSandbox \"5b079cd22582ba370443e6819ff610112a0418838512495990518c268a9521f2\" returns successfully" Dec 13 15:05:42.120111 containerd[2682]: time="2024-12-13T15:05:42.120096878Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" Dec 13 15:05:42.120174 containerd[2682]: time="2024-12-13T15:05:42.120165118Z" level=info msg="TearDown network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" successfully" Dec 13 15:05:42.120196 containerd[2682]: time="2024-12-13T15:05:42.120174438Z" level=info msg="StopPodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" returns successfully" Dec 13 15:05:42.120357 containerd[2682]: time="2024-12-13T15:05:42.120343518Z" level=info msg="RemovePodSandbox for \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" Dec 13 15:05:42.120382 containerd[2682]: time="2024-12-13T15:05:42.120362278Z" level=info msg="Forcibly stopping sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\"" Dec 13 15:05:42.120426 containerd[2682]: time="2024-12-13T15:05:42.120416158Z" level=info msg="TearDown network for sandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" successfully" Dec 13 15:05:42.121580 containerd[2682]: time="2024-12-13T15:05:42.121556556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.121616 containerd[2682]: time="2024-12-13T15:05:42.121604316Z" level=info msg="RemovePodSandbox \"fa96c81b1fca5730e2fb2c7ea74da7440dc03c8b4e74e03e7ab5b8e3ea069f8b\" returns successfully" Dec 13 15:05:42.121863 containerd[2682]: time="2024-12-13T15:05:42.121846795Z" level=info msg="StopPodSandbox for \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\"" Dec 13 15:05:42.121923 containerd[2682]: time="2024-12-13T15:05:42.121912115Z" level=info msg="TearDown network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" successfully" Dec 13 15:05:42.121991 containerd[2682]: time="2024-12-13T15:05:42.121921715Z" level=info msg="StopPodSandbox for \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" returns successfully" Dec 13 15:05:42.122143 containerd[2682]: time="2024-12-13T15:05:42.122124995Z" level=info msg="RemovePodSandbox for \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\"" Dec 13 15:05:42.122168 containerd[2682]: time="2024-12-13T15:05:42.122146035Z" level=info msg="Forcibly stopping sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\"" Dec 13 15:05:42.122223 containerd[2682]: time="2024-12-13T15:05:42.122211075Z" level=info msg="TearDown network for sandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" successfully" Dec 13 15:05:42.123366 containerd[2682]: time="2024-12-13T15:05:42.123344753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.123413 containerd[2682]: time="2024-12-13T15:05:42.123393193Z" level=info msg="RemovePodSandbox \"67539a92b23a4e9aeba4e806b800b28dcdb78e38808f539e749b3eef60a862b8\" returns successfully" Dec 13 15:05:42.123607 containerd[2682]: time="2024-12-13T15:05:42.123589473Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:42.123677 containerd[2682]: time="2024-12-13T15:05:42.123665472Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:42.123697 containerd[2682]: time="2024-12-13T15:05:42.123681192Z" level=info msg="StopPodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" returns successfully" Dec 13 15:05:42.123869 containerd[2682]: time="2024-12-13T15:05:42.123854872Z" level=info msg="RemovePodSandbox for \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:42.123889 containerd[2682]: time="2024-12-13T15:05:42.123874112Z" level=info msg="Forcibly stopping sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\"" Dec 13 15:05:42.123948 containerd[2682]: time="2024-12-13T15:05:42.123937992Z" level=info msg="TearDown network for sandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" successfully" Dec 13 15:05:42.125128 containerd[2682]: time="2024-12-13T15:05:42.125106910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.125163 containerd[2682]: time="2024-12-13T15:05:42.125152270Z" level=info msg="RemovePodSandbox \"ab23ff9dc3c30b56885d5ccebf2f571ba0e4ca423a6a96e4fe5406fbf9bb6e6c\" returns successfully" Dec 13 15:05:42.125444 containerd[2682]: time="2024-12-13T15:05:42.125417750Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:42.125521 containerd[2682]: time="2024-12-13T15:05:42.125511230Z" level=info msg="TearDown network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" successfully" Dec 13 15:05:42.125543 containerd[2682]: time="2024-12-13T15:05:42.125520990Z" level=info msg="StopPodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" returns successfully" Dec 13 15:05:42.125774 containerd[2682]: time="2024-12-13T15:05:42.125757189Z" level=info msg="RemovePodSandbox for \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:42.125799 containerd[2682]: time="2024-12-13T15:05:42.125778789Z" level=info msg="Forcibly stopping sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\"" Dec 13 15:05:42.125844 containerd[2682]: time="2024-12-13T15:05:42.125834109Z" level=info msg="TearDown network for sandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" successfully" Dec 13 15:05:42.127020 containerd[2682]: time="2024-12-13T15:05:42.126995907Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.127053 containerd[2682]: time="2024-12-13T15:05:42.127040947Z" level=info msg="RemovePodSandbox \"246a85885b6415155a4aa8b8bbb15c08c9dbe81837708c45b937d53ca249d46f\" returns successfully" Dec 13 15:05:42.127271 containerd[2682]: time="2024-12-13T15:05:42.127253427Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" Dec 13 15:05:42.127343 containerd[2682]: time="2024-12-13T15:05:42.127330707Z" level=info msg="TearDown network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" successfully" Dec 13 15:05:42.127343 containerd[2682]: time="2024-12-13T15:05:42.127340867Z" level=info msg="StopPodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" returns successfully" Dec 13 15:05:42.127523 containerd[2682]: time="2024-12-13T15:05:42.127505546Z" level=info msg="RemovePodSandbox for \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" Dec 13 15:05:42.127547 containerd[2682]: time="2024-12-13T15:05:42.127526306Z" level=info msg="Forcibly stopping sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\"" Dec 13 15:05:42.127599 containerd[2682]: time="2024-12-13T15:05:42.127587866Z" level=info msg="TearDown network for sandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" successfully" Dec 13 15:05:42.128826 containerd[2682]: time="2024-12-13T15:05:42.128804264Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.128859 containerd[2682]: time="2024-12-13T15:05:42.128849424Z" level=info msg="RemovePodSandbox \"a73c21bbb41308e7814f2fdce9c1d55d7c34f8ad3e90111ff899d0ef3edd4a6d\" returns successfully" Dec 13 15:05:42.129072 containerd[2682]: time="2024-12-13T15:05:42.129055864Z" level=info msg="StopPodSandbox for \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\"" Dec 13 15:05:42.129139 containerd[2682]: time="2024-12-13T15:05:42.129128104Z" level=info msg="TearDown network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" successfully" Dec 13 15:05:42.129161 containerd[2682]: time="2024-12-13T15:05:42.129138824Z" level=info msg="StopPodSandbox for \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" returns successfully" Dec 13 15:05:42.129325 containerd[2682]: time="2024-12-13T15:05:42.129309704Z" level=info msg="RemovePodSandbox for \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\"" Dec 13 15:05:42.129347 containerd[2682]: time="2024-12-13T15:05:42.129331704Z" level=info msg="Forcibly stopping sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\"" Dec 13 15:05:42.129400 containerd[2682]: time="2024-12-13T15:05:42.129389543Z" level=info msg="TearDown network for sandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" successfully" Dec 13 15:05:42.130612 containerd[2682]: time="2024-12-13T15:05:42.130589302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.130659 containerd[2682]: time="2024-12-13T15:05:42.130635902Z" level=info msg="RemovePodSandbox \"170e6a2ccf62d6a7becfa95072dc85aec554a5418573aa7df99d31b255289de9\" returns successfully" Dec 13 15:05:42.130841 containerd[2682]: time="2024-12-13T15:05:42.130821541Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:42.130912 containerd[2682]: time="2024-12-13T15:05:42.130901581Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:42.130935 containerd[2682]: time="2024-12-13T15:05:42.130911941Z" level=info msg="StopPodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:42.131098 containerd[2682]: time="2024-12-13T15:05:42.131083341Z" level=info msg="RemovePodSandbox for \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:42.131119 containerd[2682]: time="2024-12-13T15:05:42.131104781Z" level=info msg="Forcibly stopping sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\"" Dec 13 15:05:42.131171 containerd[2682]: time="2024-12-13T15:05:42.131162421Z" level=info msg="TearDown network for sandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" successfully" Dec 13 15:05:42.132374 containerd[2682]: time="2024-12-13T15:05:42.132349899Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.132424 containerd[2682]: time="2024-12-13T15:05:42.132412139Z" level=info msg="RemovePodSandbox \"9af49bd43a1515850698dde0f5660ba62b6d24023c081e6c9f1233223e394260\" returns successfully" Dec 13 15:05:42.132633 containerd[2682]: time="2024-12-13T15:05:42.132613418Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:42.132713 containerd[2682]: time="2024-12-13T15:05:42.132699778Z" level=info msg="TearDown network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" successfully" Dec 13 15:05:42.132713 containerd[2682]: time="2024-12-13T15:05:42.132709778Z" level=info msg="StopPodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" returns successfully" Dec 13 15:05:42.132904 containerd[2682]: time="2024-12-13T15:05:42.132883218Z" level=info msg="RemovePodSandbox for \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:42.132931 containerd[2682]: time="2024-12-13T15:05:42.132906938Z" level=info msg="Forcibly stopping sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\"" Dec 13 15:05:42.132988 containerd[2682]: time="2024-12-13T15:05:42.132974658Z" level=info msg="TearDown network for sandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" successfully" Dec 13 15:05:42.134186 containerd[2682]: time="2024-12-13T15:05:42.134163616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.134216 containerd[2682]: time="2024-12-13T15:05:42.134206576Z" level=info msg="RemovePodSandbox \"fc15e4b9a884920d2adce1aec06d067ebca46b5bfaf5eef7fe501d6aa3bbef1e\" returns successfully" Dec 13 15:05:42.134425 containerd[2682]: time="2024-12-13T15:05:42.134402256Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" Dec 13 15:05:42.134500 containerd[2682]: time="2024-12-13T15:05:42.134488975Z" level=info msg="TearDown network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" successfully" Dec 13 15:05:42.134523 containerd[2682]: time="2024-12-13T15:05:42.134500735Z" level=info msg="StopPodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" returns successfully" Dec 13 15:05:42.134676 containerd[2682]: time="2024-12-13T15:05:42.134657615Z" level=info msg="RemovePodSandbox for \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" Dec 13 15:05:42.134700 containerd[2682]: time="2024-12-13T15:05:42.134684615Z" level=info msg="Forcibly stopping sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\"" Dec 13 15:05:42.134758 containerd[2682]: time="2024-12-13T15:05:42.134748655Z" level=info msg="TearDown network for sandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" successfully" Dec 13 15:05:42.135900 containerd[2682]: time="2024-12-13T15:05:42.135880213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 15:05:42.135954 containerd[2682]: time="2024-12-13T15:05:42.135943893Z" level=info msg="RemovePodSandbox \"5997db888c0f5e4e8551473dfc8d78449016c788f3a96c9a0f12046f3b5c9ae9\" returns successfully" Dec 13 15:05:42.136165 containerd[2682]: time="2024-12-13T15:05:42.136150613Z" level=info msg="StopPodSandbox for \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\"" Dec 13 15:05:42.136237 containerd[2682]: time="2024-12-13T15:05:42.136227813Z" level=info msg="TearDown network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" successfully" Dec 13 15:05:42.136257 containerd[2682]: time="2024-12-13T15:05:42.136237213Z" level=info msg="StopPodSandbox for \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" returns successfully" Dec 13 15:05:42.136437 containerd[2682]: time="2024-12-13T15:05:42.136420772Z" level=info msg="RemovePodSandbox for \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\"" Dec 13 15:05:42.136464 containerd[2682]: time="2024-12-13T15:05:42.136443412Z" level=info msg="Forcibly stopping sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\"" Dec 13 15:05:42.136530 containerd[2682]: time="2024-12-13T15:05:42.136521652Z" level=info msg="TearDown network for sandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" successfully" Dec 13 15:05:42.137738 containerd[2682]: time="2024-12-13T15:05:42.137709450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 15:05:42.137788 containerd[2682]: time="2024-12-13T15:05:42.137754770Z" level=info msg="RemovePodSandbox \"a53e430253fda639169b002c446fffdf68953ecc4626088c00be8a6eeed89286\" returns successfully" Dec 13 15:11:27.438119 systemd[1]: Started sshd@7-147.28.228.225:22-2.57.122.33:53340.service - OpenSSH per-connection server daemon (2.57.122.33:53340). Dec 13 15:11:27.559773 sshd[9483]: Connection closed by 2.57.122.33 port 53340 Dec 13 15:11:27.560850 systemd[1]: sshd@7-147.28.228.225:22-2.57.122.33:53340.service: Deactivated successfully. Dec 13 15:13:46.288143 systemd[1]: Started sshd@8-147.28.228.225:22-147.75.109.163:49178.service - OpenSSH per-connection server daemon (147.75.109.163:49178). Dec 13 15:13:46.721094 sshd[9795]: Accepted publickey for core from 147.75.109.163 port 49178 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:13:46.722200 sshd-session[9795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:13:46.725502 systemd-logind[2667]: New session 10 of user core. Dec 13 15:13:46.745825 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 15:13:47.090578 sshd[9797]: Connection closed by 147.75.109.163 port 49178 Dec 13 15:13:47.090189 sshd-session[9795]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:47.095048 systemd[1]: sshd@8-147.28.228.225:22-147.75.109.163:49178.service: Deactivated successfully. Dec 13 15:13:47.096754 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 15:13:47.098754 systemd-logind[2667]: Session 10 logged out. Waiting for processes to exit. Dec 13 15:13:47.099402 systemd-logind[2667]: Removed session 10. Dec 13 15:13:49.345243 update_engine[2677]: I20241213 15:13:49.345183 2677 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 15:13:49.345243 update_engine[2677]: I20241213 15:13:49.345238 2677 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 15:13:49.345660 update_engine[2677]: I20241213 15:13:49.345459 2677 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 15:13:49.345824 update_engine[2677]: I20241213 15:13:49.345808 2677 omaha_request_params.cc:62] Current group set to alpha Dec 13 15:13:49.345900 update_engine[2677]: I20241213 15:13:49.345888 2677 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 15:13:49.345927 update_engine[2677]: I20241213 15:13:49.345897 2677 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 15:13:49.345927 update_engine[2677]: I20241213 15:13:49.345911 2677 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 15:13:49.345964 update_engine[2677]: I20241213 15:13:49.345937 2677 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 15:13:49.345994 update_engine[2677]: I20241213 15:13:49.345983 2677 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 15:13:49.346012 update_engine[2677]: I20241213 15:13:49.345992 2677 omaha_request_action.cc:272] Request: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: Dec 13 15:13:49.346012 update_engine[2677]: I20241213 15:13:49.345999 2677 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 15:13:49.346175 locksmithd[2707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 15:13:49.346962 update_engine[2677]: I20241213 15:13:49.346946 2677 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 15:13:49.347238 update_engine[2677]: I20241213 15:13:49.347218 2677 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 15:13:49.347627 update_engine[2677]: E20241213 15:13:49.347612 2677 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 15:13:49.347665 update_engine[2677]: I20241213 15:13:49.347654 2677 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 15:13:52.162089 systemd[1]: Started sshd@9-147.28.228.225:22-147.75.109.163:49186.service - OpenSSH per-connection server daemon (147.75.109.163:49186). Dec 13 15:13:52.592892 sshd[9862]: Accepted publickey for core from 147.75.109.163 port 49186 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:13:52.593978 sshd-session[9862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:13:52.596958 systemd-logind[2667]: New session 11 of user core. Dec 13 15:13:52.605844 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 15:13:52.953712 sshd[9864]: Connection closed by 147.75.109.163 port 49186 Dec 13 15:13:52.954330 sshd-session[9862]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:52.957837 systemd[1]: sshd@9-147.28.228.225:22-147.75.109.163:49186.service: Deactivated successfully. Dec 13 15:13:52.960029 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 15:13:52.960524 systemd-logind[2667]: Session 11 logged out. Waiting for processes to exit. Dec 13 15:13:52.961099 systemd-logind[2667]: Removed session 11. Dec 13 15:13:53.023804 systemd[1]: Started sshd@10-147.28.228.225:22-147.75.109.163:49196.service - OpenSSH per-connection server daemon (147.75.109.163:49196). Dec 13 15:13:53.444324 sshd[9903]: Accepted publickey for core from 147.75.109.163 port 49196 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:13:53.445408 sshd-session[9903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:13:53.448211 systemd-logind[2667]: New session 12 of user core. Dec 13 15:13:53.463774 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 15:13:53.821854 sshd[9926]: Connection closed by 147.75.109.163 port 49196 Dec 13 15:13:53.822150 sshd-session[9903]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:53.824938 systemd[1]: sshd@10-147.28.228.225:22-147.75.109.163:49196.service: Deactivated successfully. Dec 13 15:13:53.826493 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 15:13:53.827518 systemd-logind[2667]: Session 12 logged out. Waiting for processes to exit. Dec 13 15:13:53.828101 systemd-logind[2667]: Removed session 12. Dec 13 15:13:53.892924 systemd[1]: Started sshd@11-147.28.228.225:22-147.75.109.163:49212.service - OpenSSH per-connection server daemon (147.75.109.163:49212). Dec 13 15:13:54.305508 sshd[9960]: Accepted publickey for core from 147.75.109.163 port 49212 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:13:54.306538 sshd-session[9960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:13:54.309389 systemd-logind[2667]: New session 13 of user core. Dec 13 15:13:54.324777 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 15:13:54.652440 sshd[9962]: Connection closed by 147.75.109.163 port 49212 Dec 13 15:13:54.652792 sshd-session[9960]: pam_unix(sshd:session): session closed for user core Dec 13 15:13:54.655552 systemd[1]: sshd@11-147.28.228.225:22-147.75.109.163:49212.service: Deactivated successfully. Dec 13 15:13:54.657721 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 15:13:54.658220 systemd-logind[2667]: Session 13 logged out. Waiting for processes to exit. Dec 13 15:13:54.658739 systemd-logind[2667]: Removed session 13. Dec 13 15:13:59.348188 update_engine[2677]: I20241213 15:13:59.348119 2677 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 15:13:59.348548 update_engine[2677]: I20241213 15:13:59.348402 2677 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 15:13:59.348603 update_engine[2677]: I20241213 15:13:59.348584 2677 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 15:13:59.349115 update_engine[2677]: E20241213 15:13:59.349099 2677 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 15:13:59.349147 update_engine[2677]: I20241213 15:13:59.349136 2677 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 15:13:59.731054 systemd[1]: Started sshd@12-147.28.228.225:22-147.75.109.163:50834.service - OpenSSH per-connection server daemon (147.75.109.163:50834). Dec 13 15:14:00.166409 sshd[10020]: Accepted publickey for core from 147.75.109.163 port 50834 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:00.167427 sshd-session[10020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:00.170356 systemd-logind[2667]: New session 14 of user core. Dec 13 15:14:00.179774 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 15:14:00.528296 sshd[10022]: Connection closed by 147.75.109.163 port 50834 Dec 13 15:14:00.528680 sshd-session[10020]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:00.531547 systemd[1]: sshd@12-147.28.228.225:22-147.75.109.163:50834.service: Deactivated successfully. Dec 13 15:14:00.533789 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 15:14:00.534302 systemd-logind[2667]: Session 14 logged out. Waiting for processes to exit. Dec 13 15:14:00.534846 systemd-logind[2667]: Removed session 14. 
Dec 13 15:14:05.597051 systemd[1]: Started sshd@13-147.28.228.225:22-147.75.109.163:50836.service - OpenSSH per-connection server daemon (147.75.109.163:50836). Dec 13 15:14:06.009220 sshd[10058]: Accepted publickey for core from 147.75.109.163 port 50836 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:06.010226 sshd-session[10058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:06.013172 systemd-logind[2667]: New session 15 of user core. Dec 13 15:14:06.021778 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 15:14:06.356705 sshd[10060]: Connection closed by 147.75.109.163 port 50836 Dec 13 15:14:06.357019 sshd-session[10058]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:06.359829 systemd[1]: sshd@13-147.28.228.225:22-147.75.109.163:50836.service: Deactivated successfully. Dec 13 15:14:06.362065 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 15:14:06.362607 systemd-logind[2667]: Session 15 logged out. Waiting for processes to exit. Dec 13 15:14:06.363269 systemd-logind[2667]: Removed session 15. Dec 13 15:14:09.346733 update_engine[2677]: I20241213 15:14:09.346652 2677 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 15:14:09.347040 update_engine[2677]: I20241213 15:14:09.346963 2677 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 15:14:09.347178 update_engine[2677]: I20241213 15:14:09.347159 2677 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 15:14:09.347580 update_engine[2677]: E20241213 15:14:09.347566 2677 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 15:14:09.347611 update_engine[2677]: I20241213 15:14:09.347600 2677 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 15:14:11.434053 systemd[1]: Started sshd@14-147.28.228.225:22-147.75.109.163:39266.service - OpenSSH per-connection server daemon (147.75.109.163:39266). Dec 13 15:14:11.867188 sshd[10093]: Accepted publickey for core from 147.75.109.163 port 39266 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:11.868150 sshd-session[10093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:11.871096 systemd-logind[2667]: New session 16 of user core. Dec 13 15:14:11.888773 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 15:14:12.227297 sshd[10095]: Connection closed by 147.75.109.163 port 39266 Dec 13 15:14:12.227671 sshd-session[10093]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:12.230547 systemd[1]: sshd@14-147.28.228.225:22-147.75.109.163:39266.service: Deactivated successfully. Dec 13 15:14:12.233103 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 15:14:12.233628 systemd-logind[2667]: Session 16 logged out. Waiting for processes to exit. Dec 13 15:14:12.234193 systemd-logind[2667]: Removed session 16. Dec 13 15:14:17.297249 systemd[1]: Started sshd@15-147.28.228.225:22-147.75.109.163:32856.service - OpenSSH per-connection server daemon (147.75.109.163:32856). Dec 13 15:14:17.707293 sshd[10133]: Accepted publickey for core from 147.75.109.163 port 32856 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:17.708310 sshd-session[10133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:17.711369 systemd-logind[2667]: New session 17 of user core. 
Dec 13 15:14:17.725776 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 15:14:18.052943 sshd[10135]: Connection closed by 147.75.109.163 port 32856 Dec 13 15:14:18.053269 sshd-session[10133]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:18.056344 systemd[1]: sshd@15-147.28.228.225:22-147.75.109.163:32856.service: Deactivated successfully. Dec 13 15:14:18.058544 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 15:14:18.059065 systemd-logind[2667]: Session 17 logged out. Waiting for processes to exit. Dec 13 15:14:18.059596 systemd-logind[2667]: Removed session 17. Dec 13 15:14:19.344348 update_engine[2677]: I20241213 15:14:19.344280 2677 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 15:14:19.344740 update_engine[2677]: I20241213 15:14:19.344568 2677 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 15:14:19.344819 update_engine[2677]: I20241213 15:14:19.344797 2677 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 15:14:19.345230 update_engine[2677]: E20241213 15:14:19.345216 2677 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 15:14:19.345259 update_engine[2677]: I20241213 15:14:19.345249 2677 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 15:14:19.345282 update_engine[2677]: I20241213 15:14:19.345257 2677 omaha_request_action.cc:617] Omaha request response: Dec 13 15:14:19.345342 update_engine[2677]: E20241213 15:14:19.345329 2677 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 15:14:19.345364 update_engine[2677]: I20241213 15:14:19.345348 2677 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 15:14:19.345364 update_engine[2677]: I20241213 15:14:19.345355 2677 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 15:14:19.345364 update_engine[2677]: I20241213 15:14:19.345358 2677 update_attempter.cc:306] Processing Done. Dec 13 15:14:19.345417 update_engine[2677]: E20241213 15:14:19.345371 2677 update_attempter.cc:619] Update failed. Dec 13 15:14:19.345417 update_engine[2677]: I20241213 15:14:19.345377 2677 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 15:14:19.345417 update_engine[2677]: I20241213 15:14:19.345381 2677 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 15:14:19.345417 update_engine[2677]: I20241213 15:14:19.345386 2677 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 15:14:19.345487 update_engine[2677]: I20241213 15:14:19.345436 2677 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 15:14:19.345487 update_engine[2677]: I20241213 15:14:19.345456 2677 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 15:14:19.345487 update_engine[2677]: I20241213 15:14:19.345461 2677 omaha_request_action.cc:272] Request: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: Dec 13 15:14:19.345487 update_engine[2677]: I20241213 15:14:19.345467 2677 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 15:14:19.345651 update_engine[2677]: I20241213 15:14:19.345573 2677 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 15:14:19.345726 locksmithd[2707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 15:14:19.345907 update_engine[2677]: I20241213 15:14:19.345729 2677 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 15:14:19.346002 update_engine[2677]: E20241213 15:14:19.345986 2677 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 15:14:19.346027 update_engine[2677]: I20241213 15:14:19.346017 2677 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 15:14:19.346048 update_engine[2677]: I20241213 15:14:19.346024 2677 omaha_request_action.cc:617] Omaha request response: Dec 13 15:14:19.346048 update_engine[2677]: I20241213 15:14:19.346030 2677 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 15:14:19.346048 update_engine[2677]: I20241213 15:14:19.346035 2677 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 15:14:19.346048 update_engine[2677]: I20241213 15:14:19.346040 2677 update_attempter.cc:306] Processing Done. Dec 13 15:14:19.346048 update_engine[2677]: I20241213 15:14:19.346043 2677 update_attempter.cc:310] Error event sent. Dec 13 15:14:19.346139 update_engine[2677]: I20241213 15:14:19.346050 2677 update_check_scheduler.cc:74] Next update check in 44m15s Dec 13 15:14:19.346185 locksmithd[2707]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 15:14:23.128027 systemd[1]: Started sshd@16-147.28.228.225:22-147.75.109.163:32868.service - OpenSSH per-connection server daemon (147.75.109.163:32868). Dec 13 15:14:23.550782 sshd[10204]: Accepted publickey for core from 147.75.109.163 port 32868 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:23.551840 sshd-session[10204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:23.554749 systemd-logind[2667]: New session 18 of user core. Dec 13 15:14:23.568831 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 15:14:23.906083 sshd[10229]: Connection closed by 147.75.109.163 port 32868 Dec 13 15:14:23.906417 sshd-session[10204]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:23.909364 systemd[1]: sshd@16-147.28.228.225:22-147.75.109.163:32868.service: Deactivated successfully. Dec 13 15:14:23.910971 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 13 15:14:23.911486 systemd-logind[2667]: Session 18 logged out. Waiting for processes to exit. Dec 13 15:14:23.912084 systemd-logind[2667]: Removed session 18. Dec 13 15:14:28.975945 systemd[1]: Started sshd@17-147.28.228.225:22-147.75.109.163:44554.service - OpenSSH per-connection server daemon (147.75.109.163:44554). Dec 13 15:14:29.391371 sshd[10266]: Accepted publickey for core from 147.75.109.163 port 44554 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:29.392332 sshd-session[10266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:29.395469 systemd-logind[2667]: New session 19 of user core. Dec 13 15:14:29.404770 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 15:14:29.739234 sshd[10268]: Connection closed by 147.75.109.163 port 44554 Dec 13 15:14:29.739544 sshd-session[10266]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:29.742277 systemd[1]: sshd@17-147.28.228.225:22-147.75.109.163:44554.service: Deactivated successfully. Dec 13 15:14:29.743877 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 15:14:29.744376 systemd-logind[2667]: Session 19 logged out. Waiting for processes to exit. Dec 13 15:14:29.744936 systemd-logind[2667]: Removed session 19. Dec 13 15:14:34.816085 systemd[1]: Started sshd@18-147.28.228.225:22-147.75.109.163:44564.service - OpenSSH per-connection server daemon (147.75.109.163:44564). Dec 13 15:14:35.240545 sshd[10302]: Accepted publickey for core from 147.75.109.163 port 44564 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:35.241584 sshd-session[10302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:35.244529 systemd-logind[2667]: New session 20 of user core. Dec 13 15:14:35.258823 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 15:14:35.598595 sshd[10304]: Connection closed by 147.75.109.163 port 44564 Dec 13 15:14:35.598953 sshd-session[10302]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:35.601722 systemd[1]: sshd@18-147.28.228.225:22-147.75.109.163:44564.service: Deactivated successfully. Dec 13 15:14:35.603917 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 15:14:35.604426 systemd-logind[2667]: Session 20 logged out. Waiting for processes to exit. Dec 13 15:14:35.605017 systemd-logind[2667]: Removed session 20. Dec 13 15:14:40.672992 systemd[1]: Started sshd@19-147.28.228.225:22-147.75.109.163:49632.service - OpenSSH per-connection server daemon (147.75.109.163:49632). Dec 13 15:14:41.103158 sshd[10339]: Accepted publickey for core from 147.75.109.163 port 49632 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:41.104102 sshd-session[10339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:41.107013 systemd-logind[2667]: New session 21 of user core. Dec 13 15:14:41.117773 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 15:14:41.463481 sshd[10341]: Connection closed by 147.75.109.163 port 49632 Dec 13 15:14:41.463889 sshd-session[10339]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:41.466603 systemd[1]: sshd@19-147.28.228.225:22-147.75.109.163:49632.service: Deactivated successfully. Dec 13 15:14:41.468770 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 15:14:41.469282 systemd-logind[2667]: Session 21 logged out. Waiting for processes to exit. 
Dec 13 15:14:41.469959 systemd-logind[2667]: Removed session 21. Dec 13 15:14:46.533947 systemd[1]: Started sshd@20-147.28.228.225:22-147.75.109.163:40458.service - OpenSSH per-connection server daemon (147.75.109.163:40458). Dec 13 15:14:46.953942 sshd[10396]: Accepted publickey for core from 147.75.109.163 port 40458 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:46.954952 sshd-session[10396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:46.957815 systemd-logind[2667]: New session 22 of user core. Dec 13 15:14:46.967772 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 15:14:47.313887 sshd[10399]: Connection closed by 147.75.109.163 port 40458 Dec 13 15:14:47.314194 sshd-session[10396]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:47.316461 systemd[1]: sshd@20-147.28.228.225:22-147.75.109.163:40458.service: Deactivated successfully. Dec 13 15:14:47.318918 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 15:14:47.320120 systemd-logind[2667]: Session 22 logged out. Waiting for processes to exit. Dec 13 15:14:47.320751 systemd-logind[2667]: Removed session 22. Dec 13 15:14:52.393012 systemd[1]: Started sshd@21-147.28.228.225:22-147.75.109.163:40468.service - OpenSSH per-connection server daemon (147.75.109.163:40468). Dec 13 15:14:52.828306 sshd[10463]: Accepted publickey for core from 147.75.109.163 port 40468 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:52.829411 sshd-session[10463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:52.832410 systemd-logind[2667]: New session 23 of user core. Dec 13 15:14:52.845773 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 15:14:53.191833 sshd[10465]: Connection closed by 147.75.109.163 port 40468 Dec 13 15:14:53.192192 sshd-session[10463]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:53.194886 systemd[1]: sshd@21-147.28.228.225:22-147.75.109.163:40468.service: Deactivated successfully. Dec 13 15:14:53.197031 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 15:14:53.197565 systemd-logind[2667]: Session 23 logged out. Waiting for processes to exit. Dec 13 15:14:53.198186 systemd-logind[2667]: Removed session 23. Dec 13 15:14:58.264058 systemd[1]: Started sshd@22-147.28.228.225:22-147.75.109.163:45858.service - OpenSSH per-connection server daemon (147.75.109.163:45858). Dec 13 15:14:58.689349 sshd[10543]: Accepted publickey for core from 147.75.109.163 port 45858 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:14:58.690356 sshd-session[10543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:14:58.693394 systemd-logind[2667]: New session 24 of user core. Dec 13 15:14:58.707823 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 15:14:59.044621 sshd[10545]: Connection closed by 147.75.109.163 port 45858 Dec 13 15:14:59.044952 sshd-session[10543]: pam_unix(sshd:session): session closed for user core Dec 13 15:14:59.047767 systemd[1]: sshd@22-147.28.228.225:22-147.75.109.163:45858.service: Deactivated successfully. Dec 13 15:14:59.049951 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 15:14:59.050436 systemd-logind[2667]: Session 24 logged out. Waiting for processes to exit. Dec 13 15:14:59.051118 systemd-logind[2667]: Removed session 24. 
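Each connection from 147.75.109.163 in these entries produces its own transient unit named like sshd@16-147.28.228.225:22-147.75.109.163:32868.service, which is the naming scheme systemd uses for per-connection instances spawned from a listening socket with Accept=yes. The unit files themselves are not reproduced in the journal, so the following is only a minimal sketch of what such a socket-activated OpenSSH setup typically looks like (unit names, paths, and options follow the standard upstream layout and are assumptions, not read from this host):

    # sshd.socket - accept each TCP connection and spawn an instance per connection
    [Unit]
    Description=OpenSSH per-connection server socket

    [Socket]
    ListenStream=22
    Accept=yes

    [Install]
    WantedBy=sockets.target

    # sshd@.service - template instantiated once per accepted connection
    [Unit]
    Description=OpenSSH per-connection server daemon

    [Service]
    # -i runs sshd in inetd mode, reading the connection from stdin,
    # which systemd hands over via StandardInput=socket.
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

Because each instance exits as soon as its connection closes, every short-lived session in this log is followed by a matching "Deactivated successfully." for both the sshd@... unit and the corresponding session-N.scope.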
Dec 13 15:15:04.124977 systemd[1]: Started sshd@23-147.28.228.225:22-147.75.109.163:45874.service - OpenSSH per-connection server daemon (147.75.109.163:45874). Dec 13 15:15:04.557106 sshd[10583]: Accepted publickey for core from 147.75.109.163 port 45874 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:04.558104 sshd-session[10583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:04.561010 systemd-logind[2667]: New session 25 of user core. Dec 13 15:15:04.569825 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 15:15:04.915719 sshd[10585]: Connection closed by 147.75.109.163 port 45874 Dec 13 15:15:04.916037 sshd-session[10583]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:04.918886 systemd[1]: sshd@23-147.28.228.225:22-147.75.109.163:45874.service: Deactivated successfully. Dec 13 15:15:04.920462 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 15:15:04.920985 systemd-logind[2667]: Session 25 logged out. Waiting for processes to exit. Dec 13 15:15:04.921599 systemd-logind[2667]: Removed session 25. Dec 13 15:15:09.991136 systemd[1]: Started sshd@24-147.28.228.225:22-147.75.109.163:35886.service - OpenSSH per-connection server daemon (147.75.109.163:35886). Dec 13 15:15:10.424160 sshd[10620]: Accepted publickey for core from 147.75.109.163 port 35886 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:10.425165 sshd-session[10620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:10.428072 systemd-logind[2667]: New session 26 of user core. Dec 13 15:15:10.439777 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 15:15:10.785250 sshd[10622]: Connection closed by 147.75.109.163 port 35886 Dec 13 15:15:10.785592 sshd-session[10620]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:10.788385 systemd[1]: sshd@24-147.28.228.225:22-147.75.109.163:35886.service: Deactivated successfully. Dec 13 15:15:10.789978 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 15:15:10.790483 systemd-logind[2667]: Session 26 logged out. Waiting for processes to exit. Dec 13 15:15:10.791177 systemd-logind[2667]: Removed session 26. Dec 13 15:15:15.859093 systemd[1]: Started sshd@25-147.28.228.225:22-147.75.109.163:35888.service - OpenSSH per-connection server daemon (147.75.109.163:35888). Dec 13 15:15:16.282189 sshd[10660]: Accepted publickey for core from 147.75.109.163 port 35888 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:16.283281 sshd-session[10660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:16.286421 systemd-logind[2667]: New session 27 of user core. Dec 13 15:15:16.301771 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 15:15:16.637147 sshd[10662]: Connection closed by 147.75.109.163 port 35888 Dec 13 15:15:16.637506 sshd-session[10660]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:16.640302 systemd[1]: sshd@25-147.28.228.225:22-147.75.109.163:35888.service: Deactivated successfully. Dec 13 15:15:16.641914 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 15:15:16.642416 systemd-logind[2667]: Session 27 logged out. Waiting for processes to exit. Dec 13 15:15:16.642967 systemd-logind[2667]: Removed session 27. 
Dec 13 15:15:21.718044 systemd[1]: Started sshd@26-147.28.228.225:22-147.75.109.163:51216.service - OpenSSH per-connection server daemon (147.75.109.163:51216). Dec 13 15:15:22.153560 sshd[10728]: Accepted publickey for core from 147.75.109.163 port 51216 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:22.154550 sshd-session[10728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:22.157427 systemd-logind[2667]: New session 28 of user core. Dec 13 15:15:22.175834 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 15:15:22.515764 sshd[10730]: Connection closed by 147.75.109.163 port 51216 Dec 13 15:15:22.516147 sshd-session[10728]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:22.518887 systemd[1]: sshd@26-147.28.228.225:22-147.75.109.163:51216.service: Deactivated successfully. Dec 13 15:15:22.521095 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 15:15:22.521597 systemd-logind[2667]: Session 28 logged out. Waiting for processes to exit. Dec 13 15:15:22.522157 systemd-logind[2667]: Removed session 28. Dec 13 15:15:27.593002 systemd[1]: Started sshd@27-147.28.228.225:22-147.75.109.163:40578.service - OpenSSH per-connection server daemon (147.75.109.163:40578). Dec 13 15:15:28.026071 sshd[10784]: Accepted publickey for core from 147.75.109.163 port 40578 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:28.027096 sshd-session[10784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:28.030050 systemd-logind[2667]: New session 29 of user core. Dec 13 15:15:28.043771 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 13 15:15:28.387399 sshd[10788]: Connection closed by 147.75.109.163 port 40578 Dec 13 15:15:28.387735 sshd-session[10784]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:28.390525 systemd[1]: sshd@27-147.28.228.225:22-147.75.109.163:40578.service: Deactivated successfully. Dec 13 15:15:28.392139 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 15:15:28.392648 systemd-logind[2667]: Session 29 logged out. Waiting for processes to exit. Dec 13 15:15:28.393230 systemd-logind[2667]: Removed session 29. Dec 13 15:15:33.456006 systemd[1]: Started sshd@28-147.28.228.225:22-147.75.109.163:40590.service - OpenSSH per-connection server daemon (147.75.109.163:40590). Dec 13 15:15:33.865471 sshd[10820]: Accepted publickey for core from 147.75.109.163 port 40590 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:33.866512 sshd-session[10820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:33.869538 systemd-logind[2667]: New session 30 of user core. Dec 13 15:15:33.884785 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 13 15:15:34.209965 sshd[10822]: Connection closed by 147.75.109.163 port 40590 Dec 13 15:15:34.210376 sshd-session[10820]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:34.213173 systemd[1]: sshd@28-147.28.228.225:22-147.75.109.163:40590.service: Deactivated successfully. Dec 13 15:15:34.214792 systemd[1]: session-30.scope: Deactivated successfully. Dec 13 15:15:34.215306 systemd-logind[2667]: Session 30 logged out. Waiting for processes to exit. Dec 13 15:15:34.215886 systemd-logind[2667]: Removed session 30. 
Dec 13 15:15:39.291997 systemd[1]: Started sshd@29-147.28.228.225:22-147.75.109.163:56008.service - OpenSSH per-connection server daemon (147.75.109.163:56008). Dec 13 15:15:39.724882 sshd[10856]: Accepted publickey for core from 147.75.109.163 port 56008 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:39.725868 sshd-session[10856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:39.728770 systemd-logind[2667]: New session 31 of user core. Dec 13 15:15:39.739842 systemd[1]: Started session-31.scope - Session 31 of User core. Dec 13 15:15:40.084087 sshd[10858]: Connection closed by 147.75.109.163 port 56008 Dec 13 15:15:40.084428 sshd-session[10856]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:40.087193 systemd[1]: sshd@29-147.28.228.225:22-147.75.109.163:56008.service: Deactivated successfully. Dec 13 15:15:40.088964 systemd[1]: session-31.scope: Deactivated successfully. Dec 13 15:15:40.089469 systemd-logind[2667]: Session 31 logged out. Waiting for processes to exit. Dec 13 15:15:40.090083 systemd-logind[2667]: Removed session 31. Dec 13 15:15:45.156966 systemd[1]: Started sshd@30-147.28.228.225:22-147.75.109.163:56016.service - OpenSSH per-connection server daemon (147.75.109.163:56016). Dec 13 15:15:45.584030 sshd[10890]: Accepted publickey for core from 147.75.109.163 port 56016 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:45.585079 sshd-session[10890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:45.588001 systemd-logind[2667]: New session 32 of user core. Dec 13 15:15:45.596823 systemd[1]: Started session-32.scope - Session 32 of User core. Dec 13 15:15:45.942216 sshd[10892]: Connection closed by 147.75.109.163 port 56016 Dec 13 15:15:45.942591 sshd-session[10890]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:45.945335 systemd[1]: sshd@30-147.28.228.225:22-147.75.109.163:56016.service: Deactivated successfully. Dec 13 15:15:45.946968 systemd[1]: session-32.scope: Deactivated successfully. Dec 13 15:15:45.947479 systemd-logind[2667]: Session 32 logged out. Waiting for processes to exit. Dec 13 15:15:45.948082 systemd-logind[2667]: Removed session 32. Dec 13 15:15:51.024003 systemd[1]: Started sshd@31-147.28.228.225:22-147.75.109.163:43964.service - OpenSSH per-connection server daemon (147.75.109.163:43964). Dec 13 15:15:51.457395 sshd[10955]: Accepted publickey for core from 147.75.109.163 port 43964 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:51.458447 sshd-session[10955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:51.461292 systemd-logind[2667]: New session 33 of user core. Dec 13 15:15:51.475822 systemd[1]: Started session-33.scope - Session 33 of User core. Dec 13 15:15:51.818984 sshd[10957]: Connection closed by 147.75.109.163 port 43964 Dec 13 15:15:51.819358 sshd-session[10955]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:51.822145 systemd[1]: sshd@31-147.28.228.225:22-147.75.109.163:43964.service: Deactivated successfully. Dec 13 15:15:51.824380 systemd[1]: session-33.scope: Deactivated successfully. Dec 13 15:15:51.824897 systemd-logind[2667]: Session 33 logged out. Waiting for processes to exit. Dec 13 15:15:51.825476 systemd-logind[2667]: Removed session 33. 
Dec 13 15:15:56.895121 systemd[1]: Started sshd@32-147.28.228.225:22-147.75.109.163:47630.service - OpenSSH per-connection server daemon (147.75.109.163:47630). Dec 13 15:15:57.327869 sshd[11040]: Accepted publickey for core from 147.75.109.163 port 47630 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:15:57.328883 sshd-session[11040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:15:57.331846 systemd-logind[2667]: New session 34 of user core. Dec 13 15:15:57.340833 systemd[1]: Started session-34.scope - Session 34 of User core. Dec 13 15:15:57.688568 sshd[11042]: Connection closed by 147.75.109.163 port 47630 Dec 13 15:15:57.688938 sshd-session[11040]: pam_unix(sshd:session): session closed for user core Dec 13 15:15:57.691740 systemd[1]: sshd@32-147.28.228.225:22-147.75.109.163:47630.service: Deactivated successfully. Dec 13 15:15:57.693413 systemd[1]: session-34.scope: Deactivated successfully. Dec 13 15:15:57.693938 systemd-logind[2667]: Session 34 logged out. Waiting for processes to exit. Dec 13 15:15:57.694489 systemd-logind[2667]: Removed session 34. Dec 13 15:16:02.763178 systemd[1]: Started sshd@33-147.28.228.225:22-147.75.109.163:47640.service - OpenSSH per-connection server daemon (147.75.109.163:47640). Dec 13 15:16:03.191414 sshd[11081]: Accepted publickey for core from 147.75.109.163 port 47640 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:03.192429 sshd-session[11081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:03.195302 systemd-logind[2667]: New session 35 of user core. Dec 13 15:16:03.204781 systemd[1]: Started session-35.scope - Session 35 of User core. Dec 13 15:16:03.548085 sshd[11083]: Connection closed by 147.75.109.163 port 47640 Dec 13 15:16:03.548394 sshd-session[11081]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:03.551267 systemd[1]: sshd@33-147.28.228.225:22-147.75.109.163:47640.service: Deactivated successfully. Dec 13 15:16:03.552951 systemd[1]: session-35.scope: Deactivated successfully. Dec 13 15:16:03.553478 systemd-logind[2667]: Session 35 logged out. Waiting for processes to exit. Dec 13 15:16:03.554023 systemd-logind[2667]: Removed session 35. Dec 13 15:16:08.619263 systemd[1]: Started sshd@34-147.28.228.225:22-147.75.109.163:46034.service - OpenSSH per-connection server daemon (147.75.109.163:46034). Dec 13 15:16:09.039426 sshd[11120]: Accepted publickey for core from 147.75.109.163 port 46034 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:09.040441 sshd-session[11120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:09.043333 systemd-logind[2667]: New session 36 of user core. Dec 13 15:16:09.053784 systemd[1]: Started session-36.scope - Session 36 of User core. Dec 13 15:16:09.390427 sshd[11122]: Connection closed by 147.75.109.163 port 46034 Dec 13 15:16:09.390754 sshd-session[11120]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:09.393565 systemd[1]: sshd@34-147.28.228.225:22-147.75.109.163:46034.service: Deactivated successfully. Dec 13 15:16:09.395821 systemd[1]: session-36.scope: Deactivated successfully. Dec 13 15:16:09.396447 systemd-logind[2667]: Session 36 logged out. Waiting for processes to exit. Dec 13 15:16:09.396985 systemd-logind[2667]: Removed session 36. 
Dec 13 15:16:14.468101 systemd[1]: Started sshd@35-147.28.228.225:22-147.75.109.163:46042.service - OpenSSH per-connection server daemon (147.75.109.163:46042). Dec 13 15:16:14.901388 sshd[11155]: Accepted publickey for core from 147.75.109.163 port 46042 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:14.902460 sshd-session[11155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:14.905580 systemd-logind[2667]: New session 37 of user core. Dec 13 15:16:14.916787 systemd[1]: Started session-37.scope - Session 37 of User core. Dec 13 15:16:15.264380 sshd[11157]: Connection closed by 147.75.109.163 port 46042 Dec 13 15:16:15.264749 sshd-session[11155]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:15.267526 systemd[1]: sshd@35-147.28.228.225:22-147.75.109.163:46042.service: Deactivated successfully. Dec 13 15:16:15.269116 systemd[1]: session-37.scope: Deactivated successfully. Dec 13 15:16:15.269612 systemd-logind[2667]: Session 37 logged out. Waiting for processes to exit. Dec 13 15:16:15.270186 systemd-logind[2667]: Removed session 37. Dec 13 15:16:20.340058 systemd[1]: Started sshd@36-147.28.228.225:22-147.75.109.163:40100.service - OpenSSH per-connection server daemon (147.75.109.163:40100). Dec 13 15:16:20.772760 sshd[11220]: Accepted publickey for core from 147.75.109.163 port 40100 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:20.773799 sshd-session[11220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:20.776615 systemd-logind[2667]: New session 38 of user core. Dec 13 15:16:20.785773 systemd[1]: Started session-38.scope - Session 38 of User core. Dec 13 15:16:21.135959 sshd[11237]: Connection closed by 147.75.109.163 port 40100 Dec 13 15:16:21.136292 sshd-session[11220]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:21.139092 systemd[1]: sshd@36-147.28.228.225:22-147.75.109.163:40100.service: Deactivated successfully. Dec 13 15:16:21.141181 systemd[1]: session-38.scope: Deactivated successfully. Dec 13 15:16:21.141695 systemd-logind[2667]: Session 38 logged out. Waiting for processes to exit. Dec 13 15:16:21.142277 systemd-logind[2667]: Removed session 38. Dec 13 15:16:26.206983 systemd[1]: Started sshd@37-147.28.228.225:22-147.75.109.163:33496.service - OpenSSH per-connection server daemon (147.75.109.163:33496). Dec 13 15:16:26.630615 sshd[11290]: Accepted publickey for core from 147.75.109.163 port 33496 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:26.631576 sshd-session[11290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:26.634397 systemd-logind[2667]: New session 39 of user core. Dec 13 15:16:26.645772 systemd[1]: Started session-39.scope - Session 39 of User core. Dec 13 15:16:26.983693 sshd[11292]: Connection closed by 147.75.109.163 port 33496 Dec 13 15:16:26.984189 sshd-session[11290]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:26.987263 systemd[1]: sshd@37-147.28.228.225:22-147.75.109.163:33496.service: Deactivated successfully. Dec 13 15:16:26.988890 systemd[1]: session-39.scope: Deactivated successfully. Dec 13 15:16:26.989395 systemd-logind[2667]: Session 39 logged out. Waiting for processes to exit. Dec 13 15:16:26.989928 systemd-logind[2667]: Removed session 39. 
Dec 13 15:16:32.059007 systemd[1]: Started sshd@38-147.28.228.225:22-147.75.109.163:33500.service - OpenSSH per-connection server daemon (147.75.109.163:33500). Dec 13 15:16:32.485868 sshd[11337]: Accepted publickey for core from 147.75.109.163 port 33500 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:32.486885 sshd-session[11337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:32.489716 systemd-logind[2667]: New session 40 of user core. Dec 13 15:16:32.500826 systemd[1]: Started session-40.scope - Session 40 of User core. Dec 13 15:16:32.844043 sshd[11339]: Connection closed by 147.75.109.163 port 33500 Dec 13 15:16:32.844355 sshd-session[11337]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:32.847087 systemd[1]: sshd@38-147.28.228.225:22-147.75.109.163:33500.service: Deactivated successfully. Dec 13 15:16:32.849184 systemd[1]: session-40.scope: Deactivated successfully. Dec 13 15:16:32.849702 systemd-logind[2667]: Session 40 logged out. Waiting for processes to exit. Dec 13 15:16:32.850288 systemd-logind[2667]: Removed session 40. Dec 13 15:16:37.924992 systemd[1]: Started sshd@39-147.28.228.225:22-147.75.109.163:47736.service - OpenSSH per-connection server daemon (147.75.109.163:47736). Dec 13 15:16:38.349473 sshd[11375]: Accepted publickey for core from 147.75.109.163 port 47736 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:38.350437 sshd-session[11375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:38.353287 systemd-logind[2667]: New session 41 of user core. Dec 13 15:16:38.363827 systemd[1]: Started session-41.scope - Session 41 of User core. Dec 13 15:16:38.707968 sshd[11377]: Connection closed by 147.75.109.163 port 47736 Dec 13 15:16:38.708360 sshd-session[11375]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:38.711245 systemd[1]: sshd@39-147.28.228.225:22-147.75.109.163:47736.service: Deactivated successfully. Dec 13 15:16:38.712880 systemd[1]: session-41.scope: Deactivated successfully. Dec 13 15:16:38.713358 systemd-logind[2667]: Session 41 logged out. Waiting for processes to exit. Dec 13 15:16:38.713926 systemd-logind[2667]: Removed session 41. Dec 13 15:16:43.784932 systemd[1]: Started sshd@40-147.28.228.225:22-147.75.109.163:47740.service - OpenSSH per-connection server daemon (147.75.109.163:47740). Dec 13 15:16:44.225853 sshd[11413]: Accepted publickey for core from 147.75.109.163 port 47740 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:44.226885 sshd-session[11413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:44.229762 systemd-logind[2667]: New session 42 of user core. Dec 13 15:16:44.243820 systemd[1]: Started session-42.scope - Session 42 of User core. Dec 13 15:16:44.592339 sshd[11415]: Connection closed by 147.75.109.163 port 47740 Dec 13 15:16:44.592712 sshd-session[11413]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:44.595450 systemd[1]: sshd@40-147.28.228.225:22-147.75.109.163:47740.service: Deactivated successfully. Dec 13 15:16:44.597077 systemd[1]: session-42.scope: Deactivated successfully. Dec 13 15:16:44.597585 systemd-logind[2667]: Session 42 logged out. Waiting for processes to exit. Dec 13 15:16:44.598180 systemd-logind[2667]: Removed session 42. 
Dec 13 15:16:49.664052 systemd[1]: Started sshd@41-147.28.228.225:22-147.75.109.163:53342.service - OpenSSH per-connection server daemon (147.75.109.163:53342). Dec 13 15:16:50.086112 sshd[11478]: Accepted publickey for core from 147.75.109.163 port 53342 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:50.087115 sshd-session[11478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:50.089919 systemd-logind[2667]: New session 43 of user core. Dec 13 15:16:50.105771 systemd[1]: Started session-43.scope - Session 43 of User core. Dec 13 15:16:50.438903 sshd[11480]: Connection closed by 147.75.109.163 port 53342 Dec 13 15:16:50.439278 sshd-session[11478]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:50.442044 systemd[1]: sshd@41-147.28.228.225:22-147.75.109.163:53342.service: Deactivated successfully. Dec 13 15:16:50.444276 systemd[1]: session-43.scope: Deactivated successfully. Dec 13 15:16:50.444794 systemd-logind[2667]: Session 43 logged out. Waiting for processes to exit. Dec 13 15:16:50.445376 systemd-logind[2667]: Removed session 43. Dec 13 15:16:55.509080 systemd[1]: Started sshd@42-147.28.228.225:22-147.75.109.163:53358.service - OpenSSH per-connection server daemon (147.75.109.163:53358). Dec 13 15:16:55.928917 sshd[11535]: Accepted publickey for core from 147.75.109.163 port 53358 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:16:55.930002 sshd-session[11535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:16:55.932901 systemd-logind[2667]: New session 44 of user core. Dec 13 15:16:55.941841 systemd[1]: Started session-44.scope - Session 44 of User core. Dec 13 15:16:56.281033 sshd[11557]: Connection closed by 147.75.109.163 port 53358 Dec 13 15:16:56.281384 sshd-session[11535]: pam_unix(sshd:session): session closed for user core Dec 13 15:16:56.284199 systemd[1]: sshd@42-147.28.228.225:22-147.75.109.163:53358.service: Deactivated successfully. Dec 13 15:16:56.285818 systemd[1]: session-44.scope: Deactivated successfully. Dec 13 15:16:56.286296 systemd-logind[2667]: Session 44 logged out. Waiting for processes to exit. Dec 13 15:16:56.286884 systemd-logind[2667]: Removed session 44. Dec 13 15:17:01.359996 systemd[1]: Started sshd@43-147.28.228.225:22-147.75.109.163:34634.service - OpenSSH per-connection server daemon (147.75.109.163:34634). Dec 13 15:17:01.795076 sshd[11595]: Accepted publickey for core from 147.75.109.163 port 34634 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:01.796069 sshd-session[11595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:01.798913 systemd-logind[2667]: New session 45 of user core. Dec 13 15:17:01.808779 systemd[1]: Started session-45.scope - Session 45 of User core. Dec 13 15:17:02.157240 sshd[11597]: Connection closed by 147.75.109.163 port 34634 Dec 13 15:17:02.157614 sshd-session[11595]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:02.160482 systemd[1]: sshd@43-147.28.228.225:22-147.75.109.163:34634.service: Deactivated successfully. Dec 13 15:17:02.162086 systemd[1]: session-45.scope: Deactivated successfully. Dec 13 15:17:02.162597 systemd-logind[2667]: Session 45 logged out. Waiting for processes to exit. Dec 13 15:17:02.163202 systemd-logind[2667]: Removed session 45. 
Dec 13 15:17:07.228105 systemd[1]: Started sshd@44-147.28.228.225:22-147.75.109.163:50988.service - OpenSSH per-connection server daemon (147.75.109.163:50988). Dec 13 15:17:07.650632 sshd[11635]: Accepted publickey for core from 147.75.109.163 port 50988 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:07.651708 sshd-session[11635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:07.654545 systemd-logind[2667]: New session 46 of user core. Dec 13 15:17:07.665773 systemd[1]: Started session-46.scope - Session 46 of User core. Dec 13 15:17:08.004886 sshd[11637]: Connection closed by 147.75.109.163 port 50988 Dec 13 15:17:08.005307 sshd-session[11635]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:08.008079 systemd[1]: sshd@44-147.28.228.225:22-147.75.109.163:50988.service: Deactivated successfully. Dec 13 15:17:08.010261 systemd[1]: session-46.scope: Deactivated successfully. Dec 13 15:17:08.010830 systemd-logind[2667]: Session 46 logged out. Waiting for processes to exit. Dec 13 15:17:08.011447 systemd-logind[2667]: Removed session 46. Dec 13 15:17:13.081026 systemd[1]: Started sshd@45-147.28.228.225:22-147.75.109.163:51000.service - OpenSSH per-connection server daemon (147.75.109.163:51000). Dec 13 15:17:13.513949 sshd[11671]: Accepted publickey for core from 147.75.109.163 port 51000 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:13.514991 sshd-session[11671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:13.517837 systemd-logind[2667]: New session 47 of user core. Dec 13 15:17:13.536769 systemd[1]: Started session-47.scope - Session 47 of User core. Dec 13 15:17:13.874336 sshd[11673]: Connection closed by 147.75.109.163 port 51000 Dec 13 15:17:13.874642 sshd-session[11671]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:13.877428 systemd[1]: sshd@45-147.28.228.225:22-147.75.109.163:51000.service: Deactivated successfully. Dec 13 15:17:13.879036 systemd[1]: session-47.scope: Deactivated successfully. Dec 13 15:17:13.879536 systemd-logind[2667]: Session 47 logged out. Waiting for processes to exit. Dec 13 15:17:13.880079 systemd-logind[2667]: Removed session 47. Dec 13 15:17:18.953936 systemd[1]: Started sshd@46-147.28.228.225:22-147.75.109.163:45304.service - OpenSSH per-connection server daemon (147.75.109.163:45304). Dec 13 15:17:19.394458 sshd[11738]: Accepted publickey for core from 147.75.109.163 port 45304 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:19.395430 sshd-session[11738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:19.398355 systemd-logind[2667]: New session 48 of user core. Dec 13 15:17:19.421771 systemd[1]: Started session-48.scope - Session 48 of User core. Dec 13 15:17:19.761033 sshd[11740]: Connection closed by 147.75.109.163 port 45304 Dec 13 15:17:19.761396 sshd-session[11738]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:19.764191 systemd[1]: sshd@46-147.28.228.225:22-147.75.109.163:45304.service: Deactivated successfully. Dec 13 15:17:19.765804 systemd[1]: session-48.scope: Deactivated successfully. Dec 13 15:17:19.766300 systemd-logind[2667]: Session 48 logged out. Waiting for processes to exit. Dec 13 15:17:19.766829 systemd-logind[2667]: Removed session 48. 
Dec 13 15:17:19.834888 systemd[1]: Started sshd@47-147.28.228.225:22-147.75.109.163:45306.service - OpenSSH per-connection server daemon (147.75.109.163:45306). Dec 13 15:17:20.270239 sshd[11776]: Accepted publickey for core from 147.75.109.163 port 45306 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:20.271254 sshd-session[11776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:20.273947 systemd-logind[2667]: New session 49 of user core. Dec 13 15:17:20.284824 systemd[1]: Started session-49.scope - Session 49 of User core. Dec 13 15:17:20.652094 sshd[11778]: Connection closed by 147.75.109.163 port 45306 Dec 13 15:17:20.652498 sshd-session[11776]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:20.655700 systemd[1]: sshd@47-147.28.228.225:22-147.75.109.163:45306.service: Deactivated successfully. Dec 13 15:17:20.658031 systemd[1]: session-49.scope: Deactivated successfully. Dec 13 15:17:20.658548 systemd-logind[2667]: Session 49 logged out. Waiting for processes to exit. Dec 13 15:17:20.659102 systemd-logind[2667]: Removed session 49. Dec 13 15:17:20.729020 systemd[1]: Started sshd@48-147.28.228.225:22-147.75.109.163:45318.service - OpenSSH per-connection server daemon (147.75.109.163:45318). Dec 13 15:17:21.151820 sshd[11811]: Accepted publickey for core from 147.75.109.163 port 45318 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:21.152833 sshd-session[11811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:21.155517 systemd-logind[2667]: New session 50 of user core. Dec 13 15:17:21.168796 systemd[1]: Started session-50.scope - Session 50 of User core. Dec 13 15:17:22.486835 sshd[11813]: Connection closed by 147.75.109.163 port 45318 Dec 13 15:17:22.487229 sshd-session[11811]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:22.490098 systemd[1]: sshd@48-147.28.228.225:22-147.75.109.163:45318.service: Deactivated successfully. Dec 13 15:17:22.491712 systemd[1]: session-50.scope: Deactivated successfully. Dec 13 15:17:22.491896 systemd[1]: session-50.scope: Consumed 2.484s CPU time. Dec 13 15:17:22.492238 systemd-logind[2667]: Session 50 logged out. Waiting for processes to exit. Dec 13 15:17:22.492842 systemd-logind[2667]: Removed session 50. Dec 13 15:17:22.559905 systemd[1]: Started sshd@49-147.28.228.225:22-147.75.109.163:45326.service - OpenSSH per-connection server daemon (147.75.109.163:45326). Dec 13 15:17:22.971580 sshd[11913]: Accepted publickey for core from 147.75.109.163 port 45326 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:22.972621 sshd-session[11913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:22.975866 systemd-logind[2667]: New session 51 of user core. Dec 13 15:17:22.985781 systemd[1]: Started session-51.scope - Session 51 of User core. Dec 13 15:17:23.406579 sshd[11916]: Connection closed by 147.75.109.163 port 45326 Dec 13 15:17:23.406922 sshd-session[11913]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:23.409710 systemd[1]: sshd@49-147.28.228.225:22-147.75.109.163:45326.service: Deactivated successfully. Dec 13 15:17:23.411923 systemd[1]: session-51.scope: Deactivated successfully. Dec 13 15:17:23.412436 systemd-logind[2667]: Session 51 logged out. Waiting for processes to exit. Dec 13 15:17:23.413039 systemd-logind[2667]: Removed session 51. 
Dec 13 15:17:23.482924 systemd[1]: Started sshd@50-147.28.228.225:22-147.75.109.163:45332.service - OpenSSH per-connection server daemon (147.75.109.163:45332). Dec 13 15:17:23.905395 sshd[11985]: Accepted publickey for core from 147.75.109.163 port 45332 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:23.906425 sshd-session[11985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:23.909386 systemd-logind[2667]: New session 52 of user core. Dec 13 15:17:23.921838 systemd[1]: Started session-52.scope - Session 52 of User core. Dec 13 15:17:24.258776 sshd[11987]: Connection closed by 147.75.109.163 port 45332 Dec 13 15:17:24.259207 sshd-session[11985]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:24.262068 systemd[1]: sshd@50-147.28.228.225:22-147.75.109.163:45332.service: Deactivated successfully. Dec 13 15:17:24.263704 systemd[1]: session-52.scope: Deactivated successfully. Dec 13 15:17:24.264272 systemd-logind[2667]: Session 52 logged out. Waiting for processes to exit. Dec 13 15:17:24.264838 systemd-logind[2667]: Removed session 52. Dec 13 15:17:29.333994 systemd[1]: Started sshd@51-147.28.228.225:22-147.75.109.163:41264.service - OpenSSH per-connection server daemon (147.75.109.163:41264). Dec 13 15:17:29.766107 sshd[12030]: Accepted publickey for core from 147.75.109.163 port 41264 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:29.767105 sshd-session[12030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:29.769910 systemd-logind[2667]: New session 53 of user core. Dec 13 15:17:29.781836 systemd[1]: Started session-53.scope - Session 53 of User core. Dec 13 15:17:30.122542 sshd[12032]: Connection closed by 147.75.109.163 port 41264 Dec 13 15:17:30.122884 sshd-session[12030]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:30.125665 systemd[1]: sshd@51-147.28.228.225:22-147.75.109.163:41264.service: Deactivated successfully. Dec 13 15:17:30.127316 systemd[1]: session-53.scope: Deactivated successfully. Dec 13 15:17:30.127833 systemd-logind[2667]: Session 53 logged out. Waiting for processes to exit. Dec 13 15:17:30.128408 systemd-logind[2667]: Removed session 53. Dec 13 15:17:35.191961 systemd[1]: Started sshd@52-147.28.228.225:22-147.75.109.163:41274.service - OpenSSH per-connection server daemon (147.75.109.163:41274). Dec 13 15:17:35.604284 sshd[12058]: Accepted publickey for core from 147.75.109.163 port 41274 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:35.605253 sshd-session[12058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:35.608113 systemd-logind[2667]: New session 54 of user core. Dec 13 15:17:35.617778 systemd[1]: Started session-54.scope - Session 54 of User core. Dec 13 15:17:35.951463 sshd[12060]: Connection closed by 147.75.109.163 port 41274 Dec 13 15:17:35.951866 sshd-session[12058]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:35.954630 systemd[1]: sshd@52-147.28.228.225:22-147.75.109.163:41274.service: Deactivated successfully. Dec 13 15:17:35.956882 systemd[1]: session-54.scope: Deactivated successfully. Dec 13 15:17:35.957443 systemd-logind[2667]: Session 54 logged out. Waiting for processes to exit. Dec 13 15:17:35.958055 systemd-logind[2667]: Removed session 54. 
Dec 13 15:17:41.024122 systemd[1]: Started sshd@53-147.28.228.225:22-147.75.109.163:50592.service - OpenSSH per-connection server daemon (147.75.109.163:50592). Dec 13 15:17:41.444284 sshd[12096]: Accepted publickey for core from 147.75.109.163 port 50592 ssh2: RSA SHA256:dl8RGyOzPlNGKoajgSmIjdxzy+Kp2cSBj3gZ9aBZ74A Dec 13 15:17:41.445299 sshd-session[12096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 15:17:41.448219 systemd-logind[2667]: New session 55 of user core. Dec 13 15:17:41.458826 systemd[1]: Started session-55.scope - Session 55 of User core. Dec 13 15:17:41.794925 sshd[12099]: Connection closed by 147.75.109.163 port 50592 Dec 13 15:17:41.795290 sshd-session[12096]: pam_unix(sshd:session): session closed for user core Dec 13 15:17:41.798130 systemd[1]: sshd@53-147.28.228.225:22-147.75.109.163:50592.service: Deactivated successfully. Dec 13 15:17:41.799840 systemd[1]: session-55.scope: Deactivated successfully. Dec 13 15:17:41.800347 systemd-logind[2667]: Session 55 logged out. Waiting for processes to exit. Dec 13 15:17:41.800938 systemd-logind[2667]: Removed session 55.