May 14 01:06:32.176561 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] May 14 01:06:32.176584 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025 May 14 01:06:32.176592 kernel: KASLR enabled May 14 01:06:32.176598 kernel: efi: EFI v2.7 by American Megatrends May 14 01:06:32.176604 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47e818 RNG=0xebf10018 MEMRESERVE=0xe47bff98 May 14 01:06:32.176609 kernel: random: crng init done May 14 01:06:32.176615 kernel: secureboot: Secure boot disabled May 14 01:06:32.176621 kernel: esrt: Reserving ESRT space from 0x00000000ea47e818 to 0x00000000ea47e878. May 14 01:06:32.176628 kernel: ACPI: Early table checksum verification disabled May 14 01:06:32.176634 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) May 14 01:06:32.176640 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) May 14 01:06:32.176646 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) May 14 01:06:32.176651 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) May 14 01:06:32.176657 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) May 14 01:06:32.176666 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) May 14 01:06:32.176671 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) May 14 01:06:32.176677 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) May 14 01:06:32.176683 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) May 14 01:06:32.176689 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) May 14 01:06:32.176695 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) May 14 01:06:32.176701 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) May 14 01:06:32.176707 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) May 14 01:06:32.176713 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) May 14 01:06:32.176719 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) May 14 01:06:32.176727 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) May 14 01:06:32.176733 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) May 14 01:06:32.176739 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) May 14 01:06:32.176745 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) May 14 01:06:32.176751 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 May 14 01:06:32.176756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] May 14 01:06:32.176762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] May 14 01:06:32.176768 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] May 14 01:06:32.176774 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] May 14 01:06:32.176780 kernel: NUMA: NODE_DATA [mem 0x83fdffcc800-0x83fdffd1fff] May 14 01:06:32.176786 kernel: Zone ranges: May 14 01:06:32.176793 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] May 14 01:06:32.176799 kernel: DMA32 empty May 14 01:06:32.176805 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] May 14 01:06:32.176811 kernel: Movable zone start for each node May 14 01:06:32.176817 kernel: Early memory node ranges May 14 01:06:32.176826 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] May 14 01:06:32.176832 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] May 14 01:06:32.176840 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] May 14 01:06:32.176847 kernel: node 0: [mem 0x0000000094000000-0x00000000eba34fff] May 14 01:06:32.176853 kernel: node 0: [mem 0x00000000eba35000-0x00000000ebec6fff] May 14 01:06:32.176859 kernel: node 0: [mem 0x00000000ebec7000-0x00000000ebec9fff] May 14 01:06:32.176865 kernel: node 0: [mem 0x00000000ebeca000-0x00000000ebeccfff] May 14 01:06:32.176871 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] May 14 01:06:32.176878 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] May 14 01:06:32.176884 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] May 14 01:06:32.176890 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] May 14 01:06:32.176896 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] May 14 01:06:32.176904 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] May 14 01:06:32.176910 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] May 14 01:06:32.176916 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] May 14 01:06:32.176923 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] May 14 01:06:32.176929 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] May 14 01:06:32.176935 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] May 14 01:06:32.176941 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] May 14 01:06:32.176948 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] May 14 01:06:32.176954 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] May 14 01:06:32.176960 kernel: On node 0, zone DMA: 768 pages in unavailable ranges May 14 01:06:32.176967 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges May 14 01:06:32.176974 kernel: psci: probing for conduit method from ACPI. May 14 01:06:32.177005 kernel: psci: PSCIv1.1 detected in firmware. May 14 01:06:32.177011 kernel: psci: Using standard PSCI v0.2 function IDs May 14 01:06:32.177018 kernel: psci: MIGRATE_INFO_TYPE not supported. 
May 14 01:06:32.177024 kernel: psci: SMC Calling Convention v1.2 May 14 01:06:32.177030 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 May 14 01:06:32.177037 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 May 14 01:06:32.177043 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 May 14 01:06:32.177050 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 May 14 01:06:32.177056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 May 14 01:06:32.177062 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 May 14 01:06:32.177069 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 May 14 01:06:32.177077 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 May 14 01:06:32.177083 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 May 14 01:06:32.177089 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 May 14 01:06:32.177096 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 May 14 01:06:32.177102 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 May 14 01:06:32.177108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 May 14 01:06:32.177114 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 May 14 01:06:32.177121 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 May 14 01:06:32.177127 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 May 14 01:06:32.177133 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 May 14 01:06:32.177139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 May 14 01:06:32.177146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 May 14 01:06:32.177153 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 May 14 01:06:32.177160 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 May 14 01:06:32.177166 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 May 14 01:06:32.177172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 May 14 01:06:32.177178 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 May 14 01:06:32.177185 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 May 14 01:06:32.177191 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 May 14 01:06:32.177197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 May 14 01:06:32.177203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 May 14 01:06:32.177210 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 May 14 01:06:32.177216 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 May 14 01:06:32.177223 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 May 14 01:06:32.177230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 May 14 01:06:32.177236 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 May 14 01:06:32.177242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 May 14 01:06:32.177249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 May 14 01:06:32.177255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 May 14 01:06:32.177261 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 May 14 01:06:32.177267 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 May 14 01:06:32.177274 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0 May 14 01:06:32.177280 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 May 14 01:06:32.177286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 May 14 01:06:32.177293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 May 14 01:06:32.177300 kernel: ACPI: 
NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 May 14 01:06:32.177307 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 May 14 01:06:32.177313 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 May 14 01:06:32.177319 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 May 14 01:06:32.177326 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 May 14 01:06:32.177332 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 May 14 01:06:32.177338 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 May 14 01:06:32.177345 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 May 14 01:06:32.177357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 May 14 01:06:32.177364 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 May 14 01:06:32.177372 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 May 14 01:06:32.177379 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 May 14 01:06:32.177385 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 May 14 01:06:32.177392 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 May 14 01:06:32.177399 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 May 14 01:06:32.177405 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 May 14 01:06:32.177413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 May 14 01:06:32.177420 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 May 14 01:06:32.177427 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 May 14 01:06:32.177433 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 May 14 01:06:32.177440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 May 14 01:06:32.177446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 May 14 01:06:32.177453 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 May 14 01:06:32.177460 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 May 14 01:06:32.177466 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 May 14 01:06:32.177473 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 May 14 01:06:32.177480 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 May 14 01:06:32.177486 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 May 14 01:06:32.177494 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 May 14 01:06:32.177501 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 May 14 01:06:32.177507 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 May 14 01:06:32.177514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 May 14 01:06:32.177521 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 May 14 01:06:32.177527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 May 14 01:06:32.177534 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 May 14 01:06:32.177541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 May 14 01:06:32.177547 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 May 14 01:06:32.177554 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 May 14 01:06:32.177561 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 14 01:06:32.177569 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 14 01:06:32.177576 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 May 14 01:06:32.177583 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 May 14 01:06:32.177589 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 
[0] 19 [0] 20 [0] 21 [0] 22 [0] 23 May 14 01:06:32.177596 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 May 14 01:06:32.177603 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 May 14 01:06:32.177610 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 May 14 01:06:32.177616 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 May 14 01:06:32.177623 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 May 14 01:06:32.177630 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 May 14 01:06:32.177636 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 May 14 01:06:32.177644 kernel: Detected PIPT I-cache on CPU0 May 14 01:06:32.177651 kernel: CPU features: detected: GIC system register CPU interface May 14 01:06:32.177658 kernel: CPU features: detected: Virtualization Host Extensions May 14 01:06:32.177665 kernel: CPU features: detected: Hardware dirty bit management May 14 01:06:32.177671 kernel: CPU features: detected: Spectre-v4 May 14 01:06:32.177678 kernel: CPU features: detected: Spectre-BHB May 14 01:06:32.177685 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 01:06:32.177691 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 01:06:32.177698 kernel: CPU features: detected: ARM erratum 1418040 May 14 01:06:32.177705 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 01:06:32.177712 kernel: alternatives: applying boot alternatives May 14 01:06:32.177720 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 14 01:06:32.177728 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 01:06:32.177735 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 14 01:06:32.177742 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes May 14 01:06:32.177748 kernel: printk: log_buf_len min size: 262144 bytes May 14 01:06:32.177755 kernel: printk: log_buf_len: 1048576 bytes May 14 01:06:32.177762 kernel: printk: early log buf free: 249864(95%) May 14 01:06:32.177768 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) May 14 01:06:32.177775 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) May 14 01:06:32.177782 kernel: Fallback order for Node 0: 0 May 14 01:06:32.177789 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028 May 14 01:06:32.177797 kernel: Policy zone: Normal May 14 01:06:32.177804 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 01:06:32.177811 kernel: software IO TLB: area num 128. 
May 14 01:06:32.177818 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) May 14 01:06:32.177825 kernel: Memory: 262923292K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 5251044K reserved, 0K cma-reserved) May 14 01:06:32.177832 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 May 14 01:06:32.177838 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 01:06:32.177845 kernel: rcu: RCU event tracing is enabled. May 14 01:06:32.177852 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. May 14 01:06:32.177859 kernel: Trampoline variant of Tasks RCU enabled. May 14 01:06:32.177866 kernel: Tracing variant of Tasks RCU enabled. May 14 01:06:32.177873 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 01:06:32.177881 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 May 14 01:06:32.177888 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 01:06:32.177895 kernel: GICv3: GIC: Using split EOI/Deactivate mode May 14 01:06:32.177902 kernel: GICv3: 672 SPIs implemented May 14 01:06:32.177908 kernel: GICv3: 0 Extended SPIs implemented May 14 01:06:32.177915 kernel: Root IRQ handler: gic_handle_irq May 14 01:06:32.177922 kernel: GICv3: GICv3 features: 16 PPIs May 14 01:06:32.177928 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 May 14 01:06:32.177935 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 May 14 01:06:32.177942 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 May 14 01:06:32.177948 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 May 14 01:06:32.177955 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 May 14 01:06:32.177963 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 May 14 01:06:32.177970 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 May 14 01:06:32.178008 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 May 14 01:06:32.178015 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 May 14 01:06:32.178022 kernel: ITS [mem 0x100100040000-0x10010005ffff] May 14 01:06:32.178029 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178036 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178043 kernel: ITS [mem 0x100100060000-0x10010007ffff] May 14 01:06:32.178050 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178057 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178063 kernel: ITS [mem 0x100100080000-0x10010009ffff] May 14 01:06:32.178072 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178079 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178086 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] May 14 01:06:32.178093 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178100 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178106 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] May 14 01:06:32.178113 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178120 kernel: ITS@0x00001001000c0000: allocated 32768 
Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178127 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] May 14 01:06:32.178133 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178140 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178148 kernel: ITS [mem 0x100100100000-0x10010011ffff] May 14 01:06:32.178155 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178162 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178169 kernel: ITS [mem 0x100100120000-0x10010013ffff] May 14 01:06:32.178175 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1) May 14 01:06:32.178182 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1) May 14 01:06:32.178189 kernel: GICv3: using LPI property table @0x00000800003e0000 May 14 01:06:32.178196 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000 May 14 01:06:32.178203 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 01:06:32.178209 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178216 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). May 14 01:06:32.178224 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). May 14 01:06:32.178231 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 01:06:32.178238 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 01:06:32.178245 kernel: Console: colour dummy device 80x25 May 14 01:06:32.178252 kernel: printk: console [tty0] enabled May 14 01:06:32.178259 kernel: ACPI: Core revision 20230628 May 14 01:06:32.178266 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 01:06:32.178273 kernel: pid_max: default: 81920 minimum: 640 May 14 01:06:32.178280 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 01:06:32.178287 kernel: landlock: Up and running. May 14 01:06:32.178295 kernel: SELinux: Initializing. May 14 01:06:32.178302 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 01:06:32.178309 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 01:06:32.178316 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 14 01:06:32.178323 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. May 14 01:06:32.178330 kernel: rcu: Hierarchical SRCU implementation. May 14 01:06:32.178337 kernel: rcu: Max phase no-delay instances is 400. 
May 14 01:06:32.178344 kernel: Platform MSI: ITS@0x100100040000 domain created May 14 01:06:32.178351 kernel: Platform MSI: ITS@0x100100060000 domain created May 14 01:06:32.178359 kernel: Platform MSI: ITS@0x100100080000 domain created May 14 01:06:32.178366 kernel: Platform MSI: ITS@0x1001000a0000 domain created May 14 01:06:32.178373 kernel: Platform MSI: ITS@0x1001000c0000 domain created May 14 01:06:32.178380 kernel: Platform MSI: ITS@0x1001000e0000 domain created May 14 01:06:32.178387 kernel: Platform MSI: ITS@0x100100100000 domain created May 14 01:06:32.178393 kernel: Platform MSI: ITS@0x100100120000 domain created May 14 01:06:32.178401 kernel: PCI/MSI: ITS@0x100100040000 domain created May 14 01:06:32.178407 kernel: PCI/MSI: ITS@0x100100060000 domain created May 14 01:06:32.178414 kernel: PCI/MSI: ITS@0x100100080000 domain created May 14 01:06:32.178422 kernel: PCI/MSI: ITS@0x1001000a0000 domain created May 14 01:06:32.178429 kernel: PCI/MSI: ITS@0x1001000c0000 domain created May 14 01:06:32.178436 kernel: PCI/MSI: ITS@0x1001000e0000 domain created May 14 01:06:32.178443 kernel: PCI/MSI: ITS@0x100100100000 domain created May 14 01:06:32.178449 kernel: PCI/MSI: ITS@0x100100120000 domain created May 14 01:06:32.178456 kernel: Remapping and enabling EFI services. May 14 01:06:32.178463 kernel: smp: Bringing up secondary CPUs ... May 14 01:06:32.178470 kernel: Detected PIPT I-cache on CPU1 May 14 01:06:32.178477 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 May 14 01:06:32.178484 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000 May 14 01:06:32.178492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178499 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] May 14 01:06:32.178506 kernel: Detected PIPT I-cache on CPU2 May 14 01:06:32.178513 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 May 14 01:06:32.178520 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000 May 14 01:06:32.178527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178533 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] May 14 01:06:32.178540 kernel: Detected PIPT I-cache on CPU3 May 14 01:06:32.178547 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 May 14 01:06:32.178555 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000 May 14 01:06:32.178562 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178569 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] May 14 01:06:32.178576 kernel: Detected PIPT I-cache on CPU4 May 14 01:06:32.178583 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 May 14 01:06:32.178590 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000 May 14 01:06:32.178596 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178603 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] May 14 01:06:32.178610 kernel: Detected PIPT I-cache on CPU5 May 14 01:06:32.178616 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 May 14 01:06:32.178625 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000 May 14 01:06:32.178631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178638 kernel: CPU5: Booted secondary processor 0x0000180000 
[0x413fd0c1] May 14 01:06:32.178645 kernel: Detected PIPT I-cache on CPU6 May 14 01:06:32.178652 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 May 14 01:06:32.178659 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000 May 14 01:06:32.178666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178672 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] May 14 01:06:32.178679 kernel: Detected PIPT I-cache on CPU7 May 14 01:06:32.178688 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 May 14 01:06:32.178695 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000 May 14 01:06:32.178701 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178708 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] May 14 01:06:32.178715 kernel: Detected PIPT I-cache on CPU8 May 14 01:06:32.178722 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 May 14 01:06:32.178728 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000 May 14 01:06:32.178735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178742 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] May 14 01:06:32.178749 kernel: Detected PIPT I-cache on CPU9 May 14 01:06:32.178757 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 May 14 01:06:32.178764 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000 May 14 01:06:32.178771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178777 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] May 14 01:06:32.178784 kernel: Detected PIPT I-cache on CPU10 May 14 01:06:32.178791 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 May 14 01:06:32.178798 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000 May 14 01:06:32.178805 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178811 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] May 14 01:06:32.178820 kernel: Detected PIPT I-cache on CPU11 May 14 01:06:32.178827 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 May 14 01:06:32.178834 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000 May 14 01:06:32.178840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178847 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] May 14 01:06:32.178854 kernel: Detected PIPT I-cache on CPU12 May 14 01:06:32.178861 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 May 14 01:06:32.178868 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000 May 14 01:06:32.178875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178881 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] May 14 01:06:32.178890 kernel: Detected PIPT I-cache on CPU13 May 14 01:06:32.178897 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 May 14 01:06:32.178904 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000 May 14 01:06:32.178911 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178917 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] 
May 14 01:06:32.178924 kernel: Detected PIPT I-cache on CPU14 May 14 01:06:32.178931 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 May 14 01:06:32.178938 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000 May 14 01:06:32.178945 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178954 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] May 14 01:06:32.178960 kernel: Detected PIPT I-cache on CPU15 May 14 01:06:32.178967 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 May 14 01:06:32.178974 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000 May 14 01:06:32.178983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.178990 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] May 14 01:06:32.178997 kernel: Detected PIPT I-cache on CPU16 May 14 01:06:32.179004 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 May 14 01:06:32.179011 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000 May 14 01:06:32.179028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179036 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] May 14 01:06:32.179043 kernel: Detected PIPT I-cache on CPU17 May 14 01:06:32.179051 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000 May 14 01:06:32.179058 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000 May 14 01:06:32.179065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179072 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] May 14 01:06:32.179079 kernel: Detected PIPT I-cache on CPU18 May 14 01:06:32.179086 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 May 14 01:06:32.179094 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000 May 14 01:06:32.179103 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179110 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] May 14 01:06:32.179117 kernel: Detected PIPT I-cache on CPU19 May 14 01:06:32.179124 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 May 14 01:06:32.179131 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000 May 14 01:06:32.179139 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179148 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] May 14 01:06:32.179155 kernel: Detected PIPT I-cache on CPU20 May 14 01:06:32.179163 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 May 14 01:06:32.179170 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000 May 14 01:06:32.179177 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179184 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] May 14 01:06:32.179191 kernel: Detected PIPT I-cache on CPU21 May 14 01:06:32.179198 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 May 14 01:06:32.179206 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000 May 14 01:06:32.179215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179222 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] May 
14 01:06:32.179229 kernel: Detected PIPT I-cache on CPU22 May 14 01:06:32.179236 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 May 14 01:06:32.179243 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000 May 14 01:06:32.179251 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179258 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] May 14 01:06:32.179265 kernel: Detected PIPT I-cache on CPU23 May 14 01:06:32.179272 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 May 14 01:06:32.179280 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000 May 14 01:06:32.179288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179296 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] May 14 01:06:32.179303 kernel: Detected PIPT I-cache on CPU24 May 14 01:06:32.179310 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 May 14 01:06:32.179317 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000 May 14 01:06:32.179324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179331 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] May 14 01:06:32.179339 kernel: Detected PIPT I-cache on CPU25 May 14 01:06:32.179346 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000 May 14 01:06:32.179354 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000 May 14 01:06:32.179363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179372 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] May 14 01:06:32.179379 kernel: Detected PIPT I-cache on CPU26 May 14 01:06:32.179386 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 May 14 01:06:32.179393 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000 May 14 01:06:32.179400 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179407 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] May 14 01:06:32.179415 kernel: Detected PIPT I-cache on CPU27 May 14 01:06:32.179423 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 May 14 01:06:32.179430 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000 May 14 01:06:32.179438 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179445 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] May 14 01:06:32.179452 kernel: Detected PIPT I-cache on CPU28 May 14 01:06:32.179459 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 14 01:06:32.179466 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 14 01:06:32.179474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179481 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 14 01:06:32.179488 kernel: Detected PIPT I-cache on CPU29 May 14 01:06:32.179496 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 14 01:06:32.179504 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 14 01:06:32.179511 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179518 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] 
May 14 01:06:32.179525 kernel: Detected PIPT I-cache on CPU30 May 14 01:06:32.179533 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 14 01:06:32.179540 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 14 01:06:32.179547 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179554 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 14 01:06:32.179563 kernel: Detected PIPT I-cache on CPU31 May 14 01:06:32.179570 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 14 01:06:32.179577 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 14 01:06:32.179585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179592 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 14 01:06:32.179599 kernel: Detected PIPT I-cache on CPU32 May 14 01:06:32.179606 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 14 01:06:32.179614 kernel: GICv3: CPU32: using allocated LPI pending table @0x00000800009f0000 May 14 01:06:32.179621 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179628 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 14 01:06:32.179636 kernel: Detected PIPT I-cache on CPU33 May 14 01:06:32.179644 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 14 01:06:32.179651 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 14 01:06:32.179658 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179665 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 14 01:06:32.179672 kernel: Detected PIPT I-cache on CPU34 May 14 01:06:32.179679 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 14 01:06:32.179687 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 14 01:06:32.179694 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179703 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 14 01:06:32.179710 kernel: Detected PIPT I-cache on CPU35 May 14 01:06:32.179717 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 14 01:06:32.179724 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 14 01:06:32.179731 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179738 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 14 01:06:32.179745 kernel: Detected PIPT I-cache on CPU36 May 14 01:06:32.179752 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 14 01:06:32.179760 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 14 01:06:32.179767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179775 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 14 01:06:32.179783 kernel: Detected PIPT I-cache on CPU37 May 14 01:06:32.179790 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 14 01:06:32.179797 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 14 01:06:32.179804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179811 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] 
May 14 01:06:32.179818 kernel: Detected PIPT I-cache on CPU38 May 14 01:06:32.179825 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 14 01:06:32.179833 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 14 01:06:32.179841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179849 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 14 01:06:32.179856 kernel: Detected PIPT I-cache on CPU39 May 14 01:06:32.179864 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 14 01:06:32.179872 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 14 01:06:32.179879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179886 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 14 01:06:32.179893 kernel: Detected PIPT I-cache on CPU40 May 14 01:06:32.179902 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 14 01:06:32.179909 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 14 01:06:32.179917 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179924 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 14 01:06:32.179931 kernel: Detected PIPT I-cache on CPU41 May 14 01:06:32.179938 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 14 01:06:32.179945 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000a80000 May 14 01:06:32.179953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179960 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 14 01:06:32.179967 kernel: Detected PIPT I-cache on CPU42 May 14 01:06:32.179975 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 14 01:06:32.179985 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 14 01:06:32.179992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.179999 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 14 01:06:32.180006 kernel: Detected PIPT I-cache on CPU43 May 14 01:06:32.180014 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 14 01:06:32.180021 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 14 01:06:32.180028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180035 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 14 01:06:32.180044 kernel: Detected PIPT I-cache on CPU44 May 14 01:06:32.180051 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 14 01:06:32.180058 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 14 01:06:32.180066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180073 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 14 01:06:32.180080 kernel: Detected PIPT I-cache on CPU45 May 14 01:06:32.180087 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 14 01:06:32.180095 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 14 01:06:32.180102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180109 kernel: CPU45: Booted secondary processor 0x0000180100 
[0x413fd0c1] May 14 01:06:32.180118 kernel: Detected PIPT I-cache on CPU46 May 14 01:06:32.180125 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 14 01:06:32.180132 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 14 01:06:32.180140 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180147 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 14 01:06:32.180154 kernel: Detected PIPT I-cache on CPU47 May 14 01:06:32.180161 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 14 01:06:32.180169 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 14 01:06:32.180176 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180184 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 14 01:06:32.180192 kernel: Detected PIPT I-cache on CPU48 May 14 01:06:32.180199 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 14 01:06:32.180206 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 14 01:06:32.180213 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180220 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 14 01:06:32.180227 kernel: Detected PIPT I-cache on CPU49 May 14 01:06:32.180234 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 14 01:06:32.180242 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 14 01:06:32.180250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180257 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 14 01:06:32.180265 kernel: Detected PIPT I-cache on CPU50 May 14 01:06:32.180272 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 14 01:06:32.180279 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000b10000 May 14 01:06:32.180286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180294 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 14 01:06:32.180302 kernel: Detected PIPT I-cache on CPU51 May 14 01:06:32.180309 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 14 01:06:32.180317 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 14 01:06:32.180325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180332 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 14 01:06:32.180339 kernel: Detected PIPT I-cache on CPU52 May 14 01:06:32.180347 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 14 01:06:32.180354 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 14 01:06:32.180361 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180368 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 14 01:06:32.180376 kernel: Detected PIPT I-cache on CPU53 May 14 01:06:32.180383 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 14 01:06:32.180392 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 14 01:06:32.180399 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180406 kernel: CPU53: Booted secondary processor 
0x0000200100 [0x413fd0c1] May 14 01:06:32.180413 kernel: Detected PIPT I-cache on CPU54 May 14 01:06:32.180420 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 14 01:06:32.180428 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 14 01:06:32.180435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180442 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] May 14 01:06:32.180449 kernel: Detected PIPT I-cache on CPU55 May 14 01:06:32.180456 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 14 01:06:32.180465 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 14 01:06:32.180472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180479 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 14 01:06:32.180486 kernel: Detected PIPT I-cache on CPU56 May 14 01:06:32.180494 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 14 01:06:32.180501 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 14 01:06:32.180508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180515 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 14 01:06:32.180522 kernel: Detected PIPT I-cache on CPU57 May 14 01:06:32.180531 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 14 01:06:32.180538 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 14 01:06:32.180545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180552 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 14 01:06:32.180560 kernel: Detected PIPT I-cache on CPU58 May 14 01:06:32.180567 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 14 01:06:32.180574 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 14 01:06:32.180581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180589 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 14 01:06:32.180596 kernel: Detected PIPT I-cache on CPU59 May 14 01:06:32.180604 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 14 01:06:32.180612 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000ba0000 May 14 01:06:32.180619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180626 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 14 01:06:32.180633 kernel: Detected PIPT I-cache on CPU60 May 14 01:06:32.180640 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 14 01:06:32.180648 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 14 01:06:32.180655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180662 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 14 01:06:32.180670 kernel: Detected PIPT I-cache on CPU61 May 14 01:06:32.180678 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 14 01:06:32.180685 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 14 01:06:32.180693 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180700 kernel: CPU61: Booted secondary processor 
0x00001b0100 [0x413fd0c1] May 14 01:06:32.180707 kernel: Detected PIPT I-cache on CPU62 May 14 01:06:32.180714 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 14 01:06:32.180721 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 14 01:06:32.180729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180736 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 14 01:06:32.180745 kernel: Detected PIPT I-cache on CPU63 May 14 01:06:32.180752 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 14 01:06:32.180759 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 14 01:06:32.180767 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180774 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] May 14 01:06:32.180781 kernel: Detected PIPT I-cache on CPU64 May 14 01:06:32.180788 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 14 01:06:32.180795 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 14 01:06:32.180802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180811 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 14 01:06:32.180818 kernel: Detected PIPT I-cache on CPU65 May 14 01:06:32.180825 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 14 01:06:32.180833 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 14 01:06:32.180840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180847 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 14 01:06:32.180854 kernel: Detected PIPT I-cache on CPU66 May 14 01:06:32.180861 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 14 01:06:32.180868 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 14 01:06:32.180877 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180884 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 14 01:06:32.180892 kernel: Detected PIPT I-cache on CPU67 May 14 01:06:32.180899 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 14 01:06:32.180906 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 14 01:06:32.180913 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180920 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 14 01:06:32.180928 kernel: Detected PIPT I-cache on CPU68 May 14 01:06:32.180935 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 14 01:06:32.180942 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000c30000 May 14 01:06:32.180951 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.180958 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 14 01:06:32.180965 kernel: Detected PIPT I-cache on CPU69 May 14 01:06:32.180973 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 14 01:06:32.180998 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 14 01:06:32.181006 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181013 kernel: CPU69: Booted secondary 
processor 0x0000230100 [0x413fd0c1] May 14 01:06:32.181021 kernel: Detected PIPT I-cache on CPU70 May 14 01:06:32.181028 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 14 01:06:32.181037 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 14 01:06:32.181045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181052 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 14 01:06:32.181059 kernel: Detected PIPT I-cache on CPU71 May 14 01:06:32.181067 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 14 01:06:32.181074 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 14 01:06:32.181081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181088 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 14 01:06:32.181096 kernel: Detected PIPT I-cache on CPU72 May 14 01:06:32.181103 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 14 01:06:32.181111 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 14 01:06:32.181119 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181126 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] May 14 01:06:32.181133 kernel: Detected PIPT I-cache on CPU73 May 14 01:06:32.181140 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 14 01:06:32.181147 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 14 01:06:32.181154 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181162 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 14 01:06:32.181169 kernel: Detected PIPT I-cache on CPU74 May 14 01:06:32.181177 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 14 01:06:32.181185 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 14 01:06:32.181192 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181199 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 14 01:06:32.181206 kernel: Detected PIPT I-cache on CPU75 May 14 01:06:32.181213 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 14 01:06:32.181221 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 14 01:06:32.181228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181235 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 14 01:06:32.181242 kernel: Detected PIPT I-cache on CPU76 May 14 01:06:32.181251 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 14 01:06:32.181258 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 14 01:06:32.181266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181273 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 14 01:06:32.181280 kernel: Detected PIPT I-cache on CPU77 May 14 01:06:32.181287 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 14 01:06:32.181294 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 14 01:06:32.181302 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181309 kernel: CPU77: Booted secondary 
processor 0x0000050100 [0x413fd0c1] May 14 01:06:32.181317 kernel: Detected PIPT I-cache on CPU78 May 14 01:06:32.181325 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 14 01:06:32.181332 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 14 01:06:32.181339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181346 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 14 01:06:32.181353 kernel: Detected PIPT I-cache on CPU79 May 14 01:06:32.181360 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 14 01:06:32.181368 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 14 01:06:32.181375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 01:06:32.181382 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 14 01:06:32.181391 kernel: smp: Brought up 1 node, 80 CPUs May 14 01:06:32.181398 kernel: SMP: Total of 80 processors activated. May 14 01:06:32.181405 kernel: CPU features: detected: 32-bit EL0 Support May 14 01:06:32.181413 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 01:06:32.181420 kernel: CPU features: detected: Common not Private translations May 14 01:06:32.181427 kernel: CPU features: detected: CRC32 instructions May 14 01:06:32.181434 kernel: CPU features: detected: Enhanced Virtualization Traps May 14 01:06:32.181442 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 01:06:32.181449 kernel: CPU features: detected: LSE atomic instructions May 14 01:06:32.181457 kernel: CPU features: detected: Privileged Access Never May 14 01:06:32.181464 kernel: CPU features: detected: RAS Extension Support May 14 01:06:32.181472 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 01:06:32.181479 kernel: CPU: All CPU(s) started at EL2 May 14 01:06:32.181486 kernel: alternatives: applying system-wide alternatives May 14 01:06:32.181493 kernel: devtmpfs: initialized May 14 01:06:32.181500 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 01:06:32.181508 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 14 01:06:32.181515 kernel: pinctrl core: initialized pinctrl subsystem May 14 01:06:32.181523 kernel: SMBIOS 3.4.0 present. May 14 01:06:32.181531 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 14 01:06:32.181538 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 01:06:32.181545 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 14 01:06:32.181553 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 01:06:32.181560 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 01:06:32.181567 kernel: audit: initializing netlink subsys (disabled) May 14 01:06:32.181574 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1 May 14 01:06:32.181583 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 01:06:32.181590 kernel: cpuidle: using governor menu May 14 01:06:32.181597 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
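The CPU feature messages above (CRC32 instructions, LSE atomics, SSBS and so on) are also exported to userspace through the aarch64 HWCAP auxiliary vector. As a minimal sketch, assuming an aarch64 system where <asm/hwcap.h> defines the HWCAP_* bits, the following program re-checks three of the features this log reports:

/* Hedged sketch: re-check from userspace some of the CPU features the
 * kernel detected above (CRC32 instructions, LSE atomics, SSBS) via the
 * aarch64 HWCAP auxiliary vector.  Assumes an aarch64 system where
 * <asm/hwcap.h> provides the HWCAP_* bits. */
#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    printf("CRC32 instructions: %s\n", (hwcap & HWCAP_CRC32)   ? "yes" : "no");
    printf("LSE atomics:        %s\n", (hwcap & HWCAP_ATOMICS) ? "yes" : "no");
    printf("SSBS:               %s\n", (hwcap & HWCAP_SSBS)    ? "yes" : "no");
    return 0;
}
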
May 14 01:06:32.181604 kernel: ASID allocator initialised with 32768 entries May 14 01:06:32.181612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 01:06:32.181619 kernel: Serial: AMBA PL011 UART driver May 14 01:06:32.181626 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 14 01:06:32.181633 kernel: Modules: 0 pages in range for non-PLT usage May 14 01:06:32.181640 kernel: Modules: 509232 pages in range for PLT usage May 14 01:06:32.181647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 01:06:32.181656 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 01:06:32.181663 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 01:06:32.181671 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 01:06:32.181678 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 01:06:32.181685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 01:06:32.181692 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 14 01:06:32.181700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 01:06:32.181707 kernel: ACPI: Added _OSI(Module Device) May 14 01:06:32.181714 kernel: ACPI: Added _OSI(Processor Device) May 14 01:06:32.181722 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 01:06:32.181730 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 01:06:32.181737 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 14 01:06:32.181744 kernel: ACPI: Interpreter enabled May 14 01:06:32.181751 kernel: ACPI: Using GIC for interrupt routing May 14 01:06:32.181758 kernel: ACPI: MCFG table detected, 8 entries May 14 01:06:32.181766 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181773 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181780 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181789 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181796 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181803 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181810 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181817 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 14 01:06:32.181825 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 14 01:06:32.181832 kernel: printk: console [ttyAMA0] enabled May 14 01:06:32.181840 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 14 01:06:32.181848 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 14 01:06:32.181985 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.182058 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.182120 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.182179 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.182237 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 14 01:06:32.182295 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 
00-ff] May 14 01:06:32.182307 kernel: PCI host bridge to bus 000d:00 May 14 01:06:32.182378 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 14 01:06:32.182435 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 14 01:06:32.182489 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 14 01:06:32.182566 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 14 01:06:32.182637 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 14 01:06:32.182705 kernel: pci 000d:00:01.0: enabling Extended Tags May 14 01:06:32.182768 kernel: pci 000d:00:01.0: supports D1 D2 May 14 01:06:32.182831 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.182900 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 14 01:06:32.182964 kernel: pci 000d:00:02.0: supports D1 D2 May 14 01:06:32.183033 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 14 01:06:32.183103 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 14 01:06:32.183169 kernel: pci 000d:00:03.0: supports D1 D2 May 14 01:06:32.183231 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.183302 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 14 01:06:32.183364 kernel: pci 000d:00:04.0: supports D1 D2 May 14 01:06:32.183427 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 14 01:06:32.183436 kernel: acpiphp: Slot [1] registered May 14 01:06:32.183443 kernel: acpiphp: Slot [2] registered May 14 01:06:32.183453 kernel: acpiphp: Slot [3] registered May 14 01:06:32.183460 kernel: acpiphp: Slot [4] registered May 14 01:06:32.183514 kernel: pci_bus 000d:00: on NUMA node 0 May 14 01:06:32.183578 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.183641 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.183703 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.183767 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.183829 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.183893 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.183955 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.184023 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.184085 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.184148 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.184210 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.184271 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.184337 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 14 01:06:32.184399 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 01:06:32.184461 kernel: pci 000d:00:02.0: 
BAR 14: assigned [mem 0x50200000-0x503fffff] May 14 01:06:32.184522 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 01:06:32.184584 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 14 01:06:32.184646 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 01:06:32.184707 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 14 01:06:32.184772 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 01:06:32.184833 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.184895 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.184956 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185025 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185090 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185154 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185221 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185285 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185348 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185410 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185473 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185534 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185596 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185658 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185721 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.185786 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.185849 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.185912 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 14 01:06:32.185972 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 01:06:32.186040 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 14 01:06:32.186101 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 14 01:06:32.186164 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 01:06:32.186229 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 14 01:06:32.186290 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 14 01:06:32.186353 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 01:06:32.186414 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 14 01:06:32.186478 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 14 01:06:32.186540 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 01:06:32.186600 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 14 01:06:32.186655 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 14 01:06:32.186723 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 14 01:06:32.186783 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 14 01:06:32.186849 kernel: pci_bus 000d:02: resource 1 [mem 
0x50200000-0x503fffff] May 14 01:06:32.186909 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 14 01:06:32.186987 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 14 01:06:32.187047 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 14 01:06:32.187111 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 14 01:06:32.187170 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 14 01:06:32.187179 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) May 14 01:06:32.187250 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.187314 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.187375 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.187434 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.187495 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 14 01:06:32.187555 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 14 01:06:32.187565 kernel: PCI host bridge to bus 0000:00 May 14 01:06:32.187627 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 14 01:06:32.187686 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 14 01:06:32.187741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 01:06:32.187811 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 14 01:06:32.187882 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 14 01:06:32.187947 kernel: pci 0000:00:01.0: enabling Extended Tags May 14 01:06:32.188013 kernel: pci 0000:00:01.0: supports D1 D2 May 14 01:06:32.188076 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.188146 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 14 01:06:32.188211 kernel: pci 0000:00:02.0: supports D1 D2 May 14 01:06:32.188273 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 14 01:06:32.188341 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 14 01:06:32.188405 kernel: pci 0000:00:03.0: supports D1 D2 May 14 01:06:32.188467 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.188535 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 14 01:06:32.188600 kernel: pci 0000:00:04.0: supports D1 D2 May 14 01:06:32.188662 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 14 01:06:32.188671 kernel: acpiphp: Slot [1-1] registered May 14 01:06:32.188679 kernel: acpiphp: Slot [2-1] registered May 14 01:06:32.188686 kernel: acpiphp: Slot [3-1] registered May 14 01:06:32.188693 kernel: acpiphp: Slot [4-1] registered May 14 01:06:32.188749 kernel: pci_bus 0000:00: on NUMA node 0 May 14 01:06:32.188810 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.188875 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.188938 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.189004 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.189066 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.189129 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.189191 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.189253 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.189318 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.189380 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.189442 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.189503 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.189565 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 14 01:06:32.189628 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 14 01:06:32.189690 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 14 01:06:32.189755 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 01:06:32.189818 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 14 01:06:32.189882 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 01:06:32.189943 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 14 01:06:32.190010 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 01:06:32.190071 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190134 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190194 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190259 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190321 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190382 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190444 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190505 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190567 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190629 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190691 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190758 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190819 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.190882 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.190943 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.191008 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.191069 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.191132 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 14 01:06:32.191194 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] 
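The "BAR 14/15: assigned" and "BAR 13: no space for [io size 0x1000]" messages above show the bridge memory windows being placed while the optional legacy I/O windows fail, which is consistent with the root bus resources listing only memory windows on these segments. After boot the same assignments can be read back from sysfs; the sketch below uses 0000:00:01.0 purely as an example address from this log, and the flag values match IORESOURCE_IO/IORESOURCE_MEM in include/linux/ioport.h:

/* Hedged sketch: list the resources of one PCI function from sysfs and
 * flag each one as I/O or memory, mirroring the BAR/window assignment
 * messages above.  "0000:00:01.0" is just an example address taken from
 * this log; the flag bits are the IORESOURCE_IO/IORESOURCE_MEM values
 * from include/linux/ioport.h. */
#include <stdio.h>
#include <inttypes.h>

#define IORESOURCE_IO  0x0000000000000100ULL
#define IORESOURCE_MEM 0x0000000000000200ULL

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:00:01.0/resource";
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    uint64_t start, end, flags;
    int idx = 0;
    while (fscanf(f, "%" SCNx64 " %" SCNx64 " %" SCNx64,
                  &start, &end, &flags) == 3) {
        if (flags)  /* unassigned entries read back as all zeroes */
            printf("resource %2d: 0x%012" PRIx64 "-0x%012" PRIx64 " [%s]\n",
                   idx, start, end,
                   (flags & IORESOURCE_IO)  ? "io"  :
                   (flags & IORESOURCE_MEM) ? "mem" : "other");
        idx++;
    }
    fclose(f);
    return 0;
}
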
May 14 01:06:32.191259 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 14 01:06:32.191323 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 14 01:06:32.191384 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 01:06:32.191447 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 14 01:06:32.191511 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 14 01:06:32.191574 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 01:06:32.191636 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 14 01:06:32.191698 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 14 01:06:32.191761 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 01:06:32.191817 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 14 01:06:32.191875 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 14 01:06:32.191942 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 14 01:06:32.192004 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 14 01:06:32.192070 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 14 01:06:32.192128 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 14 01:06:32.192202 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] May 14 01:06:32.192264 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 14 01:06:32.192332 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 14 01:06:32.192390 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 14 01:06:32.192400 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 14 01:06:32.192468 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.192528 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.192592 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.192651 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.192711 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 14 01:06:32.192771 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 14 01:06:32.192781 kernel: PCI host bridge to bus 0005:00 May 14 01:06:32.192843 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 14 01:06:32.192899 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 14 01:06:32.192956 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 14 01:06:32.193033 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 14 01:06:32.193104 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 14 01:06:32.193168 kernel: pci 0005:00:01.0: supports D1 D2 May 14 01:06:32.193230 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.193299 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 14 01:06:32.193362 kernel: pci 0005:00:03.0: supports D1 D2 May 14 01:06:32.193426 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.193497 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 14 01:06:32.193559 
kernel: pci 0005:00:05.0: supports D1 D2 May 14 01:06:32.193621 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 14 01:06:32.193689 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 14 01:06:32.193753 kernel: pci 0005:00:07.0: supports D1 D2 May 14 01:06:32.193815 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 14 01:06:32.193827 kernel: acpiphp: Slot [1-2] registered May 14 01:06:32.193834 kernel: acpiphp: Slot [2-2] registered May 14 01:06:32.193904 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 14 01:06:32.193970 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 14 01:06:32.194037 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 14 01:06:32.194108 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 14 01:06:32.194173 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 14 01:06:32.194239 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 14 01:06:32.194298 kernel: pci_bus 0005:00: on NUMA node 0 May 14 01:06:32.194360 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.194422 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.194484 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.194547 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.194608 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.194673 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.194736 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.194797 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.194859 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 14 01:06:32.194921 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.194989 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.195055 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 14 01:06:32.195119 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 14 01:06:32.195182 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 01:06:32.195245 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 14 01:06:32.195308 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 01:06:32.195370 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 14 01:06:32.195433 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 01:06:32.195494 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 14 01:06:32.195558 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 01:06:32.195619 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] 
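The two [144d:a808] functions enumerated above on domain 0005 report PCI class 0x010802 (NVMe). A small sketch to confirm such IDs from userspace, assuming the standard PCI sysfs "vendor" and "device" attributes and using 0005:03:00.0 only as an example address from this log:

/* Hedged sketch: confirm the vendor/device IDs the kernel logged for the
 * NVMe functions above (e.g. [144d:a808]) by reading the sysfs "vendor"
 * and "device" attributes.  "0005:03:00.0" is an example address taken
 * from this log and will differ on other machines. */
#include <stdio.h>

static int read_hex_attr(const char *attr, unsigned int *val)
{
    char path[96];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/0005:03:00.0/%s", attr);
    f = fopen(path, "r");
    if (!f) { perror(path); return -1; }
    int ok = (fscanf(f, "%x", val) == 1);
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    unsigned int vendor, device;

    if (read_hex_attr("vendor", &vendor) || read_hex_attr("device", &device))
        return 1;
    printf("0005:03:00.0: [%04x:%04x]\n", vendor, device);
    return 0;
}
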
May 14 01:06:32.195681 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.195743 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.195805 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.195867 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.195928 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.195993 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.196057 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.196119 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.196181 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.196242 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.196304 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.196365 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.196427 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.196488 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.196550 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.196613 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.196675 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 14 01:06:32.196737 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 01:06:32.196798 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 14 01:06:32.196860 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 14 01:06:32.196922 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 01:06:32.196993 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 14 01:06:32.197057 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 14 01:06:32.197119 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 14 01:06:32.197182 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 14 01:06:32.197243 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 01:06:32.197309 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 14 01:06:32.197372 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 14 01:06:32.197436 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 14 01:06:32.197498 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 14 01:06:32.197561 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 01:06:32.197619 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 14 01:06:32.197674 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 14 01:06:32.197745 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 14 01:06:32.197803 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 14 01:06:32.197879 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 14 01:06:32.197937 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 14 01:06:32.198006 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 14 01:06:32.198064 kernel: pci_bus 0005:03: resource 2 
[mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 14 01:06:32.198129 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 14 01:06:32.198190 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 14 01:06:32.198199 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 14 01:06:32.198267 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.198328 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.198388 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.198448 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.198507 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 14 01:06:32.198568 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 14 01:06:32.198578 kernel: PCI host bridge to bus 0003:00 May 14 01:06:32.198642 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 14 01:06:32.198697 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 14 01:06:32.198752 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 14 01:06:32.198822 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 14 01:06:32.198892 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 14 01:06:32.198956 kernel: pci 0003:00:01.0: supports D1 D2 May 14 01:06:32.199022 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.199092 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 14 01:06:32.199154 kernel: pci 0003:00:03.0: supports D1 D2 May 14 01:06:32.199217 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.199285 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 14 01:06:32.199351 kernel: pci 0003:00:05.0: supports D1 D2 May 14 01:06:32.199412 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 14 01:06:32.199421 kernel: acpiphp: Slot [1-3] registered May 14 01:06:32.199429 kernel: acpiphp: Slot [2-3] registered May 14 01:06:32.199500 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 14 01:06:32.199568 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 14 01:06:32.199633 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 14 01:06:32.199698 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 14 01:06:32.199764 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 14 01:06:32.199829 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref] May 14 01:06:32.199894 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 14 01:06:32.199958 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 14 01:06:32.200025 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 14 01:06:32.200092 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 14 01:06:32.200166 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 14 01:06:32.200234 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 14 01:06:32.200298 kernel: 
pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 14 01:06:32.200364 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 14 01:06:32.200430 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 14 01:06:32.200494 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 14 01:06:32.200557 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 14 01:06:32.200621 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 14 01:06:32.200688 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 14 01:06:32.200744 kernel: pci_bus 0003:00: on NUMA node 0 May 14 01:06:32.200812 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.200875 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.200939 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.201010 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.201074 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.201159 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.201228 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 14 01:06:32.201308 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 14 01:06:32.201371 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 14 01:06:32.201445 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 01:06:32.201509 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 14 01:06:32.201573 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 01:06:32.201635 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 14 01:06:32.201700 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 01:06:32.201763 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.201824 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.201888 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.201952 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.202021 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.202083 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.202145 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.202209 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.202271 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.202334 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.202395 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.202457 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 
01:06:32.202519 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.202582 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 14 01:06:32.202646 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 01:06:32.202710 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 14 01:06:32.202776 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 14 01:06:32.202839 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 01:06:32.202905 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 14 01:06:32.202971 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 14 01:06:32.203040 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 14 01:06:32.203109 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 14 01:06:32.203173 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 14 01:06:32.203239 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 14 01:06:32.203302 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 14 01:06:32.203369 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 14 01:06:32.203433 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 14 01:06:32.203497 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 14 01:06:32.203563 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 14 01:06:32.203627 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 14 01:06:32.203694 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 14 01:06:32.203759 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 14 01:06:32.203822 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 14 01:06:32.203887 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 14 01:06:32.203949 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 14 01:06:32.204014 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 14 01:06:32.204078 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 01:06:32.204137 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 14 01:06:32.204193 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 14 01:06:32.204249 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 14 01:06:32.204327 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 14 01:06:32.204387 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 14 01:06:32.204456 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 14 01:06:32.204515 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 14 01:06:32.204583 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 14 01:06:32.204641 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 14 01:06:32.204651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 14 01:06:32.204721 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.204788 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 
01:06:32.204849 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.204910 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.204970 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 14 01:06:32.205115 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 14 01:06:32.205126 kernel: PCI host bridge to bus 000c:00 May 14 01:06:32.205190 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 14 01:06:32.205250 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 14 01:06:32.205305 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 14 01:06:32.205376 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 14 01:06:32.205458 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 14 01:06:32.205525 kernel: pci 000c:00:01.0: enabling Extended Tags May 14 01:06:32.205589 kernel: pci 000c:00:01.0: supports D1 D2 May 14 01:06:32.205650 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.205722 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 14 01:06:32.205784 kernel: pci 000c:00:02.0: supports D1 D2 May 14 01:06:32.205846 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 14 01:06:32.205916 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 14 01:06:32.205985 kernel: pci 000c:00:03.0: supports D1 D2 May 14 01:06:32.206053 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.206123 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 14 01:06:32.206192 kernel: pci 000c:00:04.0: supports D1 D2 May 14 01:06:32.206255 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 14 01:06:32.206265 kernel: acpiphp: Slot [1-4] registered May 14 01:06:32.206273 kernel: acpiphp: Slot [2-4] registered May 14 01:06:32.206280 kernel: acpiphp: Slot [3-2] registered May 14 01:06:32.206288 kernel: acpiphp: Slot [4-2] registered May 14 01:06:32.206343 kernel: pci_bus 000c:00: on NUMA node 0 May 14 01:06:32.206405 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.206469 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.206531 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.206593 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.206655 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.206715 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.206777 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.206837 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.206901 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.206963 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.207030 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.207092 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.207153 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 14 01:06:32.207214 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 01:06:32.207274 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 14 01:06:32.207339 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 01:06:32.207400 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 14 01:06:32.207461 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 01:06:32.207523 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 14 01:06:32.207583 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 01:06:32.207643 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.207703 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.207765 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.207827 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.207889 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.207950 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208014 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.208074 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208134 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.208195 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208255 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.208316 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208379 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.208440 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208502 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.208563 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.208625 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.208685 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 14 01:06:32.208747 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 01:06:32.208810 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 14 01:06:32.208871 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 14 01:06:32.208933 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 01:06:32.209251 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 14 01:06:32.209320 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 14 01:06:32.209381 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 01:06:32.209443 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 14 01:06:32.209506 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 14 01:06:32.209567 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 
01:06:32.209623 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 14 01:06:32.209678 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 14 01:06:32.209745 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 14 01:06:32.209802 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 14 01:06:32.209876 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 14 01:06:32.209934 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 14 01:06:32.210001 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 14 01:06:32.210059 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 14 01:06:32.210124 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 14 01:06:32.210181 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 14 01:06:32.210193 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 14 01:06:32.210262 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.210323 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.210383 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.210442 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.210500 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 14 01:06:32.210559 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 14 01:06:32.210571 kernel: PCI host bridge to bus 0002:00 May 14 01:06:32.210633 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 14 01:06:32.210690 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 14 01:06:32.210743 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 14 01:06:32.210812 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 14 01:06:32.210880 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 14 01:06:32.210944 kernel: pci 0002:00:01.0: supports D1 D2 May 14 01:06:32.211012 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.211080 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 14 01:06:32.211142 kernel: pci 0002:00:03.0: supports D1 D2 May 14 01:06:32.211203 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.211271 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 14 01:06:32.211333 kernel: pci 0002:00:05.0: supports D1 D2 May 14 01:06:32.211394 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 14 01:06:32.211465 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 14 01:06:32.211526 kernel: pci 0002:00:07.0: supports D1 D2 May 14 01:06:32.211588 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 14 01:06:32.211598 kernel: acpiphp: Slot [1-5] registered May 14 01:06:32.211605 kernel: acpiphp: Slot [2-5] registered May 14 01:06:32.211613 kernel: acpiphp: Slot [3-3] registered May 14 01:06:32.211620 kernel: acpiphp: Slot [4-3] registered May 14 01:06:32.211674 kernel: pci_bus 0002:00: on NUMA node 0 May 14 01:06:32.211738 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.211799 kernel: pci 0002:00:01.0: bridge 
window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.211861 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 14 01:06:32.211924 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.211991 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.212053 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.212114 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.212175 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.212235 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.212297 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.212358 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.212421 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.212482 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 14 01:06:32.212543 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 01:06:32.212605 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 14 01:06:32.212667 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 01:06:32.212731 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 14 01:06:32.212792 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 01:06:32.212853 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 14 01:06:32.212916 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 01:06:32.212979 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213041 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213102 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213163 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213224 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213285 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213346 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213409 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213470 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213532 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213594 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213655 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213716 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.213776 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213836 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 
0x1000] May 14 01:06:32.213897 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.213960 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.214024 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 14 01:06:32.214085 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 01:06:32.214147 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 14 01:06:32.214208 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 14 01:06:32.214269 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 01:06:32.214333 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 14 01:06:32.214394 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 14 01:06:32.214456 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 01:06:32.214516 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 14 01:06:32.214579 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 14 01:06:32.214640 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 01:06:32.214698 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 14 01:06:32.214752 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 14 01:06:32.214819 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 14 01:06:32.214877 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 14 01:06:32.214949 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 14 01:06:32.215011 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 14 01:06:32.215075 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 14 01:06:32.215136 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 14 01:06:32.215199 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 14 01:06:32.215256 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 14 01:06:32.215266 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 14 01:06:32.215332 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.215393 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.215455 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.215514 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.215574 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 14 01:06:32.215633 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 14 01:06:32.215643 kernel: PCI host bridge to bus 0001:00 May 14 01:06:32.215705 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 14 01:06:32.215763 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 14 01:06:32.215817 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 14 01:06:32.215885 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 14 01:06:32.215955 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 14 01:06:32.216022 kernel: pci 0001:00:01.0: enabling Extended Tags May 14 01:06:32.216083 kernel: pci 
0001:00:01.0: supports D1 D2 May 14 01:06:32.216145 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.216214 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 14 01:06:32.216277 kernel: pci 0001:00:02.0: supports D1 D2 May 14 01:06:32.216338 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 14 01:06:32.216405 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 14 01:06:32.216467 kernel: pci 0001:00:03.0: supports D1 D2 May 14 01:06:32.216527 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.216595 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 14 01:06:32.216659 kernel: pci 0001:00:04.0: supports D1 D2 May 14 01:06:32.216721 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 14 01:06:32.216731 kernel: acpiphp: Slot [1-6] registered May 14 01:06:32.216800 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 14 01:06:32.216864 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 14 01:06:32.216927 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 14 01:06:32.216997 kernel: pci 0001:01:00.0: PME# supported from D3cold May 14 01:06:32.217063 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 01:06:32.217134 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 14 01:06:32.217198 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 14 01:06:32.217261 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 14 01:06:32.217324 kernel: pci 0001:01:00.1: PME# supported from D3cold May 14 01:06:32.217334 kernel: acpiphp: Slot [2-6] registered May 14 01:06:32.217342 kernel: acpiphp: Slot [3-4] registered May 14 01:06:32.217349 kernel: acpiphp: Slot [4-4] registered May 14 01:06:32.217406 kernel: pci_bus 0001:00: on NUMA node 0 May 14 01:06:32.217468 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 14 01:06:32.217530 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 14 01:06:32.217591 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.217652 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 14 01:06:32.217714 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.217775 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.217838 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.217899 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.217961 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.218028 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.218091 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 01:06:32.218155 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 14 01:06:32.218217 kernel: pci 0001:00:02.0: BAR 
14: assigned [mem 0x60200000-0x603fffff] May 14 01:06:32.218280 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 01:06:32.218342 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 14 01:06:32.218403 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 01:06:32.218464 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 14 01:06:32.218525 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 01:06:32.218585 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.218647 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.218708 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.218770 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.218832 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.218893 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.218954 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.219018 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.219080 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.219140 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.219201 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.219264 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.219327 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.219390 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.219450 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.219514 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.219578 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 14 01:06:32.219642 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 14 01:06:32.219706 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 14 01:06:32.219770 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 14 01:06:32.219832 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 14 01:06:32.219892 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 14 01:06:32.219954 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 01:06:32.220019 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 14 01:06:32.220080 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 14 01:06:32.220142 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 01:06:32.220206 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 14 01:06:32.220267 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 14 01:06:32.220329 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 01:06:32.220391 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 14 01:06:32.220452 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 14 01:06:32.220513 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 01:06:32.220572 kernel: pci_bus 0001:00: 
resource 4 [mem 0x60000000-0x6fffffff window] May 14 01:06:32.220627 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 14 01:06:32.220701 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 14 01:06:32.220758 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 14 01:06:32.220823 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 14 01:06:32.220881 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 14 01:06:32.220948 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 14 01:06:32.221021 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 14 01:06:32.221087 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 14 01:06:32.221145 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 14 01:06:32.221156 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 14 01:06:32.221224 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:06:32.221288 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 14 01:06:32.221348 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 14 01:06:32.221408 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 14 01:06:32.221469 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 14 01:06:32.221528 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 14 01:06:32.221538 kernel: PCI host bridge to bus 0004:00 May 14 01:06:32.221600 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 14 01:06:32.221658 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 14 01:06:32.221713 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 14 01:06:32.221781 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 14 01:06:32.221850 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 14 01:06:32.221912 kernel: pci 0004:00:01.0: supports D1 D2 May 14 01:06:32.221973 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 14 01:06:32.222097 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 14 01:06:32.222164 kernel: pci 0004:00:03.0: supports D1 D2 May 14 01:06:32.222225 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 14 01:06:32.222293 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 14 01:06:32.222353 kernel: pci 0004:00:05.0: supports D1 D2 May 14 01:06:32.222413 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 14 01:06:32.222482 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 14 01:06:32.222561 kernel: pci 0004:01:00.0: enabling Extended Tags May 14 01:06:32.222628 kernel: pci 0004:01:00.0: supports D1 D2 May 14 01:06:32.222690 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 01:06:32.222766 kernel: pci_bus 0004:02: extended config space not accessible May 14 01:06:32.222840 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 14 01:06:32.222907 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 14 01:06:32.222974 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 14 01:06:32.223046 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 14 
01:06:32.223114 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 14 01:06:32.223179 kernel: pci 0004:02:00.0: supports D1 D2 May 14 01:06:32.223245 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 01:06:32.223317 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 14 01:06:32.223383 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 14 01:06:32.223448 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 14 01:06:32.223503 kernel: pci_bus 0004:00: on NUMA node 0 May 14 01:06:32.223569 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 14 01:06:32.223630 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 14 01:06:32.223692 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 01:06:32.223753 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 14 01:06:32.223816 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 14 01:06:32.223876 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.223937 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 14 01:06:32.224006 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 14 01:06:32.224068 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 01:06:32.224130 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 14 01:06:32.224190 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 01:06:32.224252 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 14 01:06:32.224315 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 01:06:32.224377 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.224437 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.224501 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.224562 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.224624 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.224686 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.224749 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.224810 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.224872 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.224934 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.225001 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.225063 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.225128 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 14 01:06:32.225191 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 14 01:06:32.225256 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 14 01:06:32.225321 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 14 
01:06:32.225388 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 14 01:06:32.225453 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 14 01:06:32.225520 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 14 01:06:32.225596 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 14 01:06:32.225659 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 14 01:06:32.225721 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 14 01:06:32.225782 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 14 01:06:32.225844 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 01:06:32.225909 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 14 01:06:32.225971 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 14 01:06:32.226112 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 14 01:06:32.226175 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 01:06:32.226236 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 14 01:06:32.226297 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 14 01:06:32.226357 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 01:06:32.226413 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 14 01:06:32.226471 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 14 01:06:32.226526 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 14 01:06:32.226592 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 14 01:06:32.226650 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 14 01:06:32.226712 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] May 14 01:06:32.226775 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 14 01:06:32.226833 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 14 01:06:32.226898 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 14 01:06:32.226956 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 14 01:06:32.226966 kernel: iommu: Default domain type: Translated May 14 01:06:32.226974 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 01:06:32.226985 kernel: efivars: Registered efivars operations May 14 01:06:32.227050 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 14 01:06:32.227118 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 14 01:06:32.227184 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 14 01:06:32.227196 kernel: vgaarb: loaded May 14 01:06:32.227204 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 01:06:32.227211 kernel: VFS: Disk quotas dquot_6.6.0 May 14 01:06:32.227219 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 01:06:32.227227 kernel: pnp: PnP ACPI init May 14 01:06:32.227292 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 14 01:06:32.227350 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 14 01:06:32.227407 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 14 01:06:32.227463 kernel: system 00:00: [mem 
0x27fff0000000-0x27ffffffffff window] could not be reserved May 14 01:06:32.227518 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 14 01:06:32.227573 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 14 01:06:32.227631 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved May 14 01:06:32.227686 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 14 01:06:32.227695 kernel: pnp: PnP ACPI: found 1 devices May 14 01:06:32.227705 kernel: NET: Registered PF_INET protocol family May 14 01:06:32.227713 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 01:06:32.227721 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 14 01:06:32.227729 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 01:06:32.227737 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 01:06:32.227744 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 01:06:32.227752 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 14 01:06:32.227760 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 01:06:32.227769 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 14 01:06:32.227777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 01:06:32.227842 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 14 01:06:32.227852 kernel: kvm [1]: IPA Size Limit: 48 bits May 14 01:06:32.227860 kernel: kvm [1]: GICv3: no GICV resource entry May 14 01:06:32.227867 kernel: kvm [1]: disabling GICv2 emulation May 14 01:06:32.227875 kernel: kvm [1]: GIC system register CPU interface enabled May 14 01:06:32.227883 kernel: kvm [1]: vgic interrupt IRQ9 May 14 01:06:32.227890 kernel: kvm [1]: VHE mode initialized successfully May 14 01:06:32.227900 kernel: Initialise system trusted keyrings May 14 01:06:32.227907 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 14 01:06:32.227915 kernel: Key type asymmetric registered May 14 01:06:32.227923 kernel: Asymmetric key parser 'x509' registered May 14 01:06:32.227930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 01:06:32.227938 kernel: io scheduler mq-deadline registered May 14 01:06:32.227945 kernel: io scheduler kyber registered May 14 01:06:32.227953 kernel: io scheduler bfq registered May 14 01:06:32.227961 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 01:06:32.227968 kernel: ACPI: button: Power Button [PWRB] May 14 01:06:32.227981 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
May 14 01:06:32.227989 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 01:06:32.228058 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 14 01:06:32.228118 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.228175 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.228232 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 14 01:06:32.228289 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 14 01:06:32.228348 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 14 01:06:32.228413 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 14 01:06:32.228470 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.228527 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.228584 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 14 01:06:32.228640 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 14 01:06:32.228700 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 14 01:06:32.228763 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 14 01:06:32.228821 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.228878 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.228935 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 14 01:06:32.228999 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 14 01:06:32.229057 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 14 01:06:32.229123 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 14 01:06:32.229182 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.229239 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.229296 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 14 01:06:32.229353 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 14 01:06:32.229409 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 14 01:06:32.229483 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 14 01:06:32.229544 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.229601 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.229659 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 14 01:06:32.229715 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 14 01:06:32.229772 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 14 01:06:32.229836 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 14 01:06:32.229896 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.229952 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.230013 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 14 01:06:32.230070 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 14 
01:06:32.230127 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 14 01:06:32.230192 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 14 01:06:32.230251 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.230309 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.230366 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 14 01:06:32.230424 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 14 01:06:32.230481 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 14 01:06:32.230544 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 14 01:06:32.230605 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 14 01:06:32.230662 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 14 01:06:32.230719 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 14 01:06:32.230776 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 14 01:06:32.230835 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 14 01:06:32.230845 kernel: thunder_xcv, ver 1.0 May 14 01:06:32.230853 kernel: thunder_bgx, ver 1.0 May 14 01:06:32.230860 kernel: nicpf, ver 1.0 May 14 01:06:32.230870 kernel: nicvf, ver 1.0 May 14 01:06:32.230935 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 01:06:32.230997 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T01:06:30 UTC (1747184790) May 14 01:06:32.231008 kernel: efifb: probing for efifb May 14 01:06:32.231016 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 14 01:06:32.231023 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 14 01:06:32.231031 kernel: efifb: scrolling: redraw May 14 01:06:32.231039 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 14 01:06:32.231048 kernel: Console: switching to colour frame buffer device 100x37 May 14 01:06:32.231056 kernel: fb0: EFI VGA frame buffer device May 14 01:06:32.231064 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 14 01:06:32.231072 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 01:06:32.231079 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 01:06:32.231087 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 14 01:06:32.231095 kernel: watchdog: Hard watchdog permanently disabled May 14 01:06:32.231102 kernel: NET: Registered PF_INET6 protocol family May 14 01:06:32.231110 kernel: Segment Routing with IPv6 May 14 01:06:32.231119 kernel: In-situ OAM (IOAM) with IPv6 May 14 01:06:32.231127 kernel: NET: Registered PF_PACKET protocol family May 14 01:06:32.231134 kernel: Key type dns_resolver registered May 14 01:06:32.231142 kernel: registered taskstats version 1 May 14 01:06:32.231149 kernel: Loading compiled-in X.509 certificates May 14 01:06:32.231157 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 14 01:06:32.231165 kernel: Key type .fscrypt registered May 14 01:06:32.231172 kernel: Key type fscrypt-provisioning registered May 14 01:06:32.231179 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 14 01:06:32.231190 kernel: ima: Allocated hash algorithm: sha1 May 14 01:06:32.231197 kernel: ima: No architecture policies found May 14 01:06:32.231205 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 01:06:32.231269 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 14 01:06:32.231332 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 14 01:06:32.231396 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 14 01:06:32.231457 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 14 01:06:32.231520 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 14 01:06:32.231581 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 14 01:06:32.231646 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 14 01:06:32.231707 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 14 01:06:32.231772 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 14 01:06:32.231834 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 14 01:06:32.231896 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 14 01:06:32.231958 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 14 01:06:32.232024 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 14 01:06:32.232086 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 14 01:06:32.232150 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 14 01:06:32.232212 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 14 01:06:32.232275 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 14 01:06:32.232337 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 14 01:06:32.232400 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 14 01:06:32.232462 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 14 01:06:32.232525 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 14 01:06:32.232586 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 14 01:06:32.232652 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 May 14 01:06:32.232714 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 14 01:06:32.232779 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 14 01:06:32.232842 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 14 01:06:32.232906 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 14 01:06:32.232968 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 14 01:06:32.233035 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 14 01:06:32.233098 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 14 01:06:32.233161 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 14 01:06:32.233228 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 14 01:06:32.233292 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 14 01:06:32.233355 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 14 01:06:32.233419 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 14 01:06:32.233481 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 14 01:06:32.233545 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 14 01:06:32.233607 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 14 01:06:32.233672 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 14 01:06:32.233738 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 14 01:06:32.233803 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 14 01:06:32.233867 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 14 01:06:32.233930 
kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 14 01:06:32.233996 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 14 01:06:32.234059 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 14 01:06:32.234122 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 14 01:06:32.234186 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 14 01:06:32.234253 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 May 14 01:06:32.234315 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 14 01:06:32.234378 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 14 01:06:32.234442 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 14 01:06:32.234504 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 14 01:06:32.234569 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 14 01:06:32.234631 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 14 01:06:32.234696 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 14 01:06:32.234761 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 14 01:06:32.234826 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 14 01:06:32.234889 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 14 01:06:32.234953 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 14 01:06:32.235020 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 14 01:06:32.235085 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 14 01:06:32.235096 kernel: clk: Disabling unused clocks May 14 01:06:32.235104 kernel: Freeing unused kernel memory: 38464K May 14 01:06:32.235113 kernel: Run /init as init process May 14 01:06:32.235121 kernel: with arguments: May 14 01:06:32.235129 kernel: /init May 14 01:06:32.235136 kernel: with environment: May 14 01:06:32.235144 kernel: HOME=/ May 14 01:06:32.235151 kernel: TERM=linux May 14 01:06:32.235159 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 01:06:32.235167 systemd[1]: Successfully made /usr/ read-only. May 14 01:06:32.235178 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 01:06:32.235188 systemd[1]: Detected architecture arm64. May 14 01:06:32.235196 systemd[1]: Running in initrd. May 14 01:06:32.235204 systemd[1]: No hostname configured, using default hostname. May 14 01:06:32.235212 systemd[1]: Hostname set to . May 14 01:06:32.235220 systemd[1]: Initializing machine ID from random generator. May 14 01:06:32.235228 systemd[1]: Queued start job for default target initrd.target. May 14 01:06:32.235236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:06:32.235246 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:06:32.235255 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 01:06:32.235263 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 01:06:32.235271 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 01:06:32.235280 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 14 01:06:32.235289 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 01:06:32.235297 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 01:06:32.235307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:06:32.235315 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 01:06:32.235323 systemd[1]: Reached target paths.target - Path Units. May 14 01:06:32.235332 systemd[1]: Reached target slices.target - Slice Units. May 14 01:06:32.235340 systemd[1]: Reached target swap.target - Swaps. May 14 01:06:32.235348 systemd[1]: Reached target timers.target - Timer Units. May 14 01:06:32.235356 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 01:06:32.235364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 01:06:32.235374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 01:06:32.235382 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 01:06:32.235390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 01:06:32.235398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 01:06:32.235406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:06:32.235414 systemd[1]: Reached target sockets.target - Socket Units. May 14 01:06:32.235422 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 01:06:32.235430 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 01:06:32.235438 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 01:06:32.235448 systemd[1]: Starting systemd-fsck-usr.service... May 14 01:06:32.235456 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 01:06:32.235464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 01:06:32.235493 systemd-journald[901]: Collecting audit messages is disabled. May 14 01:06:32.235514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:06:32.235522 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 01:06:32.235530 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 01:06:32.235538 kernel: Bridge firewalling registered May 14 01:06:32.235547 systemd-journald[901]: Journal started May 14 01:06:32.235565 systemd-journald[901]: Runtime Journal (/run/log/journal/33b386018fec406caeef2d11808f3396) is 8M, max 4G, 3.9G free. May 14 01:06:32.201829 systemd-modules-load[905]: Inserted module 'overlay' May 14 01:06:32.278172 systemd[1]: Started systemd-journald.service - Journal Service. May 14 01:06:32.225752 systemd-modules-load[905]: Inserted module 'br_netfilter' May 14 01:06:32.283749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:06:32.294516 systemd[1]: Finished systemd-fsck-usr.service. May 14 01:06:32.305362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 01:06:32.316160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 01:06:32.330187 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:06:32.338553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 01:06:32.356555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 01:06:32.365910 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 01:06:32.383889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 01:06:32.400042 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:06:32.410968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 01:06:32.427747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:06:32.448246 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 01:06:32.470063 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 01:06:32.483239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 01:06:32.496080 dracut-cmdline[945]: dracut-dracut-053 May 14 01:06:32.496080 dracut-cmdline[945]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 14 01:06:32.508068 systemd-resolved[947]: Positive Trust Anchors: May 14 01:06:32.508078 systemd-resolved[947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 01:06:32.508109 systemd-resolved[947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 01:06:32.510587 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:06:32.523518 systemd-resolved[947]: Defaulting to hostname 'linux'. May 14 01:06:32.548173 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 01:06:32.562204 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 01:06:32.659984 kernel: SCSI subsystem initialized May 14 01:06:32.673989 kernel: Loading iSCSI transport class v2.0-870. May 14 01:06:32.692986 kernel: iscsi: registered transport (tcp) May 14 01:06:32.720572 kernel: iscsi: registered transport (qla4xxx) May 14 01:06:32.720596 kernel: QLogic iSCSI HBA Driver May 14 01:06:32.764045 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 01:06:32.775488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 01:06:32.835699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. May 14 01:06:32.835731 kernel: device-mapper: uevent: version 1.0.3 May 14 01:06:32.845447 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 01:06:32.910988 kernel: raid6: neonx8 gen() 15849 MB/s May 14 01:06:32.936987 kernel: raid6: neonx4 gen() 15868 MB/s May 14 01:06:32.961987 kernel: raid6: neonx2 gen() 13258 MB/s May 14 01:06:32.986987 kernel: raid6: neonx1 gen() 10467 MB/s May 14 01:06:33.011987 kernel: raid6: int64x8 gen() 6818 MB/s May 14 01:06:33.036987 kernel: raid6: int64x4 gen() 7371 MB/s May 14 01:06:33.061987 kernel: raid6: int64x2 gen() 6134 MB/s May 14 01:06:33.090242 kernel: raid6: int64x1 gen() 5077 MB/s May 14 01:06:33.090262 kernel: raid6: using algorithm neonx4 gen() 15868 MB/s May 14 01:06:33.124689 kernel: raid6: .... xor() 12522 MB/s, rmw enabled May 14 01:06:33.124710 kernel: raid6: using neon recovery algorithm May 14 01:06:33.147949 kernel: xor: measuring software checksum speed May 14 01:06:33.147970 kernel: 8regs : 21624 MB/sec May 14 01:06:33.155982 kernel: 32regs : 21710 MB/sec May 14 01:06:33.164015 kernel: arm64_neon : 28128 MB/sec May 14 01:06:33.172015 kernel: xor: using function: arm64_neon (28128 MB/sec) May 14 01:06:33.232986 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 01:06:33.244010 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 01:06:33.250425 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:06:33.286499 systemd-udevd[1144]: Using default interface naming scheme 'v255'. May 14 01:06:33.290051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:06:33.295692 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 01:06:33.330989 dracut-pre-trigger[1155]: rd.md=0: removing MD RAID activation May 14 01:06:33.356723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 01:06:33.366231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 01:06:33.480780 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:06:33.490312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 01:06:33.523077 kernel: pps_core: LinuxPPS API ver. 1 registered May 14 01:06:33.523094 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 01:06:33.525983 kernel: PTP clock support registered May 14 01:06:33.525994 kernel: ACPI: bus type USB registered May 14 01:06:33.526003 kernel: usbcore: registered new interface driver usbfs May 14 01:06:33.526012 kernel: usbcore: registered new interface driver hub May 14 01:06:33.526021 kernel: usbcore: registered new device driver usb May 14 01:06:33.578738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 01:06:33.696166 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 14 01:06:33.696180 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 31 May 14 01:06:33.696329 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
May 14 01:06:33.696339 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32 May 14 01:06:33.696430 kernel: igb 0003:03:00.0: Adding to iommu group 33 May 14 01:06:33.696517 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 14 01:06:33.696596 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 14 01:06:33.696675 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 14 01:06:33.696752 kernel: nvme 0005:03:00.0: Adding to iommu group 34 May 14 01:06:33.696836 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 14 01:06:33.578897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:06:33.707986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:06:33.719422 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:06:33.719582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:06:33.736891 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:06:33.749178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:06:33.760589 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:06:33.762300 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 01:06:33.779458 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 01:06:33.795877 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:06:33.807539 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 01:06:33.826104 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 01:06:33.849099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:06:33.862831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:06:33.873691 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 01:06:34.024098 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010 May 14 01:06:34.024324 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 14 01:06:34.034972 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 14 01:06:34.047943 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 14 01:06:34.070587 kernel: hub 1-0:1.0: USB hub found May 14 01:06:34.070740 kernel: hub 1-0:1.0: 4 ports detected May 14 01:06:34.091592 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 14 01:06:34.091687 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 14 01:06:34.110708 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 01:06:34.131693 kernel: hub 2-0:1.0: USB hub found May 14 01:06:34.140906 kernel: hub 2-0:1.0: 4 ports detected May 14 01:06:34.156277 kernel: nvme nvme0: pci function 0005:03:00.0 May 14 01:06:34.166232 kernel: nvme nvme1: pci function 0005:04:00.0 May 14 01:06:34.196992 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 14 01:06:34.214983 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 14 01:06:34.247025 kernel: igb 0003:03:00.0: added PHC on eth0 May 14 01:06:34.247201 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 14 01:06:34.258639 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0a:d4:d6 May 14 01:06:34.270633 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 14 01:06:34.280579 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 14 01:06:34.288982 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 14 01:06:34.298982 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 14 01:06:34.308343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:06:34.508647 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 01:06:34.508668 kernel: GPT:9289727 != 1875385007 May 14 01:06:34.508683 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 01:06:34.508692 kernel: GPT:9289727 != 1875385007 May 14 01:06:34.508701 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 01:06:34.508710 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 01:06:34.508719 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 14 01:06:34.508830 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1222) May 14 01:06:34.508841 kernel: igb 0003:03:00.1: added PHC on eth1 May 14 01:06:34.508927 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (1220) May 14 01:06:34.508941 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 14 01:06:34.509023 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0a:d4:d7 May 14 01:06:34.509100 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 14 01:06:34.509175 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 14 01:06:34.509249 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 14 01:06:34.509322 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 14 01:06:34.509404 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 14 01:06:34.486194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. May 14 01:06:34.525920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. May 14 01:06:34.547036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 14 01:06:34.558051 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. May 14 01:06:34.588609 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 14 01:06:34.566758 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. 
May 14 01:06:34.622675 kernel: hub 2-3:1.0: USB hub found May 14 01:06:34.622811 kernel: hub 2-3:1.0: 4 ports detected May 14 01:06:34.597825 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 01:06:34.649114 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 01:06:34.649209 disk-uuid[1304]: Primary Header is updated. May 14 01:06:34.649209 disk-uuid[1304]: Secondary Entries is updated. May 14 01:06:34.649209 disk-uuid[1304]: Secondary Header is updated. May 14 01:06:34.712991 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 14 01:06:34.784991 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 14 01:06:34.797983 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 14 01:06:34.821105 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 14 01:06:34.821250 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 14 01:06:34.881754 kernel: hub 1-3:1.0: USB hub found May 14 01:06:34.881959 kernel: hub 1-3:1.0: 4 ports detected May 14 01:06:35.175850 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 14 01:06:35.484985 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 14 01:06:35.499985 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 14 01:06:35.516990 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 14 01:06:35.650987 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 01:06:35.651272 disk-uuid[1305]: The operation has completed successfully. May 14 01:06:35.681475 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 01:06:35.681579 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 01:06:35.719887 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 01:06:35.735469 sh[1479]: Success May 14 01:06:35.758984 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 01:06:35.791227 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 01:06:35.802131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 01:06:35.824570 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 01:06:35.860926 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d May 14 01:06:35.860953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 01:06:35.878523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 01:06:35.892829 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 01:06:35.904253 kernel: BTRFS info (device dm-0): using free space tree May 14 01:06:35.923988 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 01:06:35.925427 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 01:06:35.936388 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 01:06:35.937688 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 01:06:35.953404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 14 01:06:36.067405 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 01:06:36.067423 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 01:06:36.067432 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 01:06:36.067442 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 01:06:36.067451 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 01:06:36.067460 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 01:06:36.064408 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 01:06:36.072967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 01:06:36.084949 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 01:06:36.112100 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 01:06:36.142416 systemd-networkd[1668]: lo: Link UP May 14 01:06:36.142421 systemd-networkd[1668]: lo: Gained carrier May 14 01:06:36.146330 systemd-networkd[1668]: Enumeration completed May 14 01:06:36.146704 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 01:06:36.147632 systemd-networkd[1668]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:06:36.153159 systemd[1]: Reached target network.target - Network. May 14 01:06:36.194688 ignition[1667]: Ignition 2.20.0 May 14 01:06:36.194696 ignition[1667]: Stage: fetch-offline May 14 01:06:36.199548 systemd-networkd[1668]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:06:36.194725 ignition[1667]: no configs at "/usr/lib/ignition/base.d" May 14 01:06:36.203844 unknown[1667]: fetched base config from "system" May 14 01:06:36.194733 ignition[1667]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:36.203851 unknown[1667]: fetched user config from "system" May 14 01:06:36.194874 ignition[1667]: parsed url from cmdline: "" May 14 01:06:36.206373 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 01:06:36.194877 ignition[1667]: no config URL provided May 14 01:06:36.219845 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 01:06:36.194882 ignition[1667]: reading system config file "/usr/lib/ignition/user.ign" May 14 01:06:36.220945 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 01:06:36.194931 ignition[1667]: parsing config with SHA512: 5c50efa5f57c54498768c2b23f37b8df46ad1dbe7e4425b7cc4ae4d6e8889acda6e539eb73e7870b8a1181c1934f3e7f32f552e12ea1a61d96c06a6d154d68f0 May 14 01:06:36.251941 systemd-networkd[1668]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 14 01:06:36.204266 ignition[1667]: fetch-offline: fetch-offline passed May 14 01:06:36.204270 ignition[1667]: POST message to Packet Timeline May 14 01:06:36.204274 ignition[1667]: POST Status error: resource requires networking May 14 01:06:36.204338 ignition[1667]: Ignition finished successfully May 14 01:06:36.257426 ignition[1712]: Ignition 2.20.0 May 14 01:06:36.257460 ignition[1712]: Stage: kargs May 14 01:06:36.257732 ignition[1712]: no configs at "/usr/lib/ignition/base.d" May 14 01:06:36.257741 ignition[1712]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:36.259313 ignition[1712]: kargs: kargs passed May 14 01:06:36.259318 ignition[1712]: POST message to Packet Timeline May 14 01:06:36.259548 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #1 May 14 01:06:36.262258 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51865->[::1]:53: read: connection refused May 14 01:06:36.463179 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #2 May 14 01:06:36.464056 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35538->[::1]:53: read: connection refused May 14 01:06:36.828990 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 14 01:06:36.832139 systemd-networkd[1668]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:06:36.864441 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #3 May 14 01:06:36.864928 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37031->[::1]:53: read: connection refused May 14 01:06:37.436992 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 14 01:06:37.439929 systemd-networkd[1668]: eno1: Link UP May 14 01:06:37.440069 systemd-networkd[1668]: eno2: Link UP May 14 01:06:37.440190 systemd-networkd[1668]: enP1p1s0f0np0: Link UP May 14 01:06:37.440331 systemd-networkd[1668]: enP1p1s0f0np0: Gained carrier May 14 01:06:37.451121 systemd-networkd[1668]: enP1p1s0f1np1: Link UP May 14 01:06:37.494021 systemd-networkd[1668]: enP1p1s0f0np0: DHCPv4 address 147.28.151.154/30, gateway 147.28.151.153 acquired from 147.28.144.140 May 14 01:06:37.665270 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #4 May 14 01:06:37.665759 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58229->[::1]:53: read: connection refused May 14 01:06:37.835236 systemd-networkd[1668]: enP1p1s0f1np1: Gained carrier May 14 01:06:38.467126 systemd-networkd[1668]: enP1p1s0f0np0: Gained IPv6LL May 14 01:06:38.915156 systemd-networkd[1668]: enP1p1s0f1np1: Gained IPv6LL May 14 01:06:39.266610 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #5 May 14 01:06:39.267375 ignition[1712]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33017->[::1]:53: read: connection refused May 14 01:06:42.469839 ignition[1712]: GET https://metadata.packet.net/metadata: attempt #6 May 14 01:06:43.055336 ignition[1712]: GET result: OK May 14 01:06:43.932290 ignition[1712]: Ignition finished successfully May 14 01:06:43.936110 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 14 01:06:43.939206 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 01:06:43.967493 ignition[1733]: Ignition 2.20.0 May 14 01:06:43.967505 ignition[1733]: Stage: disks May 14 01:06:43.967741 ignition[1733]: no configs at "/usr/lib/ignition/base.d" May 14 01:06:43.967750 ignition[1733]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:43.969290 ignition[1733]: disks: disks passed May 14 01:06:43.969294 ignition[1733]: POST message to Packet Timeline May 14 01:06:43.969311 ignition[1733]: GET https://metadata.packet.net/metadata: attempt #1 May 14 01:06:45.008707 ignition[1733]: GET result: OK May 14 01:06:45.664628 ignition[1733]: Ignition finished successfully May 14 01:06:45.668123 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 01:06:45.673500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 01:06:45.681295 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 01:06:45.689599 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 01:06:45.698419 systemd[1]: Reached target sysinit.target - System Initialization. May 14 01:06:45.707608 systemd[1]: Reached target basic.target - Basic System. May 14 01:06:45.717937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 01:06:45.744934 systemd-fsck[1753]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 14 01:06:45.748254 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 01:06:45.756452 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 01:06:45.842982 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none. May 14 01:06:45.843086 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 01:06:45.853502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 01:06:45.864653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 01:06:45.883507 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 01:06:45.891982 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1763) May 14 01:06:45.892003 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 01:06:45.892014 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 01:06:45.892023 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 01:06:45.893987 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 01:06:45.893999 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 01:06:45.977456 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 01:06:45.983700 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... May 14 01:06:45.999328 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 01:06:45.999390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 01:06:46.012919 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 01:06:46.026459 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
May 14 01:06:46.053668 coreos-metadata[1784]: May 14 01:06:46.041 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 01:06:46.067696 coreos-metadata[1783]: May 14 01:06:46.041 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 01:06:46.039914 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 01:06:46.087324 initrd-setup-root[1809]: cut: /sysroot/etc/passwd: No such file or directory May 14 01:06:46.093610 initrd-setup-root[1817]: cut: /sysroot/etc/group: No such file or directory May 14 01:06:46.100149 initrd-setup-root[1824]: cut: /sysroot/etc/shadow: No such file or directory May 14 01:06:46.106427 initrd-setup-root[1831]: cut: /sysroot/etc/gshadow: No such file or directory May 14 01:06:46.175593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 01:06:46.187346 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 01:06:46.206513 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 01:06:46.214984 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 01:06:46.238859 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 01:06:46.255104 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 01:06:46.267379 ignition[1907]: INFO : Ignition 2.20.0 May 14 01:06:46.267379 ignition[1907]: INFO : Stage: mount May 14 01:06:46.278507 ignition[1907]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:06:46.278507 ignition[1907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:46.278507 ignition[1907]: INFO : mount: mount passed May 14 01:06:46.278507 ignition[1907]: INFO : POST message to Packet Timeline May 14 01:06:46.278507 ignition[1907]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 01:06:46.505582 coreos-metadata[1783]: May 14 01:06:46.505 INFO Fetch successful May 14 01:06:46.552189 coreos-metadata[1783]: May 14 01:06:46.552 INFO wrote hostname ci-4284.0.0-n-0b8132852a to /sysroot/etc/hostname May 14 01:06:46.555386 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 01:06:46.794950 ignition[1907]: INFO : GET result: OK May 14 01:06:47.063939 coreos-metadata[1784]: May 14 01:06:47.063 INFO Fetch successful May 14 01:06:47.090054 ignition[1907]: INFO : Ignition finished successfully May 14 01:06:47.092289 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 01:06:47.108372 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 14 01:06:47.108542 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. May 14 01:06:47.119386 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 01:06:47.142126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 14 01:06:47.176990 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1928) May 14 01:06:47.201072 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db May 14 01:06:47.201094 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 01:06:47.214236 kernel: BTRFS info (device nvme0n1p6): using free space tree May 14 01:06:47.237242 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 14 01:06:47.237264 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard May 14 01:06:47.245395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 01:06:47.278586 ignition[1946]: INFO : Ignition 2.20.0 May 14 01:06:47.278586 ignition[1946]: INFO : Stage: files May 14 01:06:47.288269 ignition[1946]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:06:47.288269 ignition[1946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:47.288269 ignition[1946]: DEBUG : files: compiled without relabeling support, skipping May 14 01:06:47.288269 ignition[1946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 01:06:47.288269 ignition[1946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 01:06:47.288269 ignition[1946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 01:06:47.288269 ignition[1946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 01:06:47.288269 ignition[1946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 01:06:47.288269 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 01:06:47.288269 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 01:06:47.284093 unknown[1946]: wrote ssh authorized keys file for user: core May 14 01:06:47.382484 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 01:06:47.697088 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(7): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 01:06:47.707557 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 01:06:47.889552 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 01:06:48.177000 ignition[1946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 01:06:48.177000 ignition[1946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 14 01:06:48.201759 ignition[1946]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 01:06:48.201759 ignition[1946]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 01:06:48.201759 ignition[1946]: INFO : files: files passed May 14 01:06:48.201759 ignition[1946]: INFO : POST message to Packet Timeline May 14 01:06:48.201759 ignition[1946]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 01:06:48.935405 ignition[1946]: INFO : GET result: OK May 14 01:06:49.430158 ignition[1946]: INFO : Ignition finished successfully May 14 01:06:49.432553 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 01:06:49.443399 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 01:06:49.460462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 01:06:49.479201 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 01:06:49.479386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 14 01:06:49.497171 initrd-setup-root-after-ignition[1988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 01:06:49.497171 initrd-setup-root-after-ignition[1988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 01:06:49.491692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 01:06:49.548432 initrd-setup-root-after-ignition[1992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 01:06:49.504419 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 01:06:49.521142 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 01:06:49.584733 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 01:06:49.584918 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 01:06:49.596431 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 01:06:49.612543 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 01:06:49.623704 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 01:06:49.624641 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 01:06:49.657514 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 01:06:49.670119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 01:06:49.693474 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 01:06:49.705200 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:06:49.717035 systemd[1]: Stopped target timers.target - Timer Units. May 14 01:06:49.722842 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 01:06:49.722943 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 01:06:49.734522 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 01:06:49.745805 systemd[1]: Stopped target basic.target - Basic System. May 14 01:06:49.757272 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 01:06:49.768645 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 01:06:49.779940 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 01:06:49.791416 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 01:06:49.802654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 01:06:49.814077 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 01:06:49.825217 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 01:06:49.841939 systemd[1]: Stopped target swap.target - Swaps. May 14 01:06:49.853156 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 01:06:49.853251 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 01:06:49.864535 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 01:06:49.875565 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:06:49.886759 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 14 01:06:49.890005 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:06:49.897859 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 01:06:49.897952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 01:06:49.909212 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 01:06:49.909300 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 01:06:49.920347 systemd[1]: Stopped target paths.target - Path Units. May 14 01:06:49.931520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 01:06:49.935004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:06:49.948678 systemd[1]: Stopped target slices.target - Slice Units. May 14 01:06:49.960181 systemd[1]: Stopped target sockets.target - Socket Units. May 14 01:06:49.971685 systemd[1]: iscsid.socket: Deactivated successfully. May 14 01:06:49.971750 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 01:06:50.071328 ignition[2014]: INFO : Ignition 2.20.0 May 14 01:06:50.071328 ignition[2014]: INFO : Stage: umount May 14 01:06:50.071328 ignition[2014]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:06:50.071328 ignition[2014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 14 01:06:50.071328 ignition[2014]: INFO : umount: umount passed May 14 01:06:50.071328 ignition[2014]: INFO : POST message to Packet Timeline May 14 01:06:50.071328 ignition[2014]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 14 01:06:49.983309 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 01:06:49.983367 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 01:06:49.995039 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 01:06:49.995130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 01:06:50.006615 systemd[1]: ignition-files.service: Deactivated successfully. May 14 01:06:50.006698 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 01:06:50.018260 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 01:06:50.018340 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 01:06:50.036460 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 01:06:50.053513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 01:06:50.065329 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 01:06:50.065434 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:06:50.077445 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 01:06:50.077538 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 01:06:50.091298 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 01:06:50.093234 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 01:06:50.093309 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 01:06:50.124441 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 01:06:50.126003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 14 01:06:50.528937 ignition[2014]: INFO : GET result: OK May 14 01:06:50.826943 ignition[2014]: INFO : Ignition finished successfully May 14 01:06:50.829998 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 01:06:50.830244 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 01:06:50.837356 systemd[1]: Stopped target network.target - Network. May 14 01:06:50.846523 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 01:06:50.846598 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 01:06:50.856180 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 01:06:50.856212 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 01:06:50.865748 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 01:06:50.865796 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 01:06:50.875607 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 01:06:50.875638 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 01:06:50.885395 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 01:06:50.885436 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 01:06:50.895447 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 01:06:50.905080 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 01:06:50.915067 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 01:06:50.915196 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 01:06:50.929058 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 01:06:50.929731 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 01:06:50.929807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:06:50.942530 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 01:06:50.942818 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 01:06:50.942927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 01:06:50.950831 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 01:06:50.951815 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 01:06:50.951960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 01:06:50.961770 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 01:06:50.970005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 01:06:50.970056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 01:06:50.980361 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 01:06:50.980401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 01:06:50.990739 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 01:06:50.990786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 01:06:51.001021 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:06:51.018051 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 01:06:51.018397 systemd[1]: systemd-udevd.service: Deactivated successfully. 
May 14 01:06:51.018538 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:06:51.030754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 01:06:51.030994 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 01:06:51.044955 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 01:06:51.045006 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:06:51.061567 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 01:06:51.061626 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 01:06:51.078313 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 01:06:51.078368 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 01:06:51.100603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 01:06:51.100638 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:06:51.113049 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 01:06:51.123745 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 01:06:51.123791 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:06:51.135545 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 01:06:51.135596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 01:06:51.152614 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 01:06:51.152648 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:06:51.164271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:06:51.164313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:06:51.177834 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 01:06:51.177891 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:06:51.178227 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 01:06:51.178310 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 01:06:51.714397 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 01:06:51.714581 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 01:06:51.725844 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 01:06:51.736943 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 01:06:51.757289 systemd[1]: Switching root. May 14 01:06:51.813374 systemd-journald[901]: Journal stopped May 14 01:06:53.897533 systemd-journald[901]: Received SIGTERM from PID 1 (systemd). 
May 14 01:06:53.897561 kernel: SELinux: policy capability network_peer_controls=1 May 14 01:06:53.897571 kernel: SELinux: policy capability open_perms=1 May 14 01:06:53.897579 kernel: SELinux: policy capability extended_socket_class=1 May 14 01:06:53.897586 kernel: SELinux: policy capability always_check_network=0 May 14 01:06:53.897594 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 01:06:53.897602 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 01:06:53.897611 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 01:06:53.897619 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 01:06:53.897627 kernel: audit: type=1403 audit(1747184811.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 01:06:53.897636 systemd[1]: Successfully loaded SELinux policy in 116.185ms. May 14 01:06:53.897646 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.857ms. May 14 01:06:53.897655 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 01:06:53.897664 systemd[1]: Detected architecture arm64. May 14 01:06:53.897677 systemd[1]: Detected first boot. May 14 01:06:53.897686 systemd[1]: Hostname set to <ci-4284.0.0-n-0b8132852a>. May 14 01:06:53.897695 systemd[1]: Initializing machine ID from random generator. May 14 01:06:53.897704 zram_generator::config[2084]: No configuration found. May 14 01:06:53.897715 systemd[1]: Populated /etc with preset unit settings. May 14 01:06:53.897724 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 01:06:53.897733 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 01:06:53.897741 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 01:06:53.897750 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 01:06:53.897759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 01:06:53.897768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 01:06:53.897778 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 01:06:53.897788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 01:06:53.897797 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 01:06:53.897806 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 01:06:53.897815 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 01:06:53.897823 systemd[1]: Created slice user.slice - User and Session Slice. May 14 01:06:53.897832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:06:53.897841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:06:53.897852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 01:06:53.897861 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 01:06:53.897870 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 01:06:53.897879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 01:06:53.897888 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 14 01:06:53.897897 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:06:53.897906 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 01:06:53.897917 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 01:06:53.897926 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 01:06:53.897936 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 01:06:53.897946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:06:53.897955 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 01:06:53.897964 systemd[1]: Reached target slices.target - Slice Units. May 14 01:06:53.897973 systemd[1]: Reached target swap.target - Swaps. May 14 01:06:53.897985 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 01:06:53.897995 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 01:06:53.898006 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 01:06:53.898015 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 01:06:53.898024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 01:06:53.898034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:06:53.898043 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 01:06:53.898053 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 01:06:53.898063 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 01:06:53.898073 systemd[1]: Mounting media.mount - External Media Directory... May 14 01:06:53.898082 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 01:06:53.898092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 01:06:53.898101 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 01:06:53.898110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 01:06:53.898120 systemd[1]: Reached target machines.target - Containers. May 14 01:06:53.898131 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 01:06:53.898140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:06:53.898150 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 01:06:53.898159 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 01:06:53.898168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:06:53.898177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 01:06:53.898186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 14 01:06:53.898195 kernel: ACPI: bus type drm_connector registered May 14 01:06:53.898204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 01:06:53.898215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 01:06:53.898224 kernel: fuse: init (API version 7.39) May 14 01:06:53.898232 kernel: loop: module loaded May 14 01:06:53.898240 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 01:06:53.898250 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 01:06:53.898259 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 01:06:53.898268 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 01:06:53.898277 systemd[1]: Stopped systemd-fsck-usr.service. May 14 01:06:53.898288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:06:53.898297 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 01:06:53.898307 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 01:06:53.898316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 01:06:53.898342 systemd-journald[2195]: Collecting audit messages is disabled. May 14 01:06:53.898365 systemd-journald[2195]: Journal started May 14 01:06:53.898384 systemd-journald[2195]: Runtime Journal (/run/log/journal/9758769a3eee465cbed9b44a01347d90) is 8M, max 4G, 3.9G free. May 14 01:06:52.536857 systemd[1]: Queued start job for default target multi-user.target. May 14 01:06:52.552314 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 14 01:06:52.552633 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 01:06:52.552927 systemd[1]: systemd-journald.service: Consumed 3.425s CPU time. May 14 01:06:53.920993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 01:06:53.948993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 01:06:53.969991 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 01:06:53.992913 systemd[1]: verity-setup.service: Deactivated successfully. May 14 01:06:53.992934 systemd[1]: Stopped verity-setup.service. May 14 01:06:54.018994 systemd[1]: Started systemd-journald.service - Journal Service. May 14 01:06:54.024145 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 01:06:54.029693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 01:06:54.035187 systemd[1]: Mounted media.mount - External Media Directory. May 14 01:06:54.040648 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 01:06:54.046178 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 01:06:54.051603 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 01:06:54.057148 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 01:06:54.064035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:06:54.069623 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
May 14 01:06:54.069779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 01:06:54.076460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:06:54.076625 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 01:06:54.081953 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 01:06:54.082120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 01:06:54.087495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 01:06:54.089011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 01:06:54.094320 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 01:06:54.094473 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 01:06:54.099808 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 01:06:54.101013 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 01:06:54.106248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 01:06:54.112020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 01:06:54.117187 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 01:06:54.122228 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 01:06:54.127314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:06:54.142429 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 01:06:54.148689 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 01:06:54.167622 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 01:06:54.172714 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 01:06:54.172748 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 01:06:54.178348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 01:06:54.183924 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 01:06:54.189642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 01:06:54.194399 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:06:54.195692 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 01:06:54.201299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 01:06:54.206029 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 01:06:54.207113 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 01:06:54.211727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 01:06:54.212229 systemd-journald[2195]: Time spent on flushing to /var/log/journal/9758769a3eee465cbed9b44a01347d90 is 24.097ms for 2359 entries. May 14 01:06:54.212229 systemd-journald[2195]: System Journal (/var/log/journal/9758769a3eee465cbed9b44a01347d90) is 8M, max 195.6M, 187.6M free. 
May 14 01:06:54.253562 systemd-journald[2195]: Received client request to flush runtime journal. May 14 01:06:54.253606 kernel: loop0: detected capacity change from 0 to 126448 May 14 01:06:54.212829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 01:06:54.230149 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 01:06:54.235779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 01:06:54.241479 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 01:06:54.258514 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 01:06:54.261103 systemd-tmpfiles[2240]: ACLs are not supported, ignoring. May 14 01:06:54.261118 systemd-tmpfiles[2240]: ACLs are not supported, ignoring. May 14 01:06:54.267983 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 01:06:54.272044 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 01:06:54.276662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 01:06:54.282045 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 01:06:54.286803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 01:06:54.291529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 01:06:54.296497 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 01:06:54.306936 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 01:06:54.313048 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 01:06:54.314995 kernel: loop1: detected capacity change from 0 to 8 May 14 01:06:54.340723 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 01:06:54.346174 udevadm[2242]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 01:06:54.348506 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 01:06:54.349089 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 01:06:54.365395 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 01:06:54.370983 kernel: loop2: detected capacity change from 0 to 189592 May 14 01:06:54.377138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 01:06:54.410252 systemd-tmpfiles[2275]: ACLs are not supported, ignoring. May 14 01:06:54.410264 systemd-tmpfiles[2275]: ACLs are not supported, ignoring. May 14 01:06:54.413988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:06:54.434990 kernel: loop3: detected capacity change from 0 to 103832 May 14 01:06:54.448951 ldconfig[2228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 01:06:54.450344 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 14 01:06:54.484994 kernel: loop4: detected capacity change from 0 to 126448 May 14 01:06:54.500987 kernel: loop5: detected capacity change from 0 to 8 May 14 01:06:54.512990 kernel: loop6: detected capacity change from 0 to 189592 May 14 01:06:54.529989 kernel: loop7: detected capacity change from 0 to 103832 May 14 01:06:54.534628 (sd-merge)[2284]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. May 14 01:06:54.535081 (sd-merge)[2284]: Merged extensions into '/usr'. May 14 01:06:54.538058 systemd[1]: Reload requested from client PID 2239 ('systemd-sysext') (unit systemd-sysext.service)... May 14 01:06:54.538070 systemd[1]: Reloading... May 14 01:06:54.583989 zram_generator::config[2314]: No configuration found. May 14 01:06:54.676168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:06:54.737379 systemd[1]: Reloading finished in 198 ms. May 14 01:06:54.754503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 01:06:54.759445 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 01:06:54.779231 systemd[1]: Starting ensure-sysext.service... May 14 01:06:54.785078 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 01:06:54.791874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:06:54.802960 systemd[1]: Reload requested from client PID 2364 ('systemctl') (unit ensure-sysext.service)... May 14 01:06:54.802971 systemd[1]: Reloading... May 14 01:06:54.804346 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 01:06:54.804540 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 01:06:54.805147 systemd-tmpfiles[2365]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 01:06:54.805345 systemd-tmpfiles[2365]: ACLs are not supported, ignoring. May 14 01:06:54.805392 systemd-tmpfiles[2365]: ACLs are not supported, ignoring. May 14 01:06:54.808246 systemd-tmpfiles[2365]: Detected autofs mount point /boot during canonicalization of boot. May 14 01:06:54.808253 systemd-tmpfiles[2365]: Skipping /boot May 14 01:06:54.816524 systemd-tmpfiles[2365]: Detected autofs mount point /boot during canonicalization of boot. May 14 01:06:54.816531 systemd-tmpfiles[2365]: Skipping /boot May 14 01:06:54.819453 systemd-udevd[2366]: Using default interface naming scheme 'v255'. May 14 01:06:54.848984 zram_generator::config[2400]: No configuration found. May 14 01:06:54.880001 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2410) May 14 01:06:54.900991 kernel: IPMI message handler: version 39.2 May 14 01:06:54.911987 kernel: ipmi device interface May 14 01:06:54.929852 kernel: ipmi_si: IPMI System Interface driver May 14 01:06:54.929893 kernel: ipmi_si: Unable to find any System Interface(s) May 14 01:06:54.945982 kernel: ipmi_ssif: IPMI SSIF Interface driver May 14 01:06:54.960592 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 01:06:55.040965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. May 14 01:06:55.045872 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 14 01:06:55.046217 systemd[1]: Reloading finished in 242 ms. May 14 01:06:55.065428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:06:55.087399 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:06:55.106361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 01:06:55.116840 systemd[1]: Finished ensure-sysext.service. May 14 01:06:55.139643 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 01:06:55.155790 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 01:06:55.160859 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:06:55.161955 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 01:06:55.167836 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:06:55.173728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 01:06:55.179417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 01:06:55.181680 lvm[2598]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 01:06:55.185064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 01:06:55.189981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:06:55.190864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 01:06:55.195655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:06:55.196739 augenrules[2624]: No rules May 14 01:06:55.196844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 01:06:55.203175 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 01:06:55.209621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 01:06:55.215735 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 01:06:55.221338 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 01:06:55.226828 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:06:55.232342 systemd[1]: audit-rules.service: Deactivated successfully. May 14 01:06:55.232519 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 01:06:55.237451 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 01:06:55.244145 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 01:06:55.249191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:06:55.249338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 14 01:06:55.254259 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 01:06:55.254944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 01:06:55.259874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 01:06:55.260020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 01:06:55.264934 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 01:06:55.265704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 01:06:55.270567 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 01:06:55.275356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 01:06:55.280781 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:06:55.294092 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 01:06:55.299997 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 01:06:55.304651 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 01:06:55.304716 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 01:06:55.322833 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 01:06:55.326165 lvm[2656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 01:06:55.329420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 01:06:55.334044 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 01:06:55.334530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 01:06:55.339411 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 01:06:55.365005 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 01:06:55.380335 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 14 01:06:55.423071 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 01:06:55.428090 systemd[1]: Reached target time-set.target - System Time Set. May 14 01:06:55.433286 systemd-resolved[2632]: Positive Trust Anchors: May 14 01:06:55.433299 systemd-resolved[2632]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 01:06:55.433331 systemd-resolved[2632]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 01:06:55.436592 systemd-resolved[2632]: Using system hostname 'ci-4284.0.0-n-0b8132852a'. 
May 14 01:06:55.437843 systemd-networkd[2631]: lo: Link UP May 14 01:06:55.437849 systemd-networkd[2631]: lo: Gained carrier May 14 01:06:55.437917 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 01:06:55.441557 systemd-networkd[2631]: bond0: netdev ready May 14 01:06:55.442361 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 01:06:55.446706 systemd[1]: Reached target sysinit.target - System Initialization. May 14 01:06:55.450449 systemd-networkd[2631]: Enumeration completed May 14 01:06:55.451038 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 01:06:55.455338 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 01:06:55.458172 systemd-networkd[2631]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:95:bd:80.network. May 14 01:06:55.459852 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 01:06:55.464262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 01:06:55.468664 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 01:06:55.473086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 01:06:55.473107 systemd[1]: Reached target paths.target - Path Units. May 14 01:06:55.477492 systemd[1]: Reached target timers.target - Timer Units. May 14 01:06:55.482543 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 01:06:55.488361 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 01:06:55.494658 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 01:06:55.501639 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 01:06:55.506610 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 01:06:55.511662 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 01:06:55.516406 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 01:06:55.521012 systemd[1]: Reached target network.target - Network. May 14 01:06:55.525441 systemd[1]: Reached target sockets.target - Socket Units. May 14 01:06:55.529846 systemd[1]: Reached target basic.target - Basic System. May 14 01:06:55.534168 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 01:06:55.534191 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 01:06:55.535229 systemd[1]: Starting containerd.service - containerd container runtime... May 14 01:06:55.549658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 01:06:55.555265 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 01:06:55.560887 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 01:06:55.566510 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 01:06:55.571049 jq[2695]: false May 14 01:06:55.571260 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 14 01:06:55.571514 coreos-metadata[2690]: May 14 01:06:55.571 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 01:06:55.572335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 01:06:55.574442 coreos-metadata[2690]: May 14 01:06:55.574 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 01:06:55.576465 dbus-daemon[2691]: [system] SELinux support is enabled May 14 01:06:55.577897 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 01:06:55.583564 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 01:06:55.586651 extend-filesystems[2696]: Found loop4 May 14 01:06:55.592950 extend-filesystems[2696]: Found loop5 May 14 01:06:55.592950 extend-filesystems[2696]: Found loop6 May 14 01:06:55.592950 extend-filesystems[2696]: Found loop7 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p1 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p2 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p3 May 14 01:06:55.592950 extend-filesystems[2696]: Found usr May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p4 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p6 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p7 May 14 01:06:55.592950 extend-filesystems[2696]: Found nvme0n1p9 May 14 01:06:55.592950 extend-filesystems[2696]: Checking size of /dev/nvme0n1p9 May 14 01:06:55.733387 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks May 14 01:06:55.733411 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2401) May 14 01:06:55.589354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 01:06:55.733486 extend-filesystems[2696]: Resized partition /dev/nvme0n1p9 May 14 01:06:55.730183 dbus-daemon[2691]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 01:06:55.601725 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 01:06:55.738439 extend-filesystems[2718]: resize2fs 1.47.2 (1-Jan-2025) May 14 01:06:55.607861 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 01:06:55.648240 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 01:06:55.657594 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 01:06:55.658174 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 01:06:55.743599 update_engine[2727]: I20250514 01:06:55.708757 2727 main.cc:92] Flatcar Update Engine starting May 14 01:06:55.743599 update_engine[2727]: I20250514 01:06:55.711318 2727 update_check_scheduler.cc:74] Next update check in 11m23s May 14 01:06:55.658761 systemd[1]: Starting update-engine.service - Update Engine... May 14 01:06:55.743845 jq[2728]: true May 14 01:06:55.667043 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 01:06:55.675985 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 14 01:06:55.744118 tar[2735]: linux-arm64/helm May 14 01:06:55.684816 systemd-logind[2719]: Watching system buttons on /dev/input/event0 (Power Button) May 14 01:06:55.689041 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 01:06:55.689707 systemd-logind[2719]: New seat seat0. May 14 01:06:55.744627 jq[2737]: true May 14 01:06:55.691015 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 01:06:55.691595 systemd[1]: motdgen.service: Deactivated successfully. May 14 01:06:55.691767 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 01:06:55.700804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 01:06:55.701004 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 01:06:55.709837 systemd[1]: Started systemd-logind.service - User Login Management. May 14 01:06:55.729472 (ntainerd)[2738]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 01:06:55.748649 systemd[1]: Started update-engine.service - Update Engine. May 14 01:06:55.754877 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 01:06:55.755047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 01:06:55.759929 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 01:06:55.760036 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 01:06:55.766229 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 01:06:55.766911 bash[2763]: Updated "/home/core/.ssh/authorized_keys" May 14 01:06:55.775803 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 01:06:55.783239 systemd[1]: Starting sshkeys.service... May 14 01:06:55.796209 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 01:06:55.799465 locksmithd[2764]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 01:06:55.802555 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
May 14 01:06:55.822245 coreos-metadata[2779]: May 14 01:06:55.822 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 14 01:06:55.823339 coreos-metadata[2779]: May 14 01:06:55.823 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 01:06:55.887516 containerd[2738]: time="2025-05-14T01:06:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 01:06:55.888086 containerd[2738]: time="2025-05-14T01:06:55.888057400Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 01:06:55.896259 containerd[2738]: time="2025-05-14T01:06:55.896231280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.52µs" May 14 01:06:55.896295 containerd[2738]: time="2025-05-14T01:06:55.896258720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 01:06:55.896295 containerd[2738]: time="2025-05-14T01:06:55.896277080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 01:06:55.896453 containerd[2738]: time="2025-05-14T01:06:55.896440320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 01:06:55.896472 containerd[2738]: time="2025-05-14T01:06:55.896459040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 01:06:55.896493 containerd[2738]: time="2025-05-14T01:06:55.896484200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 01:06:55.896547 containerd[2738]: time="2025-05-14T01:06:55.896534560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 01:06:55.896565 containerd[2738]: time="2025-05-14T01:06:55.896548440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 01:06:55.896826 containerd[2738]: time="2025-05-14T01:06:55.896810040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 01:06:55.896845 containerd[2738]: time="2025-05-14T01:06:55.896826920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 01:06:55.896845 containerd[2738]: time="2025-05-14T01:06:55.896837880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 01:06:55.896877 containerd[2738]: time="2025-05-14T01:06:55.896845760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 01:06:55.896928 containerd[2738]: time="2025-05-14T01:06:55.896917920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 01:06:55.897241 containerd[2738]: time="2025-05-14T01:06:55.897226000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 14 01:06:55.897271 containerd[2738]: time="2025-05-14T01:06:55.897259760Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 01:06:55.897289 containerd[2738]: time="2025-05-14T01:06:55.897272520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 01:06:55.897306 containerd[2738]: time="2025-05-14T01:06:55.897295080Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 01:06:55.897527 containerd[2738]: time="2025-05-14T01:06:55.897515680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 01:06:55.897593 containerd[2738]: time="2025-05-14T01:06:55.897582640Z" level=info msg="metadata content store policy set" policy=shared May 14 01:06:55.904458 containerd[2738]: time="2025-05-14T01:06:55.904433560Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 01:06:55.904501 containerd[2738]: time="2025-05-14T01:06:55.904469000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 01:06:55.904501 containerd[2738]: time="2025-05-14T01:06:55.904482440Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 01:06:55.904501 containerd[2738]: time="2025-05-14T01:06:55.904494160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904507120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904519320Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904531000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904544080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904554040Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904564160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904573920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 01:06:55.904589 containerd[2738]: time="2025-05-14T01:06:55.904586240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 01:06:55.904774 containerd[2738]: time="2025-05-14T01:06:55.904698480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 01:06:55.904774 containerd[2738]: time="2025-05-14T01:06:55.904720200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 01:06:55.904774 
containerd[2738]: time="2025-05-14T01:06:55.904732040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 01:06:55.904774 containerd[2738]: time="2025-05-14T01:06:55.904742640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 01:06:55.904774 containerd[2738]: time="2025-05-14T01:06:55.904759680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 01:06:55.904774 containerd[2738]: time="2025-05-14T01:06:55.904771480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 01:06:55.904867 containerd[2738]: time="2025-05-14T01:06:55.904783240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 01:06:55.904867 containerd[2738]: time="2025-05-14T01:06:55.904794760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 01:06:55.904867 containerd[2738]: time="2025-05-14T01:06:55.904806080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 01:06:55.904867 containerd[2738]: time="2025-05-14T01:06:55.904817680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 01:06:55.904867 containerd[2738]: time="2025-05-14T01:06:55.904828120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 01:06:55.905103 containerd[2738]: time="2025-05-14T01:06:55.905088600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 01:06:55.905124 containerd[2738]: time="2025-05-14T01:06:55.905104800Z" level=info msg="Start snapshots syncer" May 14 01:06:55.905141 containerd[2738]: time="2025-05-14T01:06:55.905127240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 01:06:55.905366 containerd[2738]: time="2025-05-14T01:06:55.905336160Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 01:06:55.905453 containerd[2738]: time="2025-05-14T01:06:55.905379160Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 01:06:55.905474 containerd[2738]: time="2025-05-14T01:06:55.905463680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 01:06:55.905593 containerd[2738]: time="2025-05-14T01:06:55.905577800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 01:06:55.905614 containerd[2738]: time="2025-05-14T01:06:55.905601480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 01:06:55.905631 containerd[2738]: time="2025-05-14T01:06:55.905615240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 01:06:55.905631 containerd[2738]: time="2025-05-14T01:06:55.905626200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 01:06:55.905664 containerd[2738]: time="2025-05-14T01:06:55.905639160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 01:06:55.905664 containerd[2738]: time="2025-05-14T01:06:55.905651480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 01:06:55.905700 containerd[2738]: time="2025-05-14T01:06:55.905665360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 01:06:55.905700 containerd[2738]: time="2025-05-14T01:06:55.905690080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 01:06:55.905732 containerd[2738]: 
time="2025-05-14T01:06:55.905701760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 01:06:55.905732 containerd[2738]: time="2025-05-14T01:06:55.905712400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 01:06:55.905764 containerd[2738]: time="2025-05-14T01:06:55.905745400Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 01:06:55.905764 containerd[2738]: time="2025-05-14T01:06:55.905758840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 01:06:55.905803 containerd[2738]: time="2025-05-14T01:06:55.905768000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 01:06:55.905803 containerd[2738]: time="2025-05-14T01:06:55.905777240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 01:06:55.905803 containerd[2738]: time="2025-05-14T01:06:55.905785040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 01:06:55.905803 containerd[2738]: time="2025-05-14T01:06:55.905795240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 01:06:55.905865 containerd[2738]: time="2025-05-14T01:06:55.905805840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 01:06:55.905893 containerd[2738]: time="2025-05-14T01:06:55.905882440Z" level=info msg="runtime interface created" May 14 01:06:55.905893 containerd[2738]: time="2025-05-14T01:06:55.905888960Z" level=info msg="created NRI interface" May 14 01:06:55.905931 containerd[2738]: time="2025-05-14T01:06:55.905897360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 01:06:55.905931 containerd[2738]: time="2025-05-14T01:06:55.905908120Z" level=info msg="Connect containerd service" May 14 01:06:55.905963 containerd[2738]: time="2025-05-14T01:06:55.905934040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 01:06:55.907306 containerd[2738]: time="2025-05-14T01:06:55.907275640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 01:06:55.987556 containerd[2738]: time="2025-05-14T01:06:55.987503000Z" level=info msg="Start subscribing containerd event" May 14 01:06:55.987602 containerd[2738]: time="2025-05-14T01:06:55.987590120Z" level=info msg="Start recovering state" May 14 01:06:55.987698 containerd[2738]: time="2025-05-14T01:06:55.987685600Z" level=info msg="Start event monitor" May 14 01:06:55.987728 containerd[2738]: time="2025-05-14T01:06:55.987718120Z" level=info msg="Start cni network conf syncer for default" May 14 01:06:55.987748 containerd[2738]: time="2025-05-14T01:06:55.987731680Z" level=info msg="Start streaming server" May 14 01:06:55.987748 containerd[2738]: time="2025-05-14T01:06:55.987744080Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 01:06:55.987780 containerd[2738]: 
time="2025-05-14T01:06:55.987752520Z" level=info msg="runtime interface starting up..." May 14 01:06:55.987780 containerd[2738]: time="2025-05-14T01:06:55.987758160Z" level=info msg="starting plugins..." May 14 01:06:55.987780 containerd[2738]: time="2025-05-14T01:06:55.987771680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 01:06:55.987867 containerd[2738]: time="2025-05-14T01:06:55.987818360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 01:06:55.987886 containerd[2738]: time="2025-05-14T01:06:55.987868200Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 01:06:55.987933 containerd[2738]: time="2025-05-14T01:06:55.987921840Z" level=info msg="containerd successfully booted in 0.100722s" May 14 01:06:55.987971 systemd[1]: Started containerd.service - containerd container runtime. May 14 01:06:56.014188 sshd_keygen[2723]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 01:06:56.028031 tar[2735]: linux-arm64/LICENSE May 14 01:06:56.028111 tar[2735]: linux-arm64/README.md May 14 01:06:56.033270 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 01:06:56.047722 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 01:06:56.053377 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 01:06:56.062560 systemd[1]: issuegen.service: Deactivated successfully. May 14 01:06:56.062764 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 01:06:56.069774 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 01:06:56.089951 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 01:06:56.096424 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 01:06:56.102579 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 01:06:56.107752 systemd[1]: Reached target getty.target - Login Prompts. May 14 01:06:56.183991 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 May 14 01:06:56.199179 extend-filesystems[2718]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 14 01:06:56.199179 extend-filesystems[2718]: old_desc_blocks = 1, new_desc_blocks = 112 May 14 01:06:56.199179 extend-filesystems[2718]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. May 14 01:06:56.226787 extend-filesystems[2696]: Resized filesystem in /dev/nvme0n1p9 May 14 01:06:56.226787 extend-filesystems[2696]: Found nvme1n1 May 14 01:06:56.201803 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 01:06:56.202140 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 01:06:56.213783 systemd[1]: extend-filesystems.service: Consumed 210ms CPU time, 68.9M memory peak. 
May 14 01:06:56.574566 coreos-metadata[2690]: May 14 01:06:56.574 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 14 01:06:56.575188 coreos-metadata[2690]: May 14 01:06:56.575 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 01:06:56.835149 coreos-metadata[2779]: May 14 01:06:56.831 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 May 14 01:06:56.835552 coreos-metadata[2779]: May 14 01:06:56.835 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) May 14 01:06:56.909991 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 14 01:06:56.926987 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link May 14 01:06:56.931697 systemd-networkd[2631]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:95:bd:81.network. May 14 01:06:57.530990 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 14 01:06:57.547914 systemd-networkd[2631]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 14 01:06:57.548013 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link May 14 01:06:57.549461 systemd-networkd[2631]: enP1p1s0f0np0: Link UP May 14 01:06:57.549699 systemd-networkd[2631]: enP1p1s0f0np0: Gained carrier May 14 01:06:57.551717 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 01:06:57.567986 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond May 14 01:06:57.575309 systemd-networkd[2631]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:95:bd:80.network. May 14 01:06:57.575580 systemd-networkd[2631]: enP1p1s0f1np1: Link UP May 14 01:06:57.575771 systemd-networkd[2631]: enP1p1s0f1np1: Gained carrier May 14 01:06:57.589254 systemd-networkd[2631]: bond0: Link UP May 14 01:06:57.589519 systemd-networkd[2631]: bond0: Gained carrier May 14 01:06:57.589686 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:57.590267 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:57.590509 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:57.590639 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:57.670662 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex May 14 01:06:57.670700 kernel: bond0: active interface up! May 14 01:06:57.793989 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 14 01:06:58.575292 coreos-metadata[2690]: May 14 01:06:58.575 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 14 01:06:58.755394 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:58.835667 coreos-metadata[2779]: May 14 01:06:58.835 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 14 01:06:59.011047 systemd-networkd[2631]: bond0: Gained IPv6LL May 14 01:06:59.011419 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:06:59.014041 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 01:06:59.020103 systemd[1]: Reached target network-online.target - Network is Online. 
May 14 01:06:59.027329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:06:59.043747 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 01:06:59.065925 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 01:06:59.639150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:06:59.645157 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:07:00.072141 kubelet[2861]: E0514 01:07:00.072103 2861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:07:00.074967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:07:00.075116 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:07:00.075477 systemd[1]: kubelet.service: Consumed 696ms CPU time, 245.9M memory peak. May 14 01:07:01.012047 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 01:07:01.018380 systemd[1]: Started sshd@0-147.28.151.154:22-139.178.68.195:50222.service - OpenSSH per-connection server daemon (139.178.68.195:50222). May 14 01:07:01.147817 login[2834]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 14 01:07:01.149289 login[2835]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:01.158777 systemd-logind[2719]: New session 2 of user core. May 14 01:07:01.160150 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 01:07:01.161450 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 01:07:01.180289 coreos-metadata[2690]: May 14 01:07:01.180 INFO Fetch successful May 14 01:07:01.181222 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 01:07:01.183838 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 01:07:01.189201 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 May 14 01:07:01.189381 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity May 14 01:07:01.192366 (systemd)[2894]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 01:07:01.194237 systemd-logind[2719]: New session c1 of user core. May 14 01:07:01.244603 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 01:07:01.246536 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... May 14 01:07:01.314009 systemd[2894]: Queued start job for default target default.target. May 14 01:07:01.325178 systemd[2894]: Created slice app.slice - User Application Slice. May 14 01:07:01.325203 systemd[2894]: Reached target paths.target - Paths. May 14 01:07:01.325236 systemd[2894]: Reached target timers.target - Timers. May 14 01:07:01.326474 systemd[2894]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 01:07:01.334747 systemd[2894]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 01:07:01.334800 systemd[2894]: Reached target sockets.target - Sockets. May 14 01:07:01.334842 systemd[2894]: Reached target basic.target - Basic System. 
May 14 01:07:01.334869 systemd[2894]: Reached target default.target - Main User Target. May 14 01:07:01.334891 systemd[2894]: Startup finished in 136ms. May 14 01:07:01.335211 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 01:07:01.336816 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 01:07:01.399285 coreos-metadata[2779]: May 14 01:07:01.399 INFO Fetch successful May 14 01:07:01.445538 unknown[2779]: wrote ssh authorized keys file for user: core May 14 01:07:01.452143 sshd[2883]: Accepted publickey for core from 139.178.68.195 port 50222 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:01.453547 sshd-session[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:01.468306 systemd-logind[2719]: New session 3 of user core. May 14 01:07:01.469795 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 01:07:01.476180 update-ssh-keys[2922]: Updated "/home/core/.ssh/authorized_keys" May 14 01:07:01.477462 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 01:07:01.478935 systemd[1]: Finished sshkeys.service. May 14 01:07:01.815817 systemd[1]: Started sshd@1-147.28.151.154:22-139.178.68.195:50232.service - OpenSSH per-connection server daemon (139.178.68.195:50232). May 14 01:07:02.069915 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. May 14 01:07:02.070407 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 01:07:02.070534 systemd[1]: Startup finished in 3.222s (kernel) + 20.515s (initrd) + 10.200s (userspace) = 33.938s. May 14 01:07:02.148180 login[2834]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:02.151353 systemd-logind[2719]: New session 1 of user core. May 14 01:07:02.162140 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 01:07:02.236525 sshd[2930]: Accepted publickey for core from 139.178.68.195 port 50232 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:02.237628 sshd-session[2930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:02.240447 systemd-logind[2719]: New session 4 of user core. May 14 01:07:02.250145 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 01:07:02.535272 sshd[2944]: Connection closed by 139.178.68.195 port 50232 May 14 01:07:02.535820 sshd-session[2930]: pam_unix(sshd:session): session closed for user core May 14 01:07:02.539435 systemd[1]: sshd@1-147.28.151.154:22-139.178.68.195:50232.service: Deactivated successfully. May 14 01:07:02.542633 systemd[1]: session-4.scope: Deactivated successfully. May 14 01:07:02.543156 systemd-logind[2719]: Session 4 logged out. Waiting for processes to exit. May 14 01:07:02.543680 systemd-logind[2719]: Removed session 4. May 14 01:07:02.608752 systemd[1]: Started sshd@2-147.28.151.154:22-139.178.68.195:50240.service - OpenSSH per-connection server daemon (139.178.68.195:50240). May 14 01:07:03.029807 sshd[2950]: Accepted publickey for core from 139.178.68.195 port 50240 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:03.030823 sshd-session[2950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:03.033744 systemd-logind[2719]: New session 5 of user core. May 14 01:07:03.045078 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 01:07:03.326697 sshd[2952]: Connection closed by 139.178.68.195 port 50240 May 14 01:07:03.327152 sshd-session[2950]: pam_unix(sshd:session): session closed for user core May 14 01:07:03.330596 systemd[1]: sshd@2-147.28.151.154:22-139.178.68.195:50240.service: Deactivated successfully. May 14 01:07:03.332396 systemd[1]: session-5.scope: Deactivated successfully. May 14 01:07:03.332919 systemd-logind[2719]: Session 5 logged out. Waiting for processes to exit. May 14 01:07:03.333474 systemd-logind[2719]: Removed session 5. May 14 01:07:03.403732 systemd[1]: Started sshd@3-147.28.151.154:22-139.178.68.195:50246.service - OpenSSH per-connection server daemon (139.178.68.195:50246). May 14 01:07:03.820491 sshd[2958]: Accepted publickey for core from 139.178.68.195 port 50246 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:03.821485 sshd-session[2958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:03.824299 systemd-logind[2719]: New session 6 of user core. May 14 01:07:03.836091 systemd[1]: Started session-6.scope - Session 6 of User core. May 14 01:07:04.119731 sshd[2960]: Connection closed by 139.178.68.195 port 50246 May 14 01:07:04.120181 sshd-session[2958]: pam_unix(sshd:session): session closed for user core May 14 01:07:04.123657 systemd[1]: sshd@3-147.28.151.154:22-139.178.68.195:50246.service: Deactivated successfully. May 14 01:07:04.125685 systemd[1]: session-6.scope: Deactivated successfully. May 14 01:07:04.126300 systemd-logind[2719]: Session 6 logged out. Waiting for processes to exit. May 14 01:07:04.126834 systemd-logind[2719]: Removed session 6. May 14 01:07:04.194698 systemd[1]: Started sshd@4-147.28.151.154:22-139.178.68.195:41134.service - OpenSSH per-connection server daemon (139.178.68.195:41134). May 14 01:07:04.338906 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:07:04.629141 sshd[2967]: Accepted publickey for core from 139.178.68.195 port 41134 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:04.630314 sshd-session[2967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:04.633199 systemd-logind[2719]: New session 7 of user core. May 14 01:07:04.642145 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 01:07:04.882557 sudo[2970]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 01:07:04.882816 sudo[2970]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:07:04.895819 sudo[2970]: pam_unix(sudo:session): session closed for user root May 14 01:07:04.961148 sshd[2969]: Connection closed by 139.178.68.195 port 41134 May 14 01:07:04.961546 sshd-session[2967]: pam_unix(sshd:session): session closed for user core May 14 01:07:04.964658 systemd[1]: sshd@4-147.28.151.154:22-139.178.68.195:41134.service: Deactivated successfully. May 14 01:07:04.966135 systemd[1]: session-7.scope: Deactivated successfully. May 14 01:07:04.966675 systemd-logind[2719]: Session 7 logged out. Waiting for processes to exit. May 14 01:07:04.967299 systemd-logind[2719]: Removed session 7. May 14 01:07:05.038913 systemd[1]: Started sshd@5-147.28.151.154:22-139.178.68.195:41142.service - OpenSSH per-connection server daemon (139.178.68.195:41142). 
May 14 01:07:05.461505 sshd[2976]: Accepted publickey for core from 139.178.68.195 port 41142 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:05.462550 sshd-session[2976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:05.465536 systemd-logind[2719]: New session 8 of user core. May 14 01:07:05.479080 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 01:07:05.694473 sudo[2980]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 01:07:05.694730 sudo[2980]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:07:05.697130 sudo[2980]: pam_unix(sudo:session): session closed for user root May 14 01:07:05.701322 sudo[2979]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 01:07:05.701565 sudo[2979]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:07:05.708456 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 01:07:05.741566 augenrules[3002]: No rules May 14 01:07:05.742677 systemd[1]: audit-rules.service: Deactivated successfully. May 14 01:07:05.742886 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 01:07:05.743598 sudo[2979]: pam_unix(sudo:session): session closed for user root May 14 01:07:05.806079 sshd[2978]: Connection closed by 139.178.68.195 port 41142 May 14 01:07:05.806680 sshd-session[2976]: pam_unix(sshd:session): session closed for user core May 14 01:07:05.810556 systemd[1]: sshd@5-147.28.151.154:22-139.178.68.195:41142.service: Deactivated successfully. May 14 01:07:05.813494 systemd[1]: session-8.scope: Deactivated successfully. May 14 01:07:05.814014 systemd-logind[2719]: Session 8 logged out. Waiting for processes to exit. May 14 01:07:05.814580 systemd-logind[2719]: Removed session 8. May 14 01:07:05.886739 systemd[1]: Started sshd@6-147.28.151.154:22-139.178.68.195:41148.service - OpenSSH per-connection server daemon (139.178.68.195:41148). May 14 01:07:06.311636 sshd[3012]: Accepted publickey for core from 139.178.68.195 port 41148 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4 May 14 01:07:06.312654 sshd-session[3012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:07:06.315554 systemd-logind[2719]: New session 9 of user core. May 14 01:07:06.325086 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 01:07:06.552576 sudo[3015]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 01:07:06.552837 sudo[3015]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:07:06.854345 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 01:07:06.869266 (dockerd)[3044]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 01:07:07.091017 dockerd[3044]: time="2025-05-14T01:07:07.090961960Z" level=info msg="Starting up" May 14 01:07:07.093086 dockerd[3044]: time="2025-05-14T01:07:07.093065000Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 01:07:07.118025 dockerd[3044]: time="2025-05-14T01:07:07.117962360Z" level=info msg="Loading containers: start." 
May 14 01:07:07.246987 kernel: Initializing XFRM netlink socket May 14 01:07:07.248509 systemd-timesyncd[2633]: Network configuration changed, trying to establish connection. May 14 01:07:07.303693 systemd-networkd[2631]: docker0: Link UP May 14 01:07:07.362078 dockerd[3044]: time="2025-05-14T01:07:07.362048080Z" level=info msg="Loading containers: done." May 14 01:07:07.371703 dockerd[3044]: time="2025-05-14T01:07:07.371650240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 01:07:07.371783 dockerd[3044]: time="2025-05-14T01:07:07.371711480Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 01:07:07.371890 dockerd[3044]: time="2025-05-14T01:07:07.371875720Z" level=info msg="Daemon has completed initialization" May 14 01:07:07.391898 dockerd[3044]: time="2025-05-14T01:07:07.391851960Z" level=info msg="API listen on /run/docker.sock" May 14 01:07:07.391926 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 01:07:07.679854 systemd-timesyncd[2633]: Contacted time server [2604:2dc0:100:39f::1]:123 (2.flatcar.pool.ntp.org). May 14 01:07:07.679906 systemd-timesyncd[2633]: Initial clock synchronization to Wed 2025-05-14 01:07:07.815979 UTC. May 14 01:07:07.968887 containerd[2738]: time="2025-05-14T01:07:07.968855640Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 01:07:08.108947 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3679793291-merged.mount: Deactivated successfully. May 14 01:07:08.455515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009357464.mount: Deactivated successfully. May 14 01:07:10.195566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 01:07:10.197131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:10.309989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:10.313353 (kubelet)[3361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:07:10.342860 kubelet[3361]: E0514 01:07:10.342826 3361 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:07:10.345893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:07:10.346035 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:07:10.348073 systemd[1]: kubelet.service: Consumed 127ms CPU time, 102.9M memory peak. 
May 14 01:07:10.383160 containerd[2738]: time="2025-05-14T01:07:10.383117384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:10.383422 containerd[2738]: time="2025-05-14T01:07:10.383118600Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 14 01:07:10.384085 containerd[2738]: time="2025-05-14T01:07:10.384065473Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:10.386356 containerd[2738]: time="2025-05-14T01:07:10.386332588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:10.387388 containerd[2738]: time="2025-05-14T01:07:10.387368407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.418475947s" May 14 01:07:10.387416 containerd[2738]: time="2025-05-14T01:07:10.387397907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 01:07:10.387964 containerd[2738]: time="2025-05-14T01:07:10.387944222Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 01:07:12.101494 containerd[2738]: time="2025-05-14T01:07:12.101458249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:12.101708 containerd[2738]: time="2025-05-14T01:07:12.101465521Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978" May 14 01:07:12.102505 containerd[2738]: time="2025-05-14T01:07:12.102475913Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:12.104759 containerd[2738]: time="2025-05-14T01:07:12.104737559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:12.105828 containerd[2738]: time="2025-05-14T01:07:12.105798006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.717813883s" May 14 01:07:12.105854 containerd[2738]: time="2025-05-14T01:07:12.105839294Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 01:07:12.106184 
containerd[2738]: time="2025-05-14T01:07:12.106164268Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 01:07:13.523887 containerd[2738]: time="2025-05-14T01:07:13.523850051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:13.524220 containerd[2738]: time="2025-05-14T01:07:13.523909163Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813" May 14 01:07:13.524803 containerd[2738]: time="2025-05-14T01:07:13.524781481Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:13.527253 containerd[2738]: time="2025-05-14T01:07:13.527226545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:13.528229 containerd[2738]: time="2025-05-14T01:07:13.528201027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.421998925s" May 14 01:07:13.528249 containerd[2738]: time="2025-05-14T01:07:13.528238189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 01:07:13.528607 containerd[2738]: time="2025-05-14T01:07:13.528588706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 01:07:14.313891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561990672.mount: Deactivated successfully. 
May 14 01:07:15.195750 containerd[2738]: time="2025-05-14T01:07:15.195692198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:15.196069 containerd[2738]: time="2025-05-14T01:07:15.195732385Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 14 01:07:15.196511 containerd[2738]: time="2025-05-14T01:07:15.196453780Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:15.198019 containerd[2738]: time="2025-05-14T01:07:15.197988622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:15.198743 containerd[2738]: time="2025-05-14T01:07:15.198673051Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.670054183s" May 14 01:07:15.198743 containerd[2738]: time="2025-05-14T01:07:15.198707963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 01:07:15.199253 containerd[2738]: time="2025-05-14T01:07:15.199059057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 01:07:15.704064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3107239022.mount: Deactivated successfully. 
May 14 01:07:16.653407 containerd[2738]: time="2025-05-14T01:07:16.653373449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:16.653690 containerd[2738]: time="2025-05-14T01:07:16.653398515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 14 01:07:16.654427 containerd[2738]: time="2025-05-14T01:07:16.654398777Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:16.656922 containerd[2738]: time="2025-05-14T01:07:16.656897723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:16.657904 containerd[2738]: time="2025-05-14T01:07:16.657879437Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.458786442s" May 14 01:07:16.657941 containerd[2738]: time="2025-05-14T01:07:16.657913193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 01:07:16.658301 containerd[2738]: time="2025-05-14T01:07:16.658282060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 01:07:16.951555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651322368.mount: Deactivated successfully. 
May 14 01:07:16.951956 containerd[2738]: time="2025-05-14T01:07:16.951925354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:07:16.952030 containerd[2738]: time="2025-05-14T01:07:16.951988683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 01:07:16.952563 containerd[2738]: time="2025-05-14T01:07:16.952543713Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:07:16.954153 containerd[2738]: time="2025-05-14T01:07:16.954134734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:07:16.954892 containerd[2738]: time="2025-05-14T01:07:16.954867761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 296.556933ms" May 14 01:07:16.954939 containerd[2738]: time="2025-05-14T01:07:16.954897414Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 01:07:16.955302 containerd[2738]: time="2025-05-14T01:07:16.955279880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 01:07:17.250369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602868801.mount: Deactivated successfully. May 14 01:07:20.447209 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 01:07:20.449302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:20.591814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:20.595169 (kubelet)[3513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:07:20.625088 kubelet[3513]: E0514 01:07:20.625054 3513 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:07:20.627207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:07:20.627345 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:07:20.628245 systemd[1]: kubelet.service: Consumed 133ms CPU time, 111.1M memory peak. 
May 14 01:07:20.979789 containerd[2738]: time="2025-05-14T01:07:20.979722715Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 14 01:07:20.980034 containerd[2738]: time="2025-05-14T01:07:20.979732187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:20.980856 containerd[2738]: time="2025-05-14T01:07:20.980828898Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:20.983461 containerd[2738]: time="2025-05-14T01:07:20.983437259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:20.984642 containerd[2738]: time="2025-05-14T01:07:20.984617496Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.029305006s" May 14 01:07:20.984666 containerd[2738]: time="2025-05-14T01:07:20.984651572Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 01:07:26.001533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:26.001666 systemd[1]: kubelet.service: Consumed 133ms CPU time, 111.1M memory peak. May 14 01:07:26.004064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:26.025797 systemd[1]: Reload requested from client PID 3606 ('systemctl') (unit session-9.scope)... May 14 01:07:26.025809 systemd[1]: Reloading... May 14 01:07:26.098983 zram_generator::config[3656]: No configuration found. May 14 01:07:26.188199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:07:26.280307 systemd[1]: Reloading finished in 254 ms. May 14 01:07:26.333328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:26.335280 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:26.336725 systemd[1]: kubelet.service: Deactivated successfully. May 14 01:07:26.336925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:26.336959 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.4M memory peak. May 14 01:07:26.338450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:26.444757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:26.448013 (kubelet)[3721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 01:07:26.477951 kubelet[3721]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 01:07:26.477951 kubelet[3721]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 01:07:26.477951 kubelet[3721]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 01:07:26.478211 kubelet[3721]: I0514 01:07:26.478089 3721 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 01:07:27.385062 kubelet[3721]: I0514 01:07:27.385014 3721 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 01:07:27.385062 kubelet[3721]: I0514 01:07:27.385057 3721 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 01:07:27.385300 kubelet[3721]: I0514 01:07:27.385287 3721 server.go:929] "Client rotation is on, will bootstrap in background" May 14 01:07:27.402881 kubelet[3721]: E0514 01:07:27.402856 3721 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.151.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.151.154:6443: connect: connection refused" logger="UnhandledError" May 14 01:07:27.404679 kubelet[3721]: I0514 01:07:27.404659 3721 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 01:07:27.412436 kubelet[3721]: I0514 01:07:27.412412 3721 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 01:07:27.431558 kubelet[3721]: I0514 01:07:27.431536 3721 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 01:07:27.432339 kubelet[3721]: I0514 01:07:27.432321 3721 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 01:07:27.432470 kubelet[3721]: I0514 01:07:27.432443 3721 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 01:07:27.433312 kubelet[3721]: I0514 01:07:27.432469 3721 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-0b8132852a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 01:07:27.433697 kubelet[3721]: I0514 01:07:27.433467 3721 topology_manager.go:138] "Creating topology manager with none policy" May 14 01:07:27.433728 kubelet[3721]: I0514 01:07:27.433702 3721 container_manager_linux.go:300] "Creating device plugin manager" May 14 01:07:27.433890 kubelet[3721]: I0514 01:07:27.433879 3721 state_mem.go:36] "Initialized new in-memory state store" May 14 01:07:27.435595 kubelet[3721]: I0514 01:07:27.435579 3721 kubelet.go:408] "Attempting to sync node with API server" May 14 01:07:27.435619 kubelet[3721]: I0514 01:07:27.435602 3721 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 01:07:27.435698 kubelet[3721]: I0514 01:07:27.435690 3721 kubelet.go:314] "Adding apiserver pod source" May 14 01:07:27.435722 kubelet[3721]: I0514 01:07:27.435700 3721 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 01:07:27.437399 kubelet[3721]: I0514 01:07:27.437382 3721 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 01:07:27.438174 kubelet[3721]: W0514 01:07:27.438137 3721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.151.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-0b8132852a&limit=500&resourceVersion=0": dial tcp 147.28.151.154:6443: connect: connection refused May 14 01:07:27.438202 kubelet[3721]: E0514 01:07:27.438186 3721 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.151.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-0b8132852a&limit=500&resourceVersion=0\": dial tcp 147.28.151.154:6443: connect: connection refused" logger="UnhandledError" May 14 01:07:27.439090 kubelet[3721]: W0514 01:07:27.439051 3721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.151.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.151.154:6443: connect: connection refused May 14 01:07:27.439129 kubelet[3721]: I0514 01:07:27.439115 3721 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 01:07:27.439196 kubelet[3721]: E0514 01:07:27.439112 3721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.151.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.151.154:6443: connect: connection refused" logger="UnhandledError" May 14 01:07:27.439930 kubelet[3721]: W0514 01:07:27.439917 3721 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 01:07:27.440572 kubelet[3721]: I0514 01:07:27.440558 3721 server.go:1269] "Started kubelet" May 14 01:07:27.440835 kubelet[3721]: I0514 01:07:27.440792 3721 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 01:07:27.440999 kubelet[3721]: I0514 01:07:27.440961 3721 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 01:07:27.441208 kubelet[3721]: I0514 01:07:27.441198 3721 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 01:07:27.442042 kubelet[3721]: I0514 01:07:27.442026 3721 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 01:07:27.442070 kubelet[3721]: I0514 01:07:27.442037 3721 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:07:27.442238 kubelet[3721]: I0514 01:07:27.442229 3721 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 01:07:27.442341 kubelet[3721]: I0514 01:07:27.442333 3721 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 01:07:27.442395 kubelet[3721]: I0514 01:07:27.442388 3721 reconciler.go:26] "Reconciler: start to sync state" May 14 01:07:27.442466 kubelet[3721]: E0514 01:07:27.442450 3721 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-0b8132852a\" not found" May 14 01:07:27.442607 kubelet[3721]: E0514 01:07:27.442571 3721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-0b8132852a?timeout=10s\": dial tcp 147.28.151.154:6443: connect: connection refused" interval="200ms" May 14 01:07:27.442649 kubelet[3721]: I0514 01:07:27.442635 3721 factory.go:221] Registration of the systemd container factory successfully May 14 01:07:27.442674 kubelet[3721]: W0514 01:07:27.442644 3721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://147.28.151.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.151.154:6443: connect: connection refused May 14 01:07:27.442696 kubelet[3721]: E0514 01:07:27.442682 3721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.151.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.151.154:6443: connect: connection refused" logger="UnhandledError" May 14 01:07:27.442738 kubelet[3721]: I0514 01:07:27.442723 3721 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:07:27.442927 kubelet[3721]: E0514 01:07:27.442912 3721 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:07:27.443235 kubelet[3721]: I0514 01:07:27.443220 3721 server.go:460] "Adding debug handlers to kubelet server" May 14 01:07:27.443839 kubelet[3721]: I0514 01:07:27.443819 3721 factory.go:221] Registration of the containerd container factory successfully May 14 01:07:27.460024 kubelet[3721]: E0514 01:07:27.443868 3721 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.151.154:6443/api/v1/namespaces/default/events\": dial tcp 147.28.151.154:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-0b8132852a.183f3f5c9ca3637d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-0b8132852a,UID:ci-4284.0.0-n-0b8132852a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-0b8132852a,},FirstTimestamp:2025-05-14 01:07:27.440536445 +0000 UTC m=+0.989801777,LastTimestamp:2025-05-14 01:07:27.440536445 +0000 UTC m=+0.989801777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-0b8132852a,}" May 14 01:07:27.470198 kubelet[3721]: I0514 01:07:27.470160 3721 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:07:27.471176 kubelet[3721]: I0514 01:07:27.471166 3721 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 01:07:27.471199 kubelet[3721]: I0514 01:07:27.471183 3721 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 01:07:27.471199 kubelet[3721]: I0514 01:07:27.471198 3721 kubelet.go:2321] "Starting kubelet main sync loop" May 14 01:07:27.471252 kubelet[3721]: E0514 01:07:27.471236 3721 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:07:27.472551 kubelet[3721]: W0514 01:07:27.472509 3721 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.151.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.151.154:6443: connect: connection refused May 14 01:07:27.472590 kubelet[3721]: E0514 01:07:27.472565 3721 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.151.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.151.154:6443: connect: connection refused" logger="UnhandledError" May 14 01:07:27.474773 kubelet[3721]: I0514 01:07:27.474758 3721 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 01:07:27.474799 kubelet[3721]: I0514 01:07:27.474773 3721 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 01:07:27.474799 kubelet[3721]: I0514 01:07:27.474789 3721 state_mem.go:36] "Initialized new in-memory state store" May 14 01:07:27.475382 kubelet[3721]: I0514 01:07:27.475372 3721 policy_none.go:49] "None policy: Start" May 14 01:07:27.475825 kubelet[3721]: I0514 01:07:27.475816 3721 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 01:07:27.475846 kubelet[3721]: I0514 01:07:27.475832 3721 state_mem.go:35] "Initializing new in-memory state store" May 14 01:07:27.480386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 01:07:27.498087 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 01:07:27.500666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 01:07:27.509802 kubelet[3721]: I0514 01:07:27.509783 3721 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:07:27.510083 kubelet[3721]: I0514 01:07:27.509958 3721 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:07:27.510083 kubelet[3721]: I0514 01:07:27.509971 3721 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:07:27.510160 kubelet[3721]: I0514 01:07:27.510140 3721 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:07:27.510816 kubelet[3721]: E0514 01:07:27.510799 3721 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-0b8132852a\" not found" May 14 01:07:27.579040 systemd[1]: Created slice kubepods-burstable-pod85e425da851fd37f17c5a70c7ad23115.slice - libcontainer container kubepods-burstable-pod85e425da851fd37f17c5a70c7ad23115.slice. May 14 01:07:27.602558 systemd[1]: Created slice kubepods-burstable-pod82654fdf306a0e5be29d2cb478de5b57.slice - libcontainer container kubepods-burstable-pod82654fdf306a0e5be29d2cb478de5b57.slice. 
May 14 01:07:27.611478 kubelet[3721]: I0514 01:07:27.611461 3721 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:27.611813 kubelet[3721]: E0514 01:07:27.611794 3721 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.151.154:6443/api/v1/nodes\": dial tcp 147.28.151.154:6443: connect: connection refused" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:27.618173 systemd[1]: Created slice kubepods-burstable-podf129cd5e282da7141d62b6a31a659e65.slice - libcontainer container kubepods-burstable-podf129cd5e282da7141d62b6a31a659e65.slice. May 14 01:07:27.643448 kubelet[3721]: I0514 01:07:27.643390 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.643543 kubelet[3721]: E0514 01:07:27.643518 3721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-0b8132852a?timeout=10s\": dial tcp 147.28.151.154:6443: connect: connection refused" interval="400ms" May 14 01:07:27.744214 kubelet[3721]: I0514 01:07:27.744151 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744214 kubelet[3721]: I0514 01:07:27.744213 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744519 kubelet[3721]: I0514 01:07:27.744262 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744519 kubelet[3721]: I0514 01:07:27.744323 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744519 kubelet[3721]: I0514 01:07:27.744409 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" 
May 14 01:07:27.744519 kubelet[3721]: I0514 01:07:27.744452 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744519 kubelet[3721]: I0514 01:07:27.744495 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82654fdf306a0e5be29d2cb478de5b57-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-0b8132852a\" (UID: \"82654fdf306a0e5be29d2cb478de5b57\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.744782 kubelet[3721]: I0514 01:07:27.744535 3721 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:27.814320 kubelet[3721]: I0514 01:07:27.814301 3721 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:27.814609 kubelet[3721]: E0514 01:07:27.814584 3721 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.151.154:6443/api/v1/nodes\": dial tcp 147.28.151.154:6443: connect: connection refused" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:27.901995 containerd[2738]: time="2025-05-14T01:07:27.901910652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-0b8132852a,Uid:85e425da851fd37f17c5a70c7ad23115,Namespace:kube-system,Attempt:0,}" May 14 01:07:27.912706 containerd[2738]: time="2025-05-14T01:07:27.912675781Z" level=info msg="connecting to shim a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d" address="unix:///run/containerd/s/5f15eb7c75335486900dc5c6dde69e8f2f5a05b5ee694b992a7120948809477e" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:27.916865 containerd[2738]: time="2025-05-14T01:07:27.916845433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-0b8132852a,Uid:82654fdf306a0e5be29d2cb478de5b57,Namespace:kube-system,Attempt:0,}" May 14 01:07:27.920394 containerd[2738]: time="2025-05-14T01:07:27.920372380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-0b8132852a,Uid:f129cd5e282da7141d62b6a31a659e65,Namespace:kube-system,Attempt:0,}" May 14 01:07:27.924705 containerd[2738]: time="2025-05-14T01:07:27.924681419Z" level=info msg="connecting to shim 0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30" address="unix:///run/containerd/s/6b554491cf3e43356b2edefd6b72263916b1de97e87449528f10ee83d542ef07" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:27.928350 containerd[2738]: time="2025-05-14T01:07:27.928323040Z" level=info msg="connecting to shim ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a" address="unix:///run/containerd/s/9a2bb34d58dd500c7187d6253ad09431d4b5be527ac1f70708e50bf328352de7" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:27.940180 systemd[1]: Started 
cri-containerd-a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d.scope - libcontainer container a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d. May 14 01:07:27.946360 systemd[1]: Started cri-containerd-0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30.scope - libcontainer container 0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30. May 14 01:07:27.947666 systemd[1]: Started cri-containerd-ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a.scope - libcontainer container ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a. May 14 01:07:27.967135 containerd[2738]: time="2025-05-14T01:07:27.967098789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-0b8132852a,Uid:85e425da851fd37f17c5a70c7ad23115,Namespace:kube-system,Attempt:0,} returns sandbox id \"a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d\"" May 14 01:07:27.969906 containerd[2738]: time="2025-05-14T01:07:27.969859905Z" level=info msg="CreateContainer within sandbox \"a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 01:07:27.970961 containerd[2738]: time="2025-05-14T01:07:27.970936234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-0b8132852a,Uid:82654fdf306a0e5be29d2cb478de5b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30\"" May 14 01:07:27.972982 containerd[2738]: time="2025-05-14T01:07:27.972956273Z" level=info msg="CreateContainer within sandbox \"0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 01:07:27.973801 containerd[2738]: time="2025-05-14T01:07:27.973779701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-0b8132852a,Uid:f129cd5e282da7141d62b6a31a659e65,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a\"" May 14 01:07:27.974115 containerd[2738]: time="2025-05-14T01:07:27.974091400Z" level=info msg="Container 5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:27.975382 containerd[2738]: time="2025-05-14T01:07:27.975358987Z" level=info msg="CreateContainer within sandbox \"ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 01:07:27.976796 containerd[2738]: time="2025-05-14T01:07:27.976773090Z" level=info msg="Container 3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:27.977897 containerd[2738]: time="2025-05-14T01:07:27.977874492Z" level=info msg="CreateContainer within sandbox \"a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a\"" May 14 01:07:27.978335 containerd[2738]: time="2025-05-14T01:07:27.978312802Z" level=info msg="StartContainer for \"5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a\"" May 14 01:07:27.978934 containerd[2738]: time="2025-05-14T01:07:27.978911648Z" level=info msg="Container 
1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:27.979385 containerd[2738]: time="2025-05-14T01:07:27.979360172Z" level=info msg="connecting to shim 5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a" address="unix:///run/containerd/s/5f15eb7c75335486900dc5c6dde69e8f2f5a05b5ee694b992a7120948809477e" protocol=ttrpc version=3 May 14 01:07:27.979700 containerd[2738]: time="2025-05-14T01:07:27.979675396Z" level=info msg="CreateContainer within sandbox \"0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf\"" May 14 01:07:27.979955 containerd[2738]: time="2025-05-14T01:07:27.979936708Z" level=info msg="StartContainer for \"3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf\"" May 14 01:07:27.980831 containerd[2738]: time="2025-05-14T01:07:27.980810964Z" level=info msg="connecting to shim 3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf" address="unix:///run/containerd/s/6b554491cf3e43356b2edefd6b72263916b1de97e87449528f10ee83d542ef07" protocol=ttrpc version=3 May 14 01:07:27.982087 containerd[2738]: time="2025-05-14T01:07:27.982060246Z" level=info msg="CreateContainer within sandbox \"ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62\"" May 14 01:07:27.982313 containerd[2738]: time="2025-05-14T01:07:27.982291117Z" level=info msg="StartContainer for \"1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62\"" May 14 01:07:27.983222 containerd[2738]: time="2025-05-14T01:07:27.983199339Z" level=info msg="connecting to shim 1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62" address="unix:///run/containerd/s/9a2bb34d58dd500c7187d6253ad09431d4b5be527ac1f70708e50bf328352de7" protocol=ttrpc version=3 May 14 01:07:28.003105 systemd[1]: Started cri-containerd-3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf.scope - libcontainer container 3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf. May 14 01:07:28.004279 systemd[1]: Started cri-containerd-5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a.scope - libcontainer container 5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a. May 14 01:07:28.006539 systemd[1]: Started cri-containerd-1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62.scope - libcontainer container 1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62. 
May 14 01:07:28.036134 containerd[2738]: time="2025-05-14T01:07:28.036104001Z" level=info msg="StartContainer for \"3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf\" returns successfully" May 14 01:07:28.036223 containerd[2738]: time="2025-05-14T01:07:28.036192546Z" level=info msg="StartContainer for \"1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62\" returns successfully" May 14 01:07:28.036302 containerd[2738]: time="2025-05-14T01:07:28.036277926Z" level=info msg="StartContainer for \"5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a\" returns successfully" May 14 01:07:28.044065 kubelet[3721]: E0514 01:07:28.044013 3721 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.151.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-0b8132852a?timeout=10s\": dial tcp 147.28.151.154:6443: connect: connection refused" interval="800ms" May 14 01:07:28.217356 kubelet[3721]: I0514 01:07:28.217332 3721 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:29.407358 kubelet[3721]: E0514 01:07:29.407324 3721 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-0b8132852a\" not found" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:29.437372 kubelet[3721]: I0514 01:07:29.437349 3721 apiserver.go:52] "Watching apiserver" May 14 01:07:29.442864 kubelet[3721]: I0514 01:07:29.442845 3721 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 01:07:29.510955 kubelet[3721]: I0514 01:07:29.510930 3721 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:30.904532 kubelet[3721]: W0514 01:07:30.904508 3721 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:07:31.385321 systemd[1]: Reload requested from client PID 4145 ('systemctl') (unit session-9.scope)... May 14 01:07:31.385333 systemd[1]: Reloading... May 14 01:07:31.456992 zram_generator::config[4196]: No configuration found. May 14 01:07:31.547045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:07:31.649680 systemd[1]: Reloading finished in 264 ms. May 14 01:07:31.670246 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:31.690891 systemd[1]: kubelet.service: Deactivated successfully. May 14 01:07:31.691182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:31.691234 systemd[1]: kubelet.service: Consumed 1.442s CPU time, 141.3M memory peak. May 14 01:07:31.692908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:07:31.817044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:07:31.820527 (kubelet)[4258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 01:07:31.849113 kubelet[4258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 01:07:31.849113 kubelet[4258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 01:07:31.849113 kubelet[4258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 01:07:31.849738 kubelet[4258]: I0514 01:07:31.849422 4258 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 01:07:31.854728 kubelet[4258]: I0514 01:07:31.854708 4258 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 01:07:31.854752 kubelet[4258]: I0514 01:07:31.854730 4258 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 01:07:31.854956 kubelet[4258]: I0514 01:07:31.854946 4258 server.go:929] "Client rotation is on, will bootstrap in background" May 14 01:07:31.856225 kubelet[4258]: I0514 01:07:31.856210 4258 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 01:07:31.858028 kubelet[4258]: I0514 01:07:31.858007 4258 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 01:07:31.860725 kubelet[4258]: I0514 01:07:31.860713 4258 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 01:07:31.879176 kubelet[4258]: I0514 01:07:31.879144 4258 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 01:07:31.879314 kubelet[4258]: I0514 01:07:31.879299 4258 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 01:07:31.879427 kubelet[4258]: I0514 01:07:31.879395 4258 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 01:07:31.879575 kubelet[4258]: I0514 01:07:31.879423 4258 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4284.0.0-n-0b8132852a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 01:07:31.879647 kubelet[4258]: I0514 01:07:31.879586 4258 topology_manager.go:138] "Creating topology manager with none policy" May 14 01:07:31.879647 kubelet[4258]: I0514 01:07:31.879596 4258 container_manager_linux.go:300] "Creating device plugin manager" May 14 01:07:31.879647 kubelet[4258]: I0514 01:07:31.879625 4258 state_mem.go:36] "Initialized new in-memory state store" May 14 01:07:31.879726 kubelet[4258]: I0514 01:07:31.879717 4258 kubelet.go:408] "Attempting to sync node with API server" May 14 01:07:31.879750 kubelet[4258]: I0514 01:07:31.879729 4258 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 01:07:31.879750 kubelet[4258]: I0514 01:07:31.879749 4258 kubelet.go:314] "Adding apiserver pod source" May 14 01:07:31.879789 kubelet[4258]: I0514 01:07:31.879758 4258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 01:07:31.880211 kubelet[4258]: I0514 01:07:31.880191 4258 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 01:07:31.880659 kubelet[4258]: I0514 01:07:31.880646 4258 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 01:07:31.881046 kubelet[4258]: I0514 01:07:31.881032 4258 server.go:1269] "Started kubelet" May 14 01:07:31.881112 kubelet[4258]: I0514 01:07:31.881052 4258 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 01:07:31.881153 kubelet[4258]: I0514 01:07:31.881106 4258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 01:07:31.881325 kubelet[4258]: I0514 01:07:31.881312 4258 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 01:07:31.882697 kubelet[4258]: I0514 01:07:31.882678 4258 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:07:31.882884 kubelet[4258]: I0514 01:07:31.882601 4258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 01:07:31.883111 kubelet[4258]: I0514 01:07:31.883051 4258 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 01:07:31.883136 kubelet[4258]: I0514 01:07:31.883047 4258 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 01:07:31.883397 kubelet[4258]: I0514 01:07:31.883377 4258 reconciler.go:26] "Reconciler: start to sync state" May 14 01:07:31.883999 kubelet[4258]: E0514 01:07:31.883975 4258 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:07:31.883999 kubelet[4258]: E0514 01:07:31.883988 4258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-0b8132852a\" not found" May 14 01:07:31.884211 kubelet[4258]: I0514 01:07:31.884189 4258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:07:31.884957 kubelet[4258]: I0514 01:07:31.884942 4258 factory.go:221] Registration of the containerd container factory successfully May 14 01:07:31.884982 kubelet[4258]: I0514 01:07:31.884958 4258 factory.go:221] Registration of the systemd container factory successfully May 14 01:07:31.885114 kubelet[4258]: I0514 01:07:31.885099 4258 server.go:460] "Adding debug handlers to kubelet server" May 14 01:07:31.890661 kubelet[4258]: I0514 01:07:31.890508 4258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:07:31.892283 kubelet[4258]: I0514 01:07:31.892264 4258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 01:07:31.892308 kubelet[4258]: I0514 01:07:31.892289 4258 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 01:07:31.892308 kubelet[4258]: I0514 01:07:31.892307 4258 kubelet.go:2321] "Starting kubelet main sync loop" May 14 01:07:31.892363 kubelet[4258]: E0514 01:07:31.892349 4258 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:07:31.916956 kubelet[4258]: I0514 01:07:31.916892 4258 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 01:07:31.916956 kubelet[4258]: I0514 01:07:31.916909 4258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 01:07:31.916956 kubelet[4258]: I0514 01:07:31.916929 4258 state_mem.go:36] "Initialized new in-memory state store" May 14 01:07:31.917102 kubelet[4258]: I0514 01:07:31.917080 4258 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 01:07:31.917149 kubelet[4258]: I0514 01:07:31.917093 4258 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 01:07:31.917149 kubelet[4258]: I0514 01:07:31.917112 4258 policy_none.go:49] "None policy: Start" May 14 01:07:31.917550 kubelet[4258]: I0514 01:07:31.917537 4258 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 01:07:31.917572 kubelet[4258]: I0514 01:07:31.917555 4258 state_mem.go:35] "Initializing new in-memory state store" May 14 01:07:31.917722 kubelet[4258]: I0514 01:07:31.917711 4258 state_mem.go:75] "Updated machine memory state" May 14 01:07:31.920726 kubelet[4258]: I0514 01:07:31.920704 4258 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:07:31.920885 kubelet[4258]: I0514 01:07:31.920869 4258 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:07:31.920926 kubelet[4258]: I0514 01:07:31.920881 4258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:07:31.921050 kubelet[4258]: I0514 01:07:31.921029 4258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:07:31.995679 kubelet[4258]: W0514 01:07:31.995658 4258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:07:31.995896 kubelet[4258]: W0514 01:07:31.995880 4258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:07:31.996222 kubelet[4258]: W0514 01:07:31.996209 4258 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 01:07:31.996267 kubelet[4258]: E0514 01:07:31.996251 4258 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" already exists" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.023906 kubelet[4258]: I0514 01:07:32.023889 4258 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:32.027819 kubelet[4258]: I0514 01:07:32.027794 4258 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:32.027875 kubelet[4258]: I0514 01:07:32.027856 4258 kubelet_node_status.go:75] "Successfully registered 
node" node="ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184480 kubelet[4258]: I0514 01:07:32.184393 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/82654fdf306a0e5be29d2cb478de5b57-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-0b8132852a\" (UID: \"82654fdf306a0e5be29d2cb478de5b57\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184480 kubelet[4258]: I0514 01:07:32.184434 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184480 kubelet[4258]: I0514 01:07:32.184459 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184671 kubelet[4258]: I0514 01:07:32.184498 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184671 kubelet[4258]: I0514 01:07:32.184524 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f129cd5e282da7141d62b6a31a659e65-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-0b8132852a\" (UID: \"f129cd5e282da7141d62b6a31a659e65\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184671 kubelet[4258]: I0514 01:07:32.184547 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184671 kubelet[4258]: I0514 01:07:32.184569 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184671 kubelet[4258]: I0514 01:07:32.184594 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.184808 kubelet[4258]: I0514 01:07:32.184615 4258 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85e425da851fd37f17c5a70c7ad23115-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-0b8132852a\" (UID: \"85e425da851fd37f17c5a70c7ad23115\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" May 14 01:07:32.880061 kubelet[4258]: I0514 01:07:32.880031 4258 apiserver.go:52] "Watching apiserver" May 14 01:07:32.884220 kubelet[4258]: I0514 01:07:32.884197 4258 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 01:07:32.917798 kubelet[4258]: I0514 01:07:32.917727 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-0b8132852a" podStartSLOduration=1.9177060190000002 podStartE2EDuration="1.917706019s" podCreationTimestamp="2025-05-14 01:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:07:32.913327794 +0000 UTC m=+1.090029452" watchObservedRunningTime="2025-05-14 01:07:32.917706019 +0000 UTC m=+1.094407637" May 14 01:07:32.917944 kubelet[4258]: I0514 01:07:32.917845 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-0b8132852a" podStartSLOduration=1.917839911 podStartE2EDuration="1.917839911s" podCreationTimestamp="2025-05-14 01:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:07:32.917819337 +0000 UTC m=+1.094520955" watchObservedRunningTime="2025-05-14 01:07:32.917839911 +0000 UTC m=+1.094541529" May 14 01:07:32.928960 kubelet[4258]: I0514 01:07:32.928910 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-0b8132852a" podStartSLOduration=2.928896949 podStartE2EDuration="2.928896949s" podCreationTimestamp="2025-05-14 01:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:07:32.923190167 +0000 UTC m=+1.099891785" watchObservedRunningTime="2025-05-14 01:07:32.928896949 +0000 UTC m=+1.105598567" May 14 01:07:36.200271 sudo[3015]: pam_unix(sudo:session): session closed for user root May 14 01:07:36.265373 sshd[3014]: Connection closed by 139.178.68.195 port 41148 May 14 01:07:36.265699 sshd-session[3012]: pam_unix(sshd:session): session closed for user core May 14 01:07:36.268658 systemd[1]: sshd@6-147.28.151.154:22-139.178.68.195:41148.service: Deactivated successfully. May 14 01:07:36.270434 systemd[1]: session-9.scope: Deactivated successfully. May 14 01:07:36.270617 systemd[1]: session-9.scope: Consumed 7.181s CPU time, 242.1M memory peak. May 14 01:07:36.271662 systemd-logind[2719]: Session 9 logged out. Waiting for processes to exit. May 14 01:07:36.272241 systemd-logind[2719]: Removed session 9. May 14 01:07:39.099572 kubelet[4258]: I0514 01:07:39.099537 4258 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 01:07:39.099963 containerd[2738]: time="2025-05-14T01:07:39.099827633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 01:07:39.100151 kubelet[4258]: I0514 01:07:39.099961 4258 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 01:07:40.084809 systemd[1]: Created slice kubepods-besteffort-pod04b256f7_36f9_4507_a52b_f48b7f60a4b8.slice - libcontainer container kubepods-besteffort-pod04b256f7_36f9_4507_a52b_f48b7f60a4b8.slice. May 14 01:07:40.131871 kubelet[4258]: I0514 01:07:40.131841 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04b256f7-36f9-4507-a52b-f48b7f60a4b8-kube-proxy\") pod \"kube-proxy-qpzm7\" (UID: \"04b256f7-36f9-4507-a52b-f48b7f60a4b8\") " pod="kube-system/kube-proxy-qpzm7" May 14 01:07:40.131871 kubelet[4258]: I0514 01:07:40.131876 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04b256f7-36f9-4507-a52b-f48b7f60a4b8-xtables-lock\") pod \"kube-proxy-qpzm7\" (UID: \"04b256f7-36f9-4507-a52b-f48b7f60a4b8\") " pod="kube-system/kube-proxy-qpzm7" May 14 01:07:40.132172 kubelet[4258]: I0514 01:07:40.131894 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04b256f7-36f9-4507-a52b-f48b7f60a4b8-lib-modules\") pod \"kube-proxy-qpzm7\" (UID: \"04b256f7-36f9-4507-a52b-f48b7f60a4b8\") " pod="kube-system/kube-proxy-qpzm7" May 14 01:07:40.132172 kubelet[4258]: I0514 01:07:40.131913 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2g5\" (UniqueName: \"kubernetes.io/projected/04b256f7-36f9-4507-a52b-f48b7f60a4b8-kube-api-access-rj2g5\") pod \"kube-proxy-qpzm7\" (UID: \"04b256f7-36f9-4507-a52b-f48b7f60a4b8\") " pod="kube-system/kube-proxy-qpzm7" May 14 01:07:40.245855 systemd[1]: Created slice kubepods-besteffort-pod4299382d_f0ac_4dfd_9dce_b2a255adb21a.slice - libcontainer container kubepods-besteffort-pod4299382d_f0ac_4dfd_9dce_b2a255adb21a.slice. 
May 14 01:07:40.332342 kubelet[4258]: I0514 01:07:40.332311 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4299382d-f0ac-4dfd-9dce-b2a255adb21a-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-mlxbx\" (UID: \"4299382d-f0ac-4dfd-9dce-b2a255adb21a\") " pod="tigera-operator/tigera-operator-6f6897fdc5-mlxbx" May 14 01:07:40.332417 kubelet[4258]: I0514 01:07:40.332355 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxh4f\" (UniqueName: \"kubernetes.io/projected/4299382d-f0ac-4dfd-9dce-b2a255adb21a-kube-api-access-nxh4f\") pod \"tigera-operator-6f6897fdc5-mlxbx\" (UID: \"4299382d-f0ac-4dfd-9dce-b2a255adb21a\") " pod="tigera-operator/tigera-operator-6f6897fdc5-mlxbx" May 14 01:07:40.399556 containerd[2738]: time="2025-05-14T01:07:40.399470841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpzm7,Uid:04b256f7-36f9-4507-a52b-f48b7f60a4b8,Namespace:kube-system,Attempt:0,}" May 14 01:07:40.408140 containerd[2738]: time="2025-05-14T01:07:40.408115202Z" level=info msg="connecting to shim 2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15" address="unix:///run/containerd/s/5b1b9cac3b83ed97e715e14d5894eacefc2c096472a38309d1ba36e8c4552113" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:40.434168 systemd[1]: Started cri-containerd-2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15.scope - libcontainer container 2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15. May 14 01:07:40.451495 containerd[2738]: time="2025-05-14T01:07:40.451459157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qpzm7,Uid:04b256f7-36f9-4507-a52b-f48b7f60a4b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15\"" May 14 01:07:40.453528 containerd[2738]: time="2025-05-14T01:07:40.453506301Z" level=info msg="CreateContainer within sandbox \"2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 01:07:40.459022 containerd[2738]: time="2025-05-14T01:07:40.458995787Z" level=info msg="Container dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:40.463876 containerd[2738]: time="2025-05-14T01:07:40.463848768Z" level=info msg="CreateContainer within sandbox \"2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468\"" May 14 01:07:40.464211 containerd[2738]: time="2025-05-14T01:07:40.464189125Z" level=info msg="StartContainer for \"dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468\"" May 14 01:07:40.465456 containerd[2738]: time="2025-05-14T01:07:40.465428286Z" level=info msg="connecting to shim dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468" address="unix:///run/containerd/s/5b1b9cac3b83ed97e715e14d5894eacefc2c096472a38309d1ba36e8c4552113" protocol=ttrpc version=3 May 14 01:07:40.489155 systemd[1]: Started cri-containerd-dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468.scope - libcontainer container dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468. 
May 14 01:07:40.516554 containerd[2738]: time="2025-05-14T01:07:40.516525160Z" level=info msg="StartContainer for \"dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468\" returns successfully" May 14 01:07:40.548174 containerd[2738]: time="2025-05-14T01:07:40.548149416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-mlxbx,Uid:4299382d-f0ac-4dfd-9dce-b2a255adb21a,Namespace:tigera-operator,Attempt:0,}" May 14 01:07:40.556501 containerd[2738]: time="2025-05-14T01:07:40.556472664Z" level=info msg="connecting to shim 813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9" address="unix:///run/containerd/s/dd00aac18963264f7b62e4faddfea51ba2495a6472a81ec31faceb9faf9f39fb" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:40.580099 systemd[1]: Started cri-containerd-813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9.scope - libcontainer container 813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9. May 14 01:07:40.605065 containerd[2738]: time="2025-05-14T01:07:40.605035323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-mlxbx,Uid:4299382d-f0ac-4dfd-9dce-b2a255adb21a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9\"" May 14 01:07:40.606242 containerd[2738]: time="2025-05-14T01:07:40.606222392Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 01:07:40.918182 kubelet[4258]: I0514 01:07:40.918124 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qpzm7" podStartSLOduration=0.918107558 podStartE2EDuration="918.107558ms" podCreationTimestamp="2025-05-14 01:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:07:40.918036302 +0000 UTC m=+9.094737920" watchObservedRunningTime="2025-05-14 01:07:40.918107558 +0000 UTC m=+9.094809176" May 14 01:07:41.132712 update_engine[2727]: I20250514 01:07:41.132589 2727 update_attempter.cc:509] Updating boot flags... May 14 01:07:41.163991 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4820) May 14 01:07:41.192994 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4820) May 14 01:07:41.628756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092139739.mount: Deactivated successfully. 
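Putting the systemd units together: the "Created slice kubepods-besteffort-pod..." entries and the "Started cri-containerd-<id>.scope" entries describe the same pod at two levels, with the container scope nested under the pod slice. The path below is only an assumption about the conventional kubepods.slice / kubepods-besteffort.slice nesting on a cgroup v2 host, assembled from identifiers already present in the log:

```python
# Illustrative only: where the kube-proxy container's cgroup would normally live,
# assuming the usual kubepods.slice -> kubepods-besteffort.slice nesting.
pod_uid = "04b256f7-36f9-4507-a52b-f48b7f60a4b8"  # from the volume entries above
container_id = "dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468"  # kube-proxy container

pod_slice = f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"
scope = f"cri-containerd-{container_id}.scope"

cgroup_path = (
    "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/"
    f"{pod_slice}/{scope}"
)
print(cgroup_path)
```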
May 14 01:07:41.911234 containerd[2738]: time="2025-05-14T01:07:41.911156567Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:41.911546 containerd[2738]: time="2025-05-14T01:07:41.911166729Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 01:07:41.911893 containerd[2738]: time="2025-05-14T01:07:41.911875281Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:41.913540 containerd[2738]: time="2025-05-14T01:07:41.913516873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:41.914208 containerd[2738]: time="2025-05-14T01:07:41.914192538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.307941579s" May 14 01:07:41.914236 containerd[2738]: time="2025-05-14T01:07:41.914213822Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 01:07:41.915609 containerd[2738]: time="2025-05-14T01:07:41.915593238Z" level=info msg="CreateContainer within sandbox \"813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 01:07:41.918988 containerd[2738]: time="2025-05-14T01:07:41.918960279Z" level=info msg="Container 27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:41.921735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2753459441.mount: Deactivated successfully. May 14 01:07:41.922465 containerd[2738]: time="2025-05-14T01:07:41.922402097Z" level=info msg="CreateContainer within sandbox \"813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8\"" May 14 01:07:41.923023 containerd[2738]: time="2025-05-14T01:07:41.922993544Z" level=info msg="StartContainer for \"27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8\"" May 14 01:07:41.924384 containerd[2738]: time="2025-05-14T01:07:41.924356796Z" level=info msg="connecting to shim 27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8" address="unix:///run/containerd/s/dd00aac18963264f7b62e4faddfea51ba2495a6472a81ec31faceb9faf9f39fb" protocol=ttrpc version=3 May 14 01:07:41.946144 systemd[1]: Started cri-containerd-27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8.scope - libcontainer container 27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8. 
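The pull entries above also allow a quick sanity check on transfer speed: containerd reports 19,323,084 bytes read for quay.io/tigera/operator:v1.36.7 and a pull time of 1.307941579s, which works out to roughly 14-15 MB/s. A throwaway calculation (whether the byte counter covers exactly the compressed transfer is an assumption):

```python
# Rough throughput estimate for the tigera-operator image pull, using the two
# figures containerd logged above ("bytes read=19323084", "in 1.307941579s").
bytes_read = 19_323_084
pull_seconds = 1.307941579

print(f"{bytes_read / pull_seconds / 1e6:.1f} MB/s")     # ~14.8 MB/s
print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # ~14.1 MiB/s
```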
May 14 01:07:41.965717 containerd[2738]: time="2025-05-14T01:07:41.965685292Z" level=info msg="StartContainer for \"27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8\" returns successfully" May 14 01:07:42.926232 kubelet[4258]: I0514 01:07:42.926184 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-mlxbx" podStartSLOduration=1.617340493 podStartE2EDuration="2.926169547s" podCreationTimestamp="2025-05-14 01:07:40 +0000 UTC" firstStartedPulling="2025-05-14 01:07:40.60585883 +0000 UTC m=+8.782560448" lastFinishedPulling="2025-05-14 01:07:41.914687924 +0000 UTC m=+10.091389502" observedRunningTime="2025-05-14 01:07:42.926032639 +0000 UTC m=+11.102734217" watchObservedRunningTime="2025-05-14 01:07:42.926169547 +0000 UTC m=+11.102871165" May 14 01:07:46.214775 systemd[1]: Created slice kubepods-besteffort-pod559b6a8d_3b60_4ecf_9622_e36b0087b018.slice - libcontainer container kubepods-besteffort-pod559b6a8d_3b60_4ecf_9622_e36b0087b018.slice. May 14 01:07:46.274126 kubelet[4258]: I0514 01:07:46.274094 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/559b6a8d-3b60-4ecf-9622-e36b0087b018-typha-certs\") pod \"calico-typha-dd847f4c6-vx2gv\" (UID: \"559b6a8d-3b60-4ecf-9622-e36b0087b018\") " pod="calico-system/calico-typha-dd847f4c6-vx2gv" May 14 01:07:46.274542 kubelet[4258]: I0514 01:07:46.274470 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/559b6a8d-3b60-4ecf-9622-e36b0087b018-tigera-ca-bundle\") pod \"calico-typha-dd847f4c6-vx2gv\" (UID: \"559b6a8d-3b60-4ecf-9622-e36b0087b018\") " pod="calico-system/calico-typha-dd847f4c6-vx2gv" May 14 01:07:46.274542 kubelet[4258]: I0514 01:07:46.274501 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnssz\" (UniqueName: \"kubernetes.io/projected/559b6a8d-3b60-4ecf-9622-e36b0087b018-kube-api-access-hnssz\") pod \"calico-typha-dd847f4c6-vx2gv\" (UID: \"559b6a8d-3b60-4ecf-9622-e36b0087b018\") " pod="calico-system/calico-typha-dd847f4c6-vx2gv" May 14 01:07:46.411453 systemd[1]: Created slice kubepods-besteffort-pod57b8a373_d4fa_4b33_8ca4_b4d6b8931bbb.slice - libcontainer container kubepods-besteffort-pod57b8a373_d4fa_4b33_8ca4_b4d6b8931bbb.slice. 
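The pod_startup_latency_tracker entry for tigera-operator-6f6897fdc5-mlxbx is the first in this log with non-zero pull timestamps, and it shows how the two durations relate: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing that arithmetic from the logged timestamps, with the exclusion rule inferred from these numbers rather than from kubelet source:

```python
# Recompute the tigera-operator startup figures from the timestamps logged above.
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2025-05-14 01:07:40.000000", fmt)  # podCreationTimestamp
pull_from = datetime.strptime("2025-05-14 01:07:40.605858", fmt)  # firstStartedPulling (truncated to microseconds)
pull_to   = datetime.strptime("2025-05-14 01:07:41.914687", fmt)  # lastFinishedPulling (truncated to microseconds)
running   = datetime.strptime("2025-05-14 01:07:42.926169", fmt)  # observedRunningTime (truncated to microseconds)

e2e = (running - created).total_seconds()
slo = e2e - (pull_to - pull_from).total_seconds()
print(f"E2E ~ {e2e:.3f}s, SLO ~ {slo:.3f}s")  # ~2.926s and ~1.617s, matching the log
```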
May 14 01:07:46.475844 kubelet[4258]: I0514 01:07:46.475734 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-var-lib-calico\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.475844 kubelet[4258]: I0514 01:07:46.475773 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-cni-net-dir\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.475844 kubelet[4258]: I0514 01:07:46.475790 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-xtables-lock\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.475844 kubelet[4258]: I0514 01:07:46.475815 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-lib-modules\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476062 kubelet[4258]: I0514 01:07:46.475897 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-policysync\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476062 kubelet[4258]: I0514 01:07:46.475949 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-node-certs\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476062 kubelet[4258]: I0514 01:07:46.475988 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-cni-log-dir\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476062 kubelet[4258]: I0514 01:07:46.476020 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-tigera-ca-bundle\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476062 kubelet[4258]: I0514 01:07:46.476048 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-var-run-calico\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476264 kubelet[4258]: I0514 01:07:46.476075 4258 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6ntj\" (UniqueName: \"kubernetes.io/projected/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-kube-api-access-j6ntj\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476264 kubelet[4258]: I0514 01:07:46.476107 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-flexvol-driver-host\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.476264 kubelet[4258]: I0514 01:07:46.476134 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb-cni-bin-dir\") pod \"calico-node-fhnsm\" (UID: \"57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb\") " pod="calico-system/calico-node-fhnsm" May 14 01:07:46.518287 containerd[2738]: time="2025-05-14T01:07:46.518252471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dd847f4c6-vx2gv,Uid:559b6a8d-3b60-4ecf-9622-e36b0087b018,Namespace:calico-system,Attempt:0,}" May 14 01:07:46.526585 containerd[2738]: time="2025-05-14T01:07:46.526561538Z" level=info msg="connecting to shim a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3" address="unix:///run/containerd/s/0dde75ba4ec0d4a0e9fd8e66f8b3bd6239483861529871d685efc2d44ad36c32" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:46.553088 systemd[1]: Started cri-containerd-a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3.scope - libcontainer container a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3. May 14 01:07:46.577965 kubelet[4258]: E0514 01:07:46.577942 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.577965 kubelet[4258]: W0514 01:07:46.577961 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.578091 kubelet[4258]: E0514 01:07:46.578054 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.578325 containerd[2738]: time="2025-05-14T01:07:46.578299888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dd847f4c6-vx2gv,Uid:559b6a8d-3b60-4ecf-9622-e36b0087b018,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3\"" May 14 01:07:46.579871 containerd[2738]: time="2025-05-14T01:07:46.579788249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 01:07:46.579974 kubelet[4258]: E0514 01:07:46.579888 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.579974 kubelet[4258]: W0514 01:07:46.579908 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.579974 kubelet[4258]: E0514 01:07:46.579923 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.585782 kubelet[4258]: E0514 01:07:46.585765 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.585811 kubelet[4258]: W0514 01:07:46.585782 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.585811 kubelet[4258]: E0514 01:07:46.585796 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.600466 kubelet[4258]: E0514 01:07:46.600435 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjbvf" podUID="7395286e-89a3-42ee-9c78-0a22650e7dbd" May 14 01:07:46.677256 kubelet[4258]: E0514 01:07:46.677235 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.677256 kubelet[4258]: W0514 01:07:46.677251 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.677321 kubelet[4258]: E0514 01:07:46.677267 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.677536 kubelet[4258]: E0514 01:07:46.677525 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.677536 kubelet[4258]: W0514 01:07:46.677533 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.677583 kubelet[4258]: E0514 01:07:46.677543 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.677773 kubelet[4258]: E0514 01:07:46.677764 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.677773 kubelet[4258]: W0514 01:07:46.677771 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.677812 kubelet[4258]: E0514 01:07:46.677779 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.677957 kubelet[4258]: E0514 01:07:46.677948 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.677957 kubelet[4258]: W0514 01:07:46.677956 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678009 kubelet[4258]: E0514 01:07:46.677964 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.678201 kubelet[4258]: E0514 01:07:46.678191 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.678201 kubelet[4258]: W0514 01:07:46.678199 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678246 kubelet[4258]: E0514 01:07:46.678206 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.678396 kubelet[4258]: E0514 01:07:46.678387 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.678396 kubelet[4258]: W0514 01:07:46.678394 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678437 kubelet[4258]: E0514 01:07:46.678402 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.678584 kubelet[4258]: E0514 01:07:46.678575 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.678584 kubelet[4258]: W0514 01:07:46.678582 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678636 kubelet[4258]: E0514 01:07:46.678589 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.678789 kubelet[4258]: E0514 01:07:46.678782 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.678810 kubelet[4258]: W0514 01:07:46.678789 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678810 kubelet[4258]: E0514 01:07:46.678796 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.678971 kubelet[4258]: E0514 01:07:46.678963 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.678994 kubelet[4258]: W0514 01:07:46.678971 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.678994 kubelet[4258]: E0514 01:07:46.678984 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.679185 kubelet[4258]: E0514 01:07:46.679178 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.679206 kubelet[4258]: W0514 01:07:46.679186 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.679206 kubelet[4258]: E0514 01:07:46.679193 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.679403 kubelet[4258]: E0514 01:07:46.679396 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.679424 kubelet[4258]: W0514 01:07:46.679403 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.679424 kubelet[4258]: E0514 01:07:46.679411 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.679602 kubelet[4258]: E0514 01:07:46.679592 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.679602 kubelet[4258]: W0514 01:07:46.679600 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.679646 kubelet[4258]: E0514 01:07:46.679607 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.679798 kubelet[4258]: E0514 01:07:46.679788 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.679798 kubelet[4258]: W0514 01:07:46.679795 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.679841 kubelet[4258]: E0514 01:07:46.679803 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.679953 kubelet[4258]: E0514 01:07:46.679944 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.679953 kubelet[4258]: W0514 01:07:46.679951 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680000 kubelet[4258]: E0514 01:07:46.679959 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.680129 kubelet[4258]: E0514 01:07:46.680119 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.680129 kubelet[4258]: W0514 01:07:46.680127 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680168 kubelet[4258]: E0514 01:07:46.680134 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.680309 kubelet[4258]: E0514 01:07:46.680299 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.680309 kubelet[4258]: W0514 01:07:46.680306 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680350 kubelet[4258]: E0514 01:07:46.680314 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.680498 kubelet[4258]: E0514 01:07:46.680489 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.680498 kubelet[4258]: W0514 01:07:46.680496 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680540 kubelet[4258]: E0514 01:07:46.680504 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.680718 kubelet[4258]: E0514 01:07:46.680711 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.680737 kubelet[4258]: W0514 01:07:46.680718 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680737 kubelet[4258]: E0514 01:07:46.680725 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.680929 kubelet[4258]: E0514 01:07:46.680923 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.680953 kubelet[4258]: W0514 01:07:46.680930 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.680953 kubelet[4258]: E0514 01:07:46.680936 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.681081 kubelet[4258]: E0514 01:07:46.681074 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.681104 kubelet[4258]: W0514 01:07:46.681081 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.681104 kubelet[4258]: E0514 01:07:46.681088 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.681380 kubelet[4258]: E0514 01:07:46.681372 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.681399 kubelet[4258]: W0514 01:07:46.681380 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.681399 kubelet[4258]: E0514 01:07:46.681388 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.681436 kubelet[4258]: I0514 01:07:46.681407 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7395286e-89a3-42ee-9c78-0a22650e7dbd-socket-dir\") pod \"csi-node-driver-cjbvf\" (UID: \"7395286e-89a3-42ee-9c78-0a22650e7dbd\") " pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:46.681621 kubelet[4258]: E0514 01:07:46.681610 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.681621 kubelet[4258]: W0514 01:07:46.681619 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.681663 kubelet[4258]: E0514 01:07:46.681631 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.681663 kubelet[4258]: I0514 01:07:46.681651 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7395286e-89a3-42ee-9c78-0a22650e7dbd-kubelet-dir\") pod \"csi-node-driver-cjbvf\" (UID: \"7395286e-89a3-42ee-9c78-0a22650e7dbd\") " pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:46.681813 kubelet[4258]: E0514 01:07:46.681803 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.681813 kubelet[4258]: W0514 01:07:46.681811 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.681855 kubelet[4258]: E0514 01:07:46.681823 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.681855 kubelet[4258]: I0514 01:07:46.681837 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wsnm\" (UniqueName: \"kubernetes.io/projected/7395286e-89a3-42ee-9c78-0a22650e7dbd-kube-api-access-6wsnm\") pod \"csi-node-driver-cjbvf\" (UID: \"7395286e-89a3-42ee-9c78-0a22650e7dbd\") " pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:46.682063 kubelet[4258]: E0514 01:07:46.682052 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.682063 kubelet[4258]: W0514 01:07:46.682061 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.682099 kubelet[4258]: E0514 01:07:46.682072 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.682099 kubelet[4258]: I0514 01:07:46.682085 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7395286e-89a3-42ee-9c78-0a22650e7dbd-varrun\") pod \"csi-node-driver-cjbvf\" (UID: \"7395286e-89a3-42ee-9c78-0a22650e7dbd\") " pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:46.682272 kubelet[4258]: E0514 01:07:46.682261 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.682272 kubelet[4258]: W0514 01:07:46.682270 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.682311 kubelet[4258]: E0514 01:07:46.682281 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.682311 kubelet[4258]: I0514 01:07:46.682295 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7395286e-89a3-42ee-9c78-0a22650e7dbd-registration-dir\") pod \"csi-node-driver-cjbvf\" (UID: \"7395286e-89a3-42ee-9c78-0a22650e7dbd\") " pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:46.682549 kubelet[4258]: E0514 01:07:46.682537 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.682549 kubelet[4258]: W0514 01:07:46.682547 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.682589 kubelet[4258]: E0514 01:07:46.682558 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.682754 kubelet[4258]: E0514 01:07:46.682747 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.682777 kubelet[4258]: W0514 01:07:46.682754 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.682797 kubelet[4258]: E0514 01:07:46.682776 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.682965 kubelet[4258]: E0514 01:07:46.682959 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.682987 kubelet[4258]: W0514 01:07:46.682966 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.682987 kubelet[4258]: E0514 01:07:46.682984 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.683128 kubelet[4258]: E0514 01:07:46.683119 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.683128 kubelet[4258]: W0514 01:07:46.683126 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.683169 kubelet[4258]: E0514 01:07:46.683140 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.683286 kubelet[4258]: E0514 01:07:46.683279 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.683305 kubelet[4258]: W0514 01:07:46.683286 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.683305 kubelet[4258]: E0514 01:07:46.683301 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.683507 kubelet[4258]: E0514 01:07:46.683499 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.683529 kubelet[4258]: W0514 01:07:46.683507 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.683529 kubelet[4258]: E0514 01:07:46.683520 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.683683 kubelet[4258]: E0514 01:07:46.683674 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.683683 kubelet[4258]: W0514 01:07:46.683681 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.683727 kubelet[4258]: E0514 01:07:46.683689 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.683909 kubelet[4258]: E0514 01:07:46.683900 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.683932 kubelet[4258]: W0514 01:07:46.683908 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.683932 kubelet[4258]: E0514 01:07:46.683917 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.684097 kubelet[4258]: E0514 01:07:46.684087 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.684097 kubelet[4258]: W0514 01:07:46.684094 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.684142 kubelet[4258]: E0514 01:07:46.684102 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.684323 kubelet[4258]: E0514 01:07:46.684313 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.684323 kubelet[4258]: W0514 01:07:46.684320 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.684362 kubelet[4258]: E0514 01:07:46.684328 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.713754 containerd[2738]: time="2025-05-14T01:07:46.713724767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhnsm,Uid:57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb,Namespace:calico-system,Attempt:0,}" May 14 01:07:46.722094 containerd[2738]: time="2025-05-14T01:07:46.722068720Z" level=info msg="connecting to shim 19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420" address="unix:///run/containerd/s/070490945271c07f30bc304cfc2fcccd280d08cce6d653f03836a93ac622a351" namespace=k8s.io protocol=ttrpc version=3 May 14 01:07:46.749155 systemd[1]: Started cri-containerd-19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420.scope - libcontainer container 19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420. May 14 01:07:46.765853 containerd[2738]: time="2025-05-14T01:07:46.765823896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fhnsm,Uid:57b8a373-d4fa-4b33-8ca4-b4d6b8931bbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\"" May 14 01:07:46.783163 kubelet[4258]: E0514 01:07:46.783143 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.783163 kubelet[4258]: W0514 01:07:46.783162 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.783253 kubelet[4258]: E0514 01:07:46.783180 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.783469 kubelet[4258]: E0514 01:07:46.783458 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.783469 kubelet[4258]: W0514 01:07:46.783466 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.783510 kubelet[4258]: E0514 01:07:46.783478 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.783759 kubelet[4258]: E0514 01:07:46.783749 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.783759 kubelet[4258]: W0514 01:07:46.783757 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.783800 kubelet[4258]: E0514 01:07:46.783768 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.783987 kubelet[4258]: E0514 01:07:46.783967 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.784012 kubelet[4258]: W0514 01:07:46.783974 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.784012 kubelet[4258]: E0514 01:07:46.784006 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.784210 kubelet[4258]: E0514 01:07:46.784202 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.784233 kubelet[4258]: W0514 01:07:46.784210 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.784233 kubelet[4258]: E0514 01:07:46.784220 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.784431 kubelet[4258]: E0514 01:07:46.784422 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.784453 kubelet[4258]: W0514 01:07:46.784431 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.784472 kubelet[4258]: E0514 01:07:46.784453 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.784608 kubelet[4258]: E0514 01:07:46.784600 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.784630 kubelet[4258]: W0514 01:07:46.784607 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.784630 kubelet[4258]: E0514 01:07:46.784623 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.784826 kubelet[4258]: E0514 01:07:46.784819 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.784826 kubelet[4258]: W0514 01:07:46.784826 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.784867 kubelet[4258]: E0514 01:07:46.784840 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.785029 kubelet[4258]: E0514 01:07:46.785021 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.785029 kubelet[4258]: W0514 01:07:46.785028 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.785070 kubelet[4258]: E0514 01:07:46.785041 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.785227 kubelet[4258]: E0514 01:07:46.785218 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.785227 kubelet[4258]: W0514 01:07:46.785226 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.785273 kubelet[4258]: E0514 01:07:46.785237 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.785414 kubelet[4258]: E0514 01:07:46.785406 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.785437 kubelet[4258]: W0514 01:07:46.785414 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.785459 kubelet[4258]: E0514 01:07:46.785436 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.785631 kubelet[4258]: E0514 01:07:46.785624 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.785653 kubelet[4258]: W0514 01:07:46.785631 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.785653 kubelet[4258]: E0514 01:07:46.785646 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.785797 kubelet[4258]: E0514 01:07:46.785789 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.785851 kubelet[4258]: W0514 01:07:46.785797 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.785851 kubelet[4258]: E0514 01:07:46.785812 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.786000 kubelet[4258]: E0514 01:07:46.785992 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.786026 kubelet[4258]: W0514 01:07:46.785999 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.786026 kubelet[4258]: E0514 01:07:46.786018 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.786147 kubelet[4258]: E0514 01:07:46.786135 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.786147 kubelet[4258]: W0514 01:07:46.786142 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.786194 kubelet[4258]: E0514 01:07:46.786157 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.786313 kubelet[4258]: E0514 01:07:46.786303 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.786313 kubelet[4258]: W0514 01:07:46.786311 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.786355 kubelet[4258]: E0514 01:07:46.786322 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.786484 kubelet[4258]: E0514 01:07:46.786473 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.786484 kubelet[4258]: W0514 01:07:46.786482 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.786527 kubelet[4258]: E0514 01:07:46.786493 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.786710 kubelet[4258]: E0514 01:07:46.786699 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.786710 kubelet[4258]: W0514 01:07:46.786707 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.786748 kubelet[4258]: E0514 01:07:46.786716 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.787009 kubelet[4258]: E0514 01:07:46.786995 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.787035 kubelet[4258]: W0514 01:07:46.787007 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.787035 kubelet[4258]: E0514 01:07:46.787020 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.787252 kubelet[4258]: E0514 01:07:46.787239 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.787252 kubelet[4258]: W0514 01:07:46.787250 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.787291 kubelet[4258]: E0514 01:07:46.787263 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.787484 kubelet[4258]: E0514 01:07:46.787473 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.787484 kubelet[4258]: W0514 01:07:46.787481 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.787527 kubelet[4258]: E0514 01:07:46.787497 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:46.787679 kubelet[4258]: E0514 01:07:46.787669 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.787679 kubelet[4258]: W0514 01:07:46.787677 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.787719 kubelet[4258]: E0514 01:07:46.787691 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.787908 kubelet[4258]: E0514 01:07:46.787900 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.787931 kubelet[4258]: W0514 01:07:46.787908 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.787931 kubelet[4258]: E0514 01:07:46.787919 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.788091 kubelet[4258]: E0514 01:07:46.788081 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.788091 kubelet[4258]: W0514 01:07:46.788089 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.788130 kubelet[4258]: E0514 01:07:46.788100 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.788327 kubelet[4258]: E0514 01:07:46.788316 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.788327 kubelet[4258]: W0514 01:07:46.788324 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.788371 kubelet[4258]: E0514 01:07:46.788331 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:07:46.796997 kubelet[4258]: E0514 01:07:46.796976 4258 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:07:46.797023 kubelet[4258]: W0514 01:07:46.796995 4258 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:07:46.797023 kubelet[4258]: E0514 01:07:46.797011 4258 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:07:47.370118 containerd[2738]: time="2025-05-14T01:07:47.370074878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.370272 containerd[2738]: time="2025-05-14T01:07:47.370125006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 01:07:47.370775 containerd[2738]: time="2025-05-14T01:07:47.370757743Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.372242 containerd[2738]: time="2025-05-14T01:07:47.372221688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.372834 containerd[2738]: time="2025-05-14T01:07:47.372812338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 792.990964ms" May 14 01:07:47.372856 containerd[2738]: time="2025-05-14T01:07:47.372842143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 01:07:47.373534 containerd[2738]: time="2025-05-14T01:07:47.373515726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 01:07:47.378131 containerd[2738]: time="2025-05-14T01:07:47.378105752Z" level=info msg="CreateContainer within sandbox \"a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 01:07:47.381661 containerd[2738]: time="2025-05-14T01:07:47.381630693Z" level=info msg="Container 481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:47.385036 containerd[2738]: time="2025-05-14T01:07:47.385002571Z" level=info msg="CreateContainer within sandbox \"a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac\"" May 14 01:07:47.385540 containerd[2738]: time="2025-05-14T01:07:47.385517090Z" level=info msg="StartContainer for \"481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac\"" May 14 01:07:47.386504 containerd[2738]: time="2025-05-14T01:07:47.386480998Z" level=info msg="connecting to shim 481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac" address="unix:///run/containerd/s/0dde75ba4ec0d4a0e9fd8e66f8b3bd6239483861529871d685efc2d44ad36c32" protocol=ttrpc version=3 May 14 01:07:47.409091 systemd[1]: Started cri-containerd-481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac.scope - libcontainer container 481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac. 
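Note on the repeated driver-call.go errors above: the kubelet periodically probes its FlexVolume plugin directory, and because the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, each "init" call captures empty output, which then fails to unmarshal. A minimal Go sketch (not kubelet code; the driverStatus struct is only an illustrative stand-in for the JSON a FlexVolume driver prints) shows why empty output yields exactly "unexpected end of JSON input":

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the status object a FlexVolume
// driver is expected to emit; it is not the kubelet's own type.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	var st driverStatus
	// Empty output, as when the nodeagent~uds/uds executable is missing.
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}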
May 14 01:07:47.438126 containerd[2738]: time="2025-05-14T01:07:47.438095166Z" level=info msg="StartContainer for \"481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac\" returns successfully" May 14 01:07:47.840601 containerd[2738]: time="2025-05-14T01:07:47.840555942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.840850 containerd[2738]: time="2025-05-14T01:07:47.840588667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 01:07:47.841179 containerd[2738]: time="2025-05-14T01:07:47.841160555Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.842795 containerd[2738]: time="2025-05-14T01:07:47.842771082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:47.843397 containerd[2738]: time="2025-05-14T01:07:47.843369494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 469.824843ms" May 14 01:07:47.843419 containerd[2738]: time="2025-05-14T01:07:47.843405499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 01:07:47.844869 containerd[2738]: time="2025-05-14T01:07:47.844850441Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 01:07:47.849048 containerd[2738]: time="2025-05-14T01:07:47.849018121Z" level=info msg="Container e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:47.853039 containerd[2738]: time="2025-05-14T01:07:47.853008174Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\"" May 14 01:07:47.853376 containerd[2738]: time="2025-05-14T01:07:47.853355068Z" level=info msg="StartContainer for \"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\"" May 14 01:07:47.854642 containerd[2738]: time="2025-05-14T01:07:47.854618902Z" level=info msg="connecting to shim e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04" address="unix:///run/containerd/s/070490945271c07f30bc304cfc2fcccd280d08cce6d653f03836a93ac622a351" protocol=ttrpc version=3 May 14 01:07:47.877156 systemd[1]: Started cri-containerd-e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04.scope - libcontainer container e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04. 
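Note: in a typical Calico deployment the pod2daemon-flexvol image pulled here backs the "flexvol-driver" init container, which copies the uds FlexVolume binary into the kubelet plugin directory, so the nodeagent~uds probe failures above should stop recurring once it has run. As a rough rate check, the reported 5,122,903 bytes read over about 469.8 ms works out to roughly 10.4 MiB/s for this pull.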
May 14 01:07:47.893464 kubelet[4258]: E0514 01:07:47.893438 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjbvf" podUID="7395286e-89a3-42ee-9c78-0a22650e7dbd" May 14 01:07:47.903983 containerd[2738]: time="2025-05-14T01:07:47.903954199Z" level=info msg="StartContainer for \"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\" returns successfully" May 14 01:07:47.915158 systemd[1]: cri-containerd-e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04.scope: Deactivated successfully. May 14 01:07:47.916925 containerd[2738]: time="2025-05-14T01:07:47.916899588Z" level=info msg="received exit event container_id:\"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\" id:\"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\" pid:5186 exited_at:{seconds:1747184867 nanos:916434596}" May 14 01:07:47.917013 containerd[2738]: time="2025-05-14T01:07:47.916972039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\" id:\"e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04\" pid:5186 exited_at:{seconds:1747184867 nanos:916434596}" May 14 01:07:47.941469 kubelet[4258]: I0514 01:07:47.941422 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dd847f4c6-vx2gv" podStartSLOduration=1.147044972 podStartE2EDuration="1.941405832s" podCreationTimestamp="2025-05-14 01:07:46 +0000 UTC" firstStartedPulling="2025-05-14 01:07:46.579069493 +0000 UTC m=+14.755771111" lastFinishedPulling="2025-05-14 01:07:47.373430353 +0000 UTC m=+15.550131971" observedRunningTime="2025-05-14 01:07:47.941100825 +0000 UTC m=+16.117802443" watchObservedRunningTime="2025-05-14 01:07:47.941405832 +0000 UTC m=+16.118107410" May 14 01:07:48.925640 kubelet[4258]: I0514 01:07:48.925613 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:07:48.926337 containerd[2738]: time="2025-05-14T01:07:48.926314365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 01:07:49.893075 kubelet[4258]: E0514 01:07:49.892990 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cjbvf" podUID="7395286e-89a3-42ee-9c78-0a22650e7dbd" May 14 01:07:50.503397 containerd[2738]: time="2025-05-14T01:07:50.503270966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:50.503397 containerd[2738]: time="2025-05-14T01:07:50.503282287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 01:07:50.503958 containerd[2738]: time="2025-05-14T01:07:50.503929212Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:50.505711 containerd[2738]: time="2025-05-14T01:07:50.505663519Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:50.506413 containerd[2738]: time="2025-05-14T01:07:50.506352169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.580001759s" May 14 01:07:50.506527 containerd[2738]: time="2025-05-14T01:07:50.506509630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 01:07:50.508102 containerd[2738]: time="2025-05-14T01:07:50.508047192Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 01:07:50.513717 containerd[2738]: time="2025-05-14T01:07:50.512624431Z" level=info msg="Container c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:50.517306 containerd[2738]: time="2025-05-14T01:07:50.517277001Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\"" May 14 01:07:50.518309 containerd[2738]: time="2025-05-14T01:07:50.517715738Z" level=info msg="StartContainer for \"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\"" May 14 01:07:50.519036 containerd[2738]: time="2025-05-14T01:07:50.519007347Z" level=info msg="connecting to shim c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea" address="unix:///run/containerd/s/070490945271c07f30bc304cfc2fcccd280d08cce6d653f03836a93ac622a351" protocol=ttrpc version=3 May 14 01:07:50.542091 systemd[1]: Started cri-containerd-c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea.scope - libcontainer container c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea. May 14 01:07:50.569332 containerd[2738]: time="2025-05-14T01:07:50.569265051Z" level=info msg="StartContainer for \"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\" returns successfully" May 14 01:07:50.940093 systemd[1]: cri-containerd-c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea.scope: Deactivated successfully. May 14 01:07:50.940390 systemd[1]: cri-containerd-c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea.scope: Consumed 841ms CPU time, 177.6M memory peak, 150.3M written to disk. 
May 14 01:07:50.940877 containerd[2738]: time="2025-05-14T01:07:50.940851849Z" level=info msg="received exit event container_id:\"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\" id:\"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\" pid:5254 exited_at:{seconds:1747184870 nanos:940716511}" May 14 01:07:50.940994 containerd[2738]: time="2025-05-14T01:07:50.940960503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\" id:\"c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea\" pid:5254 exited_at:{seconds:1747184870 nanos:940716511}" May 14 01:07:50.955864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea-rootfs.mount: Deactivated successfully. May 14 01:07:50.969063 kubelet[4258]: I0514 01:07:50.969035 4258 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 01:07:50.993747 systemd[1]: Created slice kubepods-besteffort-podc18681e9_1dd6_46cc_b221_8a4c7b6eda02.slice - libcontainer container kubepods-besteffort-podc18681e9_1dd6_46cc_b221_8a4c7b6eda02.slice. May 14 01:07:50.997307 systemd[1]: Created slice kubepods-besteffort-pod7fe1d95b_3705_4935_9628_5b06f6f92d39.slice - libcontainer container kubepods-besteffort-pod7fe1d95b_3705_4935_9628_5b06f6f92d39.slice. May 14 01:07:51.001025 systemd[1]: Created slice kubepods-besteffort-podd811dbb7_adf4_41f9_a429_34ab8c71d029.slice - libcontainer container kubepods-besteffort-podd811dbb7_adf4_41f9_a429_34ab8c71d029.slice. May 14 01:07:51.005151 systemd[1]: Created slice kubepods-burstable-poddf95e024_1eb7_4a72_86d3_b96aa159a727.slice - libcontainer container kubepods-burstable-poddf95e024_1eb7_4a72_86d3_b96aa159a727.slice. May 14 01:07:51.008886 systemd[1]: Created slice kubepods-burstable-podbf8008ce_027d_400e_9ddd_494d12e962b6.slice - libcontainer container kubepods-burstable-podbf8008ce_027d_400e_9ddd_494d12e962b6.slice. 
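Note on the kubepods slices created above: the unit names embed the pod's QoS class and its UID with dashes replaced by underscores. The snippet below only reproduces the naming pattern visible in this log for illustration; the kubelet derives the real names in its cgroup manager.

package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the pattern seen above: kubepods-<qos>-pod<uid>.slice,
// with the pod UID's dashes turned into underscores. Illustration only.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "c18681e9-1dd6-46cc-b221-8a4c7b6eda02"))
	// kubepods-besteffort-podc18681e9_1dd6_46cc_b221_8a4c7b6eda02.slice
	fmt.Println(sliceName("burstable", "bf8008ce-027d-400e-9ddd-494d12e962b6"))
	// kubepods-burstable-podbf8008ce_027d_400e_9ddd_494d12e962b6.slice
}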
May 14 01:07:51.112527 kubelet[4258]: I0514 01:07:51.112493 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7fe1d95b-3705-4935-9628-5b06f6f92d39-calico-apiserver-certs\") pod \"calico-apiserver-8f55f948b-nhgrj\" (UID: \"7fe1d95b-3705-4935-9628-5b06f6f92d39\") " pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" May 14 01:07:51.112527 kubelet[4258]: I0514 01:07:51.112527 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8008ce-027d-400e-9ddd-494d12e962b6-config-volume\") pod \"coredns-6f6b679f8f-vwczl\" (UID: \"bf8008ce-027d-400e-9ddd-494d12e962b6\") " pod="kube-system/coredns-6f6b679f8f-vwczl" May 14 01:07:51.112712 kubelet[4258]: I0514 01:07:51.112549 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df95e024-1eb7-4a72-86d3-b96aa159a727-config-volume\") pod \"coredns-6f6b679f8f-wr4wj\" (UID: \"df95e024-1eb7-4a72-86d3-b96aa159a727\") " pod="kube-system/coredns-6f6b679f8f-wr4wj" May 14 01:07:51.112712 kubelet[4258]: I0514 01:07:51.112565 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957v8\" (UniqueName: \"kubernetes.io/projected/df95e024-1eb7-4a72-86d3-b96aa159a727-kube-api-access-957v8\") pod \"coredns-6f6b679f8f-wr4wj\" (UID: \"df95e024-1eb7-4a72-86d3-b96aa159a727\") " pod="kube-system/coredns-6f6b679f8f-wr4wj" May 14 01:07:51.112712 kubelet[4258]: I0514 01:07:51.112582 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvpn\" (UniqueName: \"kubernetes.io/projected/c18681e9-1dd6-46cc-b221-8a4c7b6eda02-kube-api-access-fxvpn\") pod \"calico-kube-controllers-79df9bdd84-6gckh\" (UID: \"c18681e9-1dd6-46cc-b221-8a4c7b6eda02\") " pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" May 14 01:07:51.112712 kubelet[4258]: I0514 01:07:51.112626 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjfq\" (UniqueName: \"kubernetes.io/projected/bf8008ce-027d-400e-9ddd-494d12e962b6-kube-api-access-lkjfq\") pod \"coredns-6f6b679f8f-vwczl\" (UID: \"bf8008ce-027d-400e-9ddd-494d12e962b6\") " pod="kube-system/coredns-6f6b679f8f-vwczl" May 14 01:07:51.112712 kubelet[4258]: I0514 01:07:51.112684 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18681e9-1dd6-46cc-b221-8a4c7b6eda02-tigera-ca-bundle\") pod \"calico-kube-controllers-79df9bdd84-6gckh\" (UID: \"c18681e9-1dd6-46cc-b221-8a4c7b6eda02\") " pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" May 14 01:07:51.112823 kubelet[4258]: I0514 01:07:51.112732 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d811dbb7-adf4-41f9-a429-34ab8c71d029-calico-apiserver-certs\") pod \"calico-apiserver-8f55f948b-vvqdn\" (UID: \"d811dbb7-adf4-41f9-a429-34ab8c71d029\") " pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" May 14 01:07:51.112823 kubelet[4258]: I0514 01:07:51.112749 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c2knf\" (UniqueName: \"kubernetes.io/projected/d811dbb7-adf4-41f9-a429-34ab8c71d029-kube-api-access-c2knf\") pod \"calico-apiserver-8f55f948b-vvqdn\" (UID: \"d811dbb7-adf4-41f9-a429-34ab8c71d029\") " pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" May 14 01:07:51.112823 kubelet[4258]: I0514 01:07:51.112804 4258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwk5d\" (UniqueName: \"kubernetes.io/projected/7fe1d95b-3705-4935-9628-5b06f6f92d39-kube-api-access-hwk5d\") pod \"calico-apiserver-8f55f948b-nhgrj\" (UID: \"7fe1d95b-3705-4935-9628-5b06f6f92d39\") " pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" May 14 01:07:51.295666 containerd[2738]: time="2025-05-14T01:07:51.295617255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df9bdd84-6gckh,Uid:c18681e9-1dd6-46cc-b221-8a4c7b6eda02,Namespace:calico-system,Attempt:0,}" May 14 01:07:51.299410 containerd[2738]: time="2025-05-14T01:07:51.299390404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-nhgrj,Uid:7fe1d95b-3705-4935-9628-5b06f6f92d39,Namespace:calico-apiserver,Attempt:0,}" May 14 01:07:51.303280 containerd[2738]: time="2025-05-14T01:07:51.303258646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-vvqdn,Uid:d811dbb7-adf4-41f9-a429-34ab8c71d029,Namespace:calico-apiserver,Attempt:0,}" May 14 01:07:51.307764 containerd[2738]: time="2025-05-14T01:07:51.307738443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wr4wj,Uid:df95e024-1eb7-4a72-86d3-b96aa159a727,Namespace:kube-system,Attempt:0,}" May 14 01:07:51.311267 containerd[2738]: time="2025-05-14T01:07:51.311238598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwczl,Uid:bf8008ce-027d-400e-9ddd-494d12e962b6,Namespace:kube-system,Attempt:0,}" May 14 01:07:51.354242 containerd[2738]: time="2025-05-14T01:07:51.354106611Z" level=error msg="Failed to destroy network for sandbox \"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.354536 containerd[2738]: time="2025-05-14T01:07:51.354473536Z" level=error msg="Failed to destroy network for sandbox \"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.354664 containerd[2738]: time="2025-05-14T01:07:51.354635676Z" level=error msg="Failed to destroy network for sandbox \"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.354845 containerd[2738]: time="2025-05-14T01:07:51.354699564Z" level=error msg="Failed to destroy network for sandbox \"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.355074 containerd[2738]: 
time="2025-05-14T01:07:51.354768293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-vvqdn,Uid:d811dbb7-adf4-41f9-a429-34ab8c71d029,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356436 containerd[2738]: time="2025-05-14T01:07:51.356391695Z" level=error msg="Failed to destroy network for sandbox \"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356508 containerd[2738]: time="2025-05-14T01:07:51.356404856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wr4wj,Uid:df95e024-1eb7-4a72-86d3-b96aa159a727,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356607 kubelet[4258]: E0514 01:07:51.356570 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356653 kubelet[4258]: E0514 01:07:51.356642 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" May 14 01:07:51.356681 kubelet[4258]: E0514 01:07:51.356660 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" May 14 01:07:51.356705 kubelet[4258]: E0514 01:07:51.356666 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356733 containerd[2738]: time="2025-05-14T01:07:51.356600121Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-vwczl,Uid:bf8008ce-027d-400e-9ddd-494d12e962b6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356769 kubelet[4258]: E0514 01:07:51.356709 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f55f948b-vvqdn_calico-apiserver(d811dbb7-adf4-41f9-a429-34ab8c71d029)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8f55f948b-vvqdn_calico-apiserver(d811dbb7-adf4-41f9-a429-34ab8c71d029)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"826fa021aa384bb391a616141840d501bd06fc7e2046c62af1ac9785df797231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" podUID="d811dbb7-adf4-41f9-a429-34ab8c71d029" May 14 01:07:51.356769 kubelet[4258]: E0514 01:07:51.356720 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wr4wj" May 14 01:07:51.356769 kubelet[4258]: E0514 01:07:51.356740 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wr4wj" May 14 01:07:51.356851 kubelet[4258]: E0514 01:07:51.356772 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wr4wj_kube-system(df95e024-1eb7-4a72-86d3-b96aa159a727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wr4wj_kube-system(df95e024-1eb7-4a72-86d3-b96aa159a727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3f161ab2bcac0f9e7300ff3f29b21e61fe4b06c27ca1cc1e1f442e2c84f02ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wr4wj" podUID="df95e024-1eb7-4a72-86d3-b96aa159a727" May 14 01:07:51.356851 kubelet[4258]: E0514 01:07:51.356778 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356851 
kubelet[4258]: E0514 01:07:51.356817 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vwczl" May 14 01:07:51.356927 containerd[2738]: time="2025-05-14T01:07:51.356753100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df9bdd84-6gckh,Uid:c18681e9-1dd6-46cc-b221-8a4c7b6eda02,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.356961 kubelet[4258]: E0514 01:07:51.356833 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vwczl" May 14 01:07:51.356961 kubelet[4258]: E0514 01:07:51.356867 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vwczl_kube-system(bf8008ce-027d-400e-9ddd-494d12e962b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vwczl_kube-system(bf8008ce-027d-400e-9ddd-494d12e962b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a04ab4ce77e149b68172709c23645c0b1d289ba4919502068c8e6734a3a8904d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vwczl" podUID="bf8008ce-027d-400e-9ddd-494d12e962b6" May 14 01:07:51.356961 kubelet[4258]: E0514 01:07:51.356881 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.357102 kubelet[4258]: E0514 01:07:51.356910 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" May 14 01:07:51.357102 kubelet[4258]: E0514 01:07:51.356924 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" May 14 01:07:51.357102 kubelet[4258]: E0514 01:07:51.356948 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79df9bdd84-6gckh_calico-system(c18681e9-1dd6-46cc-b221-8a4c7b6eda02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79df9bdd84-6gckh_calico-system(c18681e9-1dd6-46cc-b221-8a4c7b6eda02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c7fa6ee66d5df763f4d5b7390cdf29064ae4dbe565d087babe75987baa020b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" podUID="c18681e9-1dd6-46cc-b221-8a4c7b6eda02" May 14 01:07:51.357228 containerd[2738]: time="2025-05-14T01:07:51.357190034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-nhgrj,Uid:7fe1d95b-3705-4935-9628-5b06f6f92d39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.357837 kubelet[4258]: E0514 01:07:51.357822 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.357862 kubelet[4258]: E0514 01:07:51.357847 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" May 14 01:07:51.357885 kubelet[4258]: E0514 01:07:51.357862 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" May 14 01:07:51.357927 kubelet[4258]: E0514 01:07:51.357909 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8f55f948b-nhgrj_calico-apiserver(7fe1d95b-3705-4935-9628-5b06f6f92d39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-8f55f948b-nhgrj_calico-apiserver(7fe1d95b-3705-4935-9628-5b06f6f92d39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"785fd935b674ee88329f7bd965d0eba757bb913f68c9c057770845a054827ca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" podUID="7fe1d95b-3705-4935-9628-5b06f6f92d39" May 14 01:07:51.897930 systemd[1]: Created slice kubepods-besteffort-pod7395286e_89a3_42ee_9c78_0a22650e7dbd.slice - libcontainer container kubepods-besteffort-pod7395286e_89a3_42ee_9c78_0a22650e7dbd.slice. May 14 01:07:51.899514 containerd[2738]: time="2025-05-14T01:07:51.899487650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjbvf,Uid:7395286e-89a3-42ee-9c78-0a22650e7dbd,Namespace:calico-system,Attempt:0,}" May 14 01:07:51.935675 containerd[2738]: time="2025-05-14T01:07:51.935628505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 01:07:51.956719 containerd[2738]: time="2025-05-14T01:07:51.956675643Z" level=error msg="Failed to destroy network for sandbox \"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.957123 containerd[2738]: time="2025-05-14T01:07:51.957095096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjbvf,Uid:7395286e-89a3-42ee-9c78-0a22650e7dbd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.957312 kubelet[4258]: E0514 01:07:51.957270 4258 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:07:51.957362 kubelet[4258]: E0514 01:07:51.957337 4258 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:51.957362 kubelet[4258]: E0514 01:07:51.957355 4258 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cjbvf" May 14 01:07:51.957414 kubelet[4258]: E0514 
01:07:51.957393 4258 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cjbvf_calico-system(7395286e-89a3-42ee-9c78-0a22650e7dbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cjbvf_calico-system(7395286e-89a3-42ee-9c78-0a22650e7dbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2d09222e117eb5b0fc515d4fa09c64242966dd3f90640f4d605938b252a7c66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cjbvf" podUID="7395286e-89a3-42ee-9c78-0a22650e7dbd" May 14 01:07:51.958330 systemd[1]: run-netns-cni\x2d01dae50c\x2d81a9\x2d9e18\x2dca83\x2d17dc70f11a0d.mount: Deactivated successfully. May 14 01:07:54.741175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778305017.mount: Deactivated successfully. May 14 01:07:54.762504 containerd[2738]: time="2025-05-14T01:07:54.762441392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 01:07:54.762504 containerd[2738]: time="2025-05-14T01:07:54.762457634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:54.763181 containerd[2738]: time="2025-05-14T01:07:54.763157028Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:54.764489 containerd[2738]: time="2025-05-14T01:07:54.764467128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:07:54.765008 containerd[2738]: time="2025-05-14T01:07:54.764987984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.829321915s" May 14 01:07:54.765049 containerd[2738]: time="2025-05-14T01:07:54.765014027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 01:07:54.770545 containerd[2738]: time="2025-05-14T01:07:54.770519416Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 01:07:54.775570 containerd[2738]: time="2025-05-14T01:07:54.775544633Z" level=info msg="Container 6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696: CDI devices from CRI Config.CDIDevices: []" May 14 01:07:54.780804 containerd[2738]: time="2025-05-14T01:07:54.780771072Z" level=info msg="CreateContainer within sandbox \"19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\"" May 14 01:07:54.781112 containerd[2738]: time="2025-05-14T01:07:54.781093546Z" level=info 
msg="StartContainer for \"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\"" May 14 01:07:54.782444 containerd[2738]: time="2025-05-14T01:07:54.782422889Z" level=info msg="connecting to shim 6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696" address="unix:///run/containerd/s/070490945271c07f30bc304cfc2fcccd280d08cce6d653f03836a93ac622a351" protocol=ttrpc version=3 May 14 01:07:54.813145 systemd[1]: Started cri-containerd-6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696.scope - libcontainer container 6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696. May 14 01:07:54.842687 containerd[2738]: time="2025-05-14T01:07:54.842658490Z" level=info msg="StartContainer for \"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" returns successfully" May 14 01:07:54.949464 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 01:07:54.949541 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 01:07:54.953726 kubelet[4258]: I0514 01:07:54.953680 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fhnsm" podStartSLOduration=0.954756064 podStartE2EDuration="8.95366456s" podCreationTimestamp="2025-05-14 01:07:46 +0000 UTC" firstStartedPulling="2025-05-14 01:07:46.76665439 +0000 UTC m=+14.943356008" lastFinishedPulling="2025-05-14 01:07:54.765562886 +0000 UTC m=+22.942264504" observedRunningTime="2025-05-14 01:07:54.953330844 +0000 UTC m=+23.130032542" watchObservedRunningTime="2025-05-14 01:07:54.95366456 +0000 UTC m=+23.130366178" May 14 01:07:55.943862 kubelet[4258]: I0514 01:07:55.943826 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:01.894089 containerd[2738]: time="2025-05-14T01:08:01.894030122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwczl,Uid:bf8008ce-027d-400e-9ddd-494d12e962b6,Namespace:kube-system,Attempt:0,}" May 14 01:08:02.013031 systemd-networkd[2631]: cali47a35c4dde1: Link UP May 14 01:08:02.013438 systemd-networkd[2631]: cali47a35c4dde1: Gained carrier May 14 01:08:02.020137 containerd[2738]: 2025-05-14 01:08:01.918 [INFO][6108] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:08:02.020137 containerd[2738]: 2025-05-14 01:08:01.934 [INFO][6108] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0 coredns-6f6b679f8f- kube-system bf8008ce-027d-400e-9ddd-494d12e962b6 689 0 2025-05-14 01:07:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a coredns-6f6b679f8f-vwczl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47a35c4dde1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-" May 14 01:08:02.020137 containerd[2738]: 2025-05-14 01:08:01.935 [INFO][6108] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" 
WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020137 containerd[2738]: 2025-05-14 01:08:01.973 [INFO][6138] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" HandleID="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.987 [INFO][6138] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" HandleID="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000314df0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-0b8132852a", "pod":"coredns-6f6b679f8f-vwczl", "timestamp":"2025-05-14 01:08:01.973972816 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.987 [INFO][6138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.988 [INFO][6138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.988 [INFO][6138] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.989 [INFO][6138] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.992 [INFO][6138] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.995 [INFO][6138] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.997 [INFO][6138] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020332 containerd[2738]: 2025-05-14 01:08:01.998 [INFO][6138] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:01.998 [INFO][6138] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:01.999 [INFO][6138] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0 May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:02.002 [INFO][6138] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020509 containerd[2738]: 2025-05-14 
01:08:02.006 [INFO][6138] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.1/26] block=192.168.120.0/26 handle="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:02.006 [INFO][6138] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.1/26] handle="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:02.006 [INFO][6138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 01:08:02.020509 containerd[2738]: 2025-05-14 01:08:02.006 [INFO][6138] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.1/26] IPv6=[] ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" HandleID="k8s-pod-network.63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020637 containerd[2738]: 2025-05-14 01:08:02.008 [INFO][6108] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8008ce-027d-400e-9ddd-494d12e962b6", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"coredns-6f6b679f8f-vwczl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47a35c4dde1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:02.020637 containerd[2738]: 2025-05-14 01:08:02.008 [INFO][6108] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.1/32] ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020637 containerd[2738]: 
2025-05-14 01:08:02.008 [INFO][6108] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47a35c4dde1 ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020637 containerd[2738]: 2025-05-14 01:08:02.013 [INFO][6108] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.020637 containerd[2738]: 2025-05-14 01:08:02.013 [INFO][6108] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf8008ce-027d-400e-9ddd-494d12e962b6", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0", Pod:"coredns-6f6b679f8f-vwczl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47a35c4dde1", MAC:"da:64:c3:7f:75:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:02.020637 containerd[2738]: 2025-05-14 01:08:02.018 [INFO][6108] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" Namespace="kube-system" Pod="coredns-6f6b679f8f-vwczl" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--vwczl-eth0" May 14 01:08:02.031273 containerd[2738]: time="2025-05-14T01:08:02.031241100Z" level=info msg="connecting to shim 63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0" address="unix:///run/containerd/s/7ddecc13cb8fe69e547334ac448fa3c7a01a8675e3ca05a864e762a1104029b4" 
namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:02.059150 systemd[1]: Started cri-containerd-63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0.scope - libcontainer container 63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0. May 14 01:08:02.084580 containerd[2738]: time="2025-05-14T01:08:02.084552244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vwczl,Uid:bf8008ce-027d-400e-9ddd-494d12e962b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0\"" May 14 01:08:02.086321 containerd[2738]: time="2025-05-14T01:08:02.086298254Z" level=info msg="CreateContainer within sandbox \"63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 01:08:02.090586 containerd[2738]: time="2025-05-14T01:08:02.090554928Z" level=info msg="Container 4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:02.093332 containerd[2738]: time="2025-05-14T01:08:02.093301772Z" level=info msg="CreateContainer within sandbox \"63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79\"" May 14 01:08:02.093623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417034129.mount: Deactivated successfully. May 14 01:08:02.094206 containerd[2738]: time="2025-05-14T01:08:02.093646117Z" level=info msg="StartContainer for \"4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79\"" May 14 01:08:02.094410 containerd[2738]: time="2025-05-14T01:08:02.094388932Z" level=info msg="connecting to shim 4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79" address="unix:///run/containerd/s/7ddecc13cb8fe69e547334ac448fa3c7a01a8675e3ca05a864e762a1104029b4" protocol=ttrpc version=3 May 14 01:08:02.114157 systemd[1]: Started cri-containerd-4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79.scope - libcontainer container 4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79. 
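The entries above trace one complete Calico IPAM cycle for coredns-6f6b679f8f-vwczl: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.120.0/26 block, claims 192.168.120.1, writes the block back to the datastore, and releases the lock before containerd connects to the shim and systemd starts the container scope. The sketch below only illustrates the assignment step and is not Calico's actual allocator (which tracks allocations in a bitmap inside the block object); `nextFreeIP` and `incr` are hypothetical helpers, and reserving the network address is an assumption made for the example.

```go
package main

import (
	"fmt"
	"net"
)

// nextFreeIP is a simplified sketch of the "Attempting to assign 1 addresses
// from block" step: walk the /26 block this node holds an affinity for and
// return the first address not already claimed. Calico's real allocator
// works on a bitmap inside the block object and runs under the host-wide
// IPAM lock seen in the log; this is illustration only.
func nextFreeIP(block *net.IPNet, allocated map[string]bool) (net.IP, error) {
	ip := block.IP.Mask(block.Mask)
	for ; block.Contains(ip); ip = incr(ip) {
		if !allocated[ip.String()] {
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", block)
}

// incr returns the next IPv4 address.
func incr(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.120.0/26")
	allocated := map[string]bool{"192.168.120.0": true} // assume the network address is reserved
	ip, _ := nextFreeIP(block, allocated)
	fmt.Println(ip) // 192.168.120.1, the address handed to coredns-6f6b679f8f-vwczl
}
```

The later pods in this capture receive .2 through .5 from the same block in the order they are set up, which is consistent with a first-free scan of this kind.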
May 14 01:08:02.134690 containerd[2738]: time="2025-05-14T01:08:02.134662071Z" level=info msg="StartContainer for \"4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79\" returns successfully" May 14 01:08:02.895610 containerd[2738]: time="2025-05-14T01:08:02.895571643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wr4wj,Uid:df95e024-1eb7-4a72-86d3-b96aa159a727,Namespace:kube-system,Attempt:0,}" May 14 01:08:02.896221 containerd[2738]: time="2025-05-14T01:08:02.896194810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-nhgrj,Uid:7fe1d95b-3705-4935-9628-5b06f6f92d39,Namespace:calico-apiserver,Attempt:0,}" May 14 01:08:02.964119 kubelet[4258]: I0514 01:08:02.964064 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vwczl" podStartSLOduration=22.964049989 podStartE2EDuration="22.964049989s" podCreationTimestamp="2025-05-14 01:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:08:02.963550793 +0000 UTC m=+31.140252411" watchObservedRunningTime="2025-05-14 01:08:02.964049989 +0000 UTC m=+31.140751607" May 14 01:08:03.075233 systemd-networkd[2631]: cali90687b17f1e: Link UP May 14 01:08:03.075358 systemd-networkd[2631]: cali90687b17f1e: Gained carrier May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.914 [INFO][6320] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.929 [INFO][6320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0 coredns-6f6b679f8f- kube-system df95e024-1eb7-4a72-86d3-b96aa159a727 687 0 2025-05-14 01:07:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a coredns-6f6b679f8f-wr4wj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90687b17f1e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.929 [INFO][6320] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.951 [INFO][6373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" HandleID="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.960 [INFO][6373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" HandleID="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" 
Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-0b8132852a", "pod":"coredns-6f6b679f8f-wr4wj", "timestamp":"2025-05-14 01:08:02.951111752 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.960 [INFO][6373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.960 [INFO][6373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.960 [INFO][6373] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:02.961 [INFO][6373] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.059 [INFO][6373] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.062 [INFO][6373] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.064 [INFO][6373] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.065 [INFO][6373] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.065 [INFO][6373] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.066 [INFO][6373] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00 May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.069 [INFO][6373] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6373] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.2/26] block=192.168.120.0/26 handle="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6373] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.2/26] handle="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:08:03.081573 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.2/26] IPv6=[] ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" HandleID="k8s-pod-network.dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Workload="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.074 [INFO][6320] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df95e024-1eb7-4a72-86d3-b96aa159a727", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"coredns-6f6b679f8f-wr4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90687b17f1e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.074 [INFO][6320] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.2/32] ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.074 [INFO][6320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90687b17f1e ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.075 [INFO][6320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" 
WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.075 [INFO][6320] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"df95e024-1eb7-4a72-86d3-b96aa159a727", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00", Pod:"coredns-6f6b679f8f-wr4wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90687b17f1e", MAC:"22:34:2d:8d:27:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:03.082023 containerd[2738]: 2025-05-14 01:08:03.080 [INFO][6320] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" Namespace="kube-system" Pod="coredns-6f6b679f8f-wr4wj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-coredns--6f6b679f8f--wr4wj-eth0" May 14 01:08:03.094469 containerd[2738]: time="2025-05-14T01:08:03.094436311Z" level=info msg="connecting to shim dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00" address="unix:///run/containerd/s/6743d7823803f60cdd563f5c40e270517a26ac449b1999396608c38067173f73" namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:03.123166 systemd[1]: Started cri-containerd-dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00.scope - libcontainer container dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00. 
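The "Populated endpoint" and "Added Mac, interface name, and active container ID to endpoint" entries print the full projectcalico.org/v3 WorkloadEndpoint that the CNI plugin writes to the datastore. The structs below are trimmed stand-ins mirroring only the fields visible in these log lines (the real types live in Calico's libcalico-go API package); the values are copied from the coredns-6f6b679f8f-wr4wj entry above, with the hex port values 0x35 and 0x23c1 written in decimal.

```go
package main

import "fmt"

// Trimmed stand-ins for the v3 WorkloadEndpoint fields that appear in the
// containerd log entries; not the real Calico API types.
type WorkloadEndpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

type WorkloadEndpointSpec struct {
	Orchestrator       string
	Node               string
	ContainerID        string
	Pod                string
	Endpoint           string
	ServiceAccountName string
	IPNetworks         []string
	Profiles           []string
	InterfaceName      string
	MAC                string
	Ports              []WorkloadEndpointPort
}

func main() {
	// Values copied from the "Added Mac, interface name, and active container ID
	// to endpoint" entry for coredns-6f6b679f8f-wr4wj above.
	ep := WorkloadEndpointSpec{
		Orchestrator:       "k8s",
		Node:               "ci-4284.0.0-n-0b8132852a",
		ContainerID:        "dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00",
		Pod:                "coredns-6f6b679f8f-wr4wj",
		Endpoint:           "eth0",
		ServiceAccountName: "coredns",
		IPNetworks:         []string{"192.168.120.2/32"},
		Profiles:           []string{"kns.kube-system", "ksa.kube-system.coredns"},
		InterfaceName:      "cali90687b17f1e",
		MAC:                "22:34:2d:8d:27:2c",
		Ports: []WorkloadEndpointPort{
			{Name: "dns", Protocol: "UDP", Port: 53},
			{Name: "dns-tcp", Protocol: "TCP", Port: 53},
			{Name: "metrics", Protocol: "TCP", Port: 9153},
		},
	}
	fmt.Printf("%s -> %v via %s (%s)\n", ep.Pod, ep.IPNetworks, ep.InterfaceName, ep.MAC)
}
```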
May 14 01:08:03.147987 containerd[2738]: time="2025-05-14T01:08:03.147920704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wr4wj,Uid:df95e024-1eb7-4a72-86d3-b96aa159a727,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00\"" May 14 01:08:03.149881 containerd[2738]: time="2025-05-14T01:08:03.149854122Z" level=info msg="CreateContainer within sandbox \"dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 01:08:03.153830 containerd[2738]: time="2025-05-14T01:08:03.153802802Z" level=info msg="Container ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:03.156470 containerd[2738]: time="2025-05-14T01:08:03.156445789Z" level=info msg="CreateContainer within sandbox \"dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5\"" May 14 01:08:03.156781 containerd[2738]: time="2025-05-14T01:08:03.156760011Z" level=info msg="StartContainer for \"ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5\"" May 14 01:08:03.157009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958078975.mount: Deactivated successfully. May 14 01:08:03.157493 containerd[2738]: time="2025-05-14T01:08:03.157473702Z" level=info msg="connecting to shim ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5" address="unix:///run/containerd/s/6743d7823803f60cdd563f5c40e270517a26ac449b1999396608c38067173f73" protocol=ttrpc version=3 May 14 01:08:03.184153 systemd[1]: Started cri-containerd-ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5.scope - libcontainer container ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5. 
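The kubelet "Observed pod startup duration" entries report podStartSLOduration as the gap between the pod's creation timestamp and the time the pod was first observed running; for these CoreDNS pods firstStartedPulling/lastFinishedPulling remain at the zero time, i.e. no image pull was recorded. A minimal check of the arithmetic for coredns-6f6b679f8f-vwczl, using the values printed in the log above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamps kubelet prints; the fractional seconds in
	// the observed time are accepted by time.Parse even though the layout
	// omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-05-14 01:07:40 +0000 UTC")
	running, _ := time.Parse(layout, "2025-05-14 01:08:02.964049989 +0000 UTC")
	fmt.Println(running.Sub(created)) // 22.964049989s, matching podStartSLOduration
}
```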
May 14 01:08:03.188175 systemd-networkd[2631]: calid902da80b39: Link UP May 14 01:08:03.188334 systemd-networkd[2631]: calid902da80b39: Gained carrier May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:02.914 [INFO][6321] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:02.929 [INFO][6321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0 calico-apiserver-8f55f948b- calico-apiserver 7fe1d95b-3705-4935-9628-5b06f6f92d39 686 0 2025-05-14 01:07:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f55f948b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a calico-apiserver-8f55f948b-nhgrj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid902da80b39 [] []}} ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:02.929 [INFO][6321] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:02.951 [INFO][6375] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" HandleID="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.059 [INFO][6375] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" HandleID="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400073eba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-0b8132852a", "pod":"calico-apiserver-8f55f948b-nhgrj", "timestamp":"2025-05-14 01:08:02.951400854 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.059 [INFO][6375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.072 [INFO][6375] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.074 [INFO][6375] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.159 [INFO][6375] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.162 [INFO][6375] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.164 [INFO][6375] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.167 [INFO][6375] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.167 [INFO][6375] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.168 [INFO][6375] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51 May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.171 [INFO][6375] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.185 [INFO][6375] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.3/26] block=192.168.120.0/26 handle="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.185 [INFO][6375] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.3/26] handle="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.185 [INFO][6375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:08:03.194889 containerd[2738]: 2025-05-14 01:08:03.185 [INFO][6375] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.3/26] IPv6=[] ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" HandleID="k8s-pod-network.d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.186 [INFO][6321] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0", GenerateName:"calico-apiserver-8f55f948b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe1d95b-3705-4935-9628-5b06f6f92d39", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f55f948b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"calico-apiserver-8f55f948b-nhgrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid902da80b39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.187 [INFO][6321] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.3/32] ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.187 [INFO][6321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid902da80b39 ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.188 [INFO][6321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.188 [INFO][6321] cni-plugin/k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0", GenerateName:"calico-apiserver-8f55f948b-", Namespace:"calico-apiserver", SelfLink:"", UID:"7fe1d95b-3705-4935-9628-5b06f6f92d39", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f55f948b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51", Pod:"calico-apiserver-8f55f948b-nhgrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid902da80b39", MAC:"5a:2e:45:c6:a7:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:03.195499 containerd[2738]: 2025-05-14 01:08:03.193 [INFO][6321] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-nhgrj" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--nhgrj-eth0" May 14 01:08:03.203920 containerd[2738]: time="2025-05-14T01:08:03.203894235Z" level=info msg="StartContainer for \"ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5\" returns successfully" May 14 01:08:03.209152 containerd[2738]: time="2025-05-14T01:08:03.209122326Z" level=info msg="connecting to shim d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51" address="unix:///run/containerd/s/7b1fac485f4e8f83a9b0f1146d081842e4da43a104866cd696bb1e9ee8f7f439" namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:03.241157 systemd[1]: Started cri-containerd-d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51.scope - libcontainer container d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51. 
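By this point the same IPAM-and-dataplane sequence has run three times (192.168.120.1 through .3). When reading a capture like this one it can help to pull the assignments out programmatically; the helper below scans stdin for the "Calico CNI IPAM assigned addresses" entries shown above and prints each container ID with the address it received. It is only a convenience sketch keyed to the log format in this file; the regular expression and the 12-character ID truncation are arbitrary choices for the example.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches entries like:
//   ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses
//   IPv4=[192.168.120.1/26] IPv6=[] ContainerID="63679e22..."
var assigned = regexp.MustCompile(
	`Calico CNI IPAM assigned addresses IPv4=\[([^\]]+)\].*?ContainerID="([0-9a-f]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := assigned.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s -> %s\n", m[2][:12], m[1])
		}
	}
}
```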
May 14 01:08:03.266879 containerd[2738]: time="2025-05-14T01:08:03.266850420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-nhgrj,Uid:7fe1d95b-3705-4935-9628-5b06f6f92d39,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51\"" May 14 01:08:03.267964 containerd[2738]: time="2025-05-14T01:08:03.267944698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 01:08:03.331098 systemd-networkd[2631]: cali47a35c4dde1: Gained IPv6LL May 14 01:08:03.965421 kubelet[4258]: I0514 01:08:03.965369 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wr4wj" podStartSLOduration=23.965353967 podStartE2EDuration="23.965353967s" podCreationTimestamp="2025-05-14 01:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:08:03.964938817 +0000 UTC m=+32.141640435" watchObservedRunningTime="2025-05-14 01:08:03.965353967 +0000 UTC m=+32.142055585" May 14 01:08:04.084539 containerd[2738]: time="2025-05-14T01:08:04.084473858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 01:08:04.084539 containerd[2738]: time="2025-05-14T01:08:04.084478938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:04.085210 containerd[2738]: time="2025-05-14T01:08:04.085185266Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:04.086721 containerd[2738]: time="2025-05-14T01:08:04.086702849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:04.087357 containerd[2738]: time="2025-05-14T01:08:04.087338613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 819.366833ms" May 14 01:08:04.087383 containerd[2738]: time="2025-05-14T01:08:04.087364174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 01:08:04.088724 containerd[2738]: time="2025-05-14T01:08:04.088703266Z" level=info msg="CreateContainer within sandbox \"d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 01:08:04.091813 containerd[2738]: time="2025-05-14T01:08:04.091789796Z" level=info msg="Container f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:04.095177 containerd[2738]: time="2025-05-14T01:08:04.095155905Z" level=info msg="CreateContainer within sandbox \"d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb\"" May 14 01:08:04.095504 containerd[2738]: time="2025-05-14T01:08:04.095485847Z" level=info msg="StartContainer for \"f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb\"" May 14 01:08:04.096406 containerd[2738]: time="2025-05-14T01:08:04.096384068Z" level=info msg="connecting to shim f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb" address="unix:///run/containerd/s/7b1fac485f4e8f83a9b0f1146d081842e4da43a104866cd696bb1e9ee8f7f439" protocol=ttrpc version=3 May 14 01:08:04.121143 systemd[1]: Started cri-containerd-f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb.scope - libcontainer container f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb. May 14 01:08:04.148697 containerd[2738]: time="2025-05-14T01:08:04.148671828Z" level=info msg="StartContainer for \"f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb\" returns successfully" May 14 01:08:04.673880 kubelet[4258]: I0514 01:08:04.673836 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:04.675101 systemd-networkd[2631]: cali90687b17f1e: Gained IPv6LL May 14 01:08:04.969473 kubelet[4258]: I0514 01:08:04.969280 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f55f948b-nhgrj" podStartSLOduration=19.149198889 podStartE2EDuration="19.96926429s" podCreationTimestamp="2025-05-14 01:07:45 +0000 UTC" firstStartedPulling="2025-05-14 01:08:03.267757285 +0000 UTC m=+31.444458903" lastFinishedPulling="2025-05-14 01:08:04.087822686 +0000 UTC m=+32.264524304" observedRunningTime="2025-05-14 01:08:04.969149042 +0000 UTC m=+33.145850660" watchObservedRunningTime="2025-05-14 01:08:04.96926429 +0000 UTC m=+33.145965908" May 14 01:08:04.995046 systemd-networkd[2631]: calid902da80b39: Gained IPv6LL May 14 01:08:05.409998 kernel: bpftool[6772]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 01:08:05.564396 systemd-networkd[2631]: vxlan.calico: Link UP May 14 01:08:05.564401 systemd-networkd[2631]: vxlan.calico: Gained carrier May 14 01:08:05.894337 containerd[2738]: time="2025-05-14T01:08:05.893995689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df9bdd84-6gckh,Uid:c18681e9-1dd6-46cc-b221-8a4c7b6eda02,Namespace:calico-system,Attempt:0,}" May 14 01:08:05.894337 containerd[2738]: time="2025-05-14T01:08:05.894138018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjbvf,Uid:7395286e-89a3-42ee-9c78-0a22650e7dbd,Namespace:calico-system,Attempt:0,}" May 14 01:08:05.894337 containerd[2738]: time="2025-05-14T01:08:05.894138138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-vvqdn,Uid:d811dbb7-adf4-41f9-a429-34ab8c71d029,Namespace:calico-apiserver,Attempt:0,}" May 14 01:08:05.961668 kubelet[4258]: I0514 01:08:05.961638 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:05.977468 systemd-networkd[2631]: cali933d77b879e: Link UP May 14 01:08:05.977633 systemd-networkd[2631]: cali933d77b879e: Gained carrier May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.926 [INFO][7079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0 calico-kube-controllers-79df9bdd84- calico-system c18681e9-1dd6-46cc-b221-8a4c7b6eda02 683 0 
2025-05-14 01:07:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79df9bdd84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a calico-kube-controllers-79df9bdd84-6gckh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali933d77b879e [] []}} ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.926 [INFO][7079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.947 [INFO][7161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" HandleID="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.957 [INFO][7161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" HandleID="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-0b8132852a", "pod":"calico-kube-controllers-79df9bdd84-6gckh", "timestamp":"2025-05-14 01:08:05.947917375 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.957 [INFO][7161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.957 [INFO][7161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.957 [INFO][7161] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.959 [INFO][7161] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.961 [INFO][7161] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.965 [INFO][7161] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.966 [INFO][7161] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.968 [INFO][7161] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.968 [INFO][7161] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.969 [INFO][7161] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.971 [INFO][7161] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7161] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.4/26] block=192.168.120.0/26 handle="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7161] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.4/26] handle="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:08:05.984268 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.4/26] IPv6=[] ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" HandleID="k8s-pod-network.e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984712 containerd[2738]: 2025-05-14 01:08:05.976 [INFO][7079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0", GenerateName:"calico-kube-controllers-79df9bdd84-", Namespace:"calico-system", SelfLink:"", UID:"c18681e9-1dd6-46cc-b221-8a4c7b6eda02", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df9bdd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"calico-kube-controllers-79df9bdd84-6gckh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali933d77b879e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:05.984712 containerd[2738]: 2025-05-14 01:08:05.976 [INFO][7079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.4/32] ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984712 containerd[2738]: 2025-05-14 01:08:05.976 [INFO][7079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali933d77b879e ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984712 containerd[2738]: 2025-05-14 01:08:05.977 [INFO][7079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.984712 
containerd[2738]: 2025-05-14 01:08:05.977 [INFO][7079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0", GenerateName:"calico-kube-controllers-79df9bdd84-", Namespace:"calico-system", SelfLink:"", UID:"c18681e9-1dd6-46cc-b221-8a4c7b6eda02", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79df9bdd84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed", Pod:"calico-kube-controllers-79df9bdd84-6gckh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali933d77b879e", MAC:"76:12:89:51:b3:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:05.984712 containerd[2738]: 2025-05-14 01:08:05.983 [INFO][7079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" Namespace="calico-system" Pod="calico-kube-controllers-79df9bdd84-6gckh" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--kube--controllers--79df9bdd84--6gckh-eth0" May 14 01:08:05.995581 containerd[2738]: time="2025-05-14T01:08:05.995551210Z" level=info msg="connecting to shim e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed" address="unix:///run/containerd/s/27184d9b848dc4a909608a58930240c2bbfe81e595664817ffa980edb2792880" namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:06.025095 systemd[1]: Started cri-containerd-e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed.scope - libcontainer container e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed. 
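Interleaved with the CNI output, systemd-networkd reports each host-side veth coming up ("Link UP", "Gained carrier", later "Gained IPv6LL"), along with the vxlan.calico overlay device. The same state can be read back from sysfs on the node; the snippet below is a convenience sketch using the interface names that appear in this log, and it assumes it runs on the node itself with /sys mounted as usual.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// operstate reads an interface's operational state from sysfs, e.g. "up".
func operstate(iface string) string {
	b, err := os.ReadFile("/sys/class/net/" + iface + "/operstate")
	if err != nil {
		return "unknown (" + err.Error() + ")"
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// Host-side veths and the overlay device named in this capture.
	for _, iface := range []string{
		"cali47a35c4dde1", "cali90687b17f1e", "calid902da80b39",
		"cali933d77b879e", "vxlan.calico",
	} {
		fmt.Printf("%s: %s\n", iface, operstate(iface))
	}
}
```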
May 14 01:08:06.049941 containerd[2738]: time="2025-05-14T01:08:06.049912203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79df9bdd84-6gckh,Uid:c18681e9-1dd6-46cc-b221-8a4c7b6eda02,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed\"" May 14 01:08:06.051081 containerd[2738]: time="2025-05-14T01:08:06.051055394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 01:08:06.091176 systemd-networkd[2631]: calic76141880a9: Link UP May 14 01:08:06.091355 systemd-networkd[2631]: calic76141880a9: Gained carrier May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.927 [INFO][7085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0 calico-apiserver-8f55f948b- calico-apiserver d811dbb7-adf4-41f9-a429-34ab8c71d029 688 0 2025-05-14 01:07:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8f55f948b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a calico-apiserver-8f55f948b-vvqdn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic76141880a9 [] []}} ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.927 [INFO][7085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.949 [INFO][7167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" HandleID="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.960 [INFO][7167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" HandleID="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-0b8132852a", "pod":"calico-apiserver-8f55f948b-vvqdn", "timestamp":"2025-05-14 01:08:05.949178818 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.960 [INFO][7167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:05.975 [INFO][7167] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.060 [INFO][7167] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.067 [INFO][7167] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.070 [INFO][7167] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.071 [INFO][7167] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.073 [INFO][7167] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.073 [INFO][7167] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.074 [INFO][7167] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2 May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.076 [INFO][7167] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7167] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.5/26] block=192.168.120.0/26 handle="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7167] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.5/26] handle="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
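The IPAM sequence above claims 192.168.120.5 for the calico-apiserver pod out of this node's affine block 192.168.120.0/26, the same /26 that already holds 192.168.120.4 for the calico-kube-controllers pod. As a purely illustrative check (not part of the captured journal), Python's standard ipaddress module can confirm that both addresses sit inside that block and show how many addresses the block spans:

import ipaddress

# Affine block and pod addresses taken from the Calico IPAM entries above.
block = ipaddress.ip_network("192.168.120.0/26")
assigned = [ipaddress.ip_address(a) for a in ("192.168.120.4", "192.168.120.5")]

for addr in assigned:
    # Each claimed pod IP should fall inside the node's affine /26 block.
    print(addr, "in", block, "->", addr in block)

# The /26 block spans 64 addresses available to this node's workloads.
print("block size:", block.num_addresses)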
May 14 01:08:06.097923 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.5/26] IPv6=[] ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" HandleID="k8s-pod-network.592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Workload="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.089 [INFO][7085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0", GenerateName:"calico-apiserver-8f55f948b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d811dbb7-adf4-41f9-a429-34ab8c71d029", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f55f948b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"calico-apiserver-8f55f948b-vvqdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic76141880a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.090 [INFO][7085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.5/32] ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.090 [INFO][7085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic76141880a9 ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.091 [INFO][7085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.091 [INFO][7085] cni-plugin/k8s.go 414: Added Mac, interface name, 
and active container ID to endpoint ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0", GenerateName:"calico-apiserver-8f55f948b-", Namespace:"calico-apiserver", SelfLink:"", UID:"d811dbb7-adf4-41f9-a429-34ab8c71d029", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8f55f948b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2", Pod:"calico-apiserver-8f55f948b-vvqdn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic76141880a9", MAC:"4e:a7:22:7b:da:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:06.098346 containerd[2738]: 2025-05-14 01:08:06.096 [INFO][7085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" Namespace="calico-apiserver" Pod="calico-apiserver-8f55f948b-vvqdn" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-calico--apiserver--8f55f948b--vvqdn-eth0" May 14 01:08:06.110385 containerd[2738]: time="2025-05-14T01:08:06.110353443Z" level=info msg="connecting to shim 592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2" address="unix:///run/containerd/s/bfe22e86817549fc0a0de8db0857a4ff4d20cde00672b45d0efbbf4e459fba0a" namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:06.144105 systemd[1]: Started cri-containerd-592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2.scope - libcontainer container 592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2. 
May 14 01:08:06.169719 containerd[2738]: time="2025-05-14T01:08:06.169685494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8f55f948b-vvqdn,Uid:d811dbb7-adf4-41f9-a429-34ab8c71d029,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2\"" May 14 01:08:06.171489 containerd[2738]: time="2025-05-14T01:08:06.171468127Z" level=info msg="CreateContainer within sandbox \"592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 01:08:06.174870 containerd[2738]: time="2025-05-14T01:08:06.174846299Z" level=info msg="Container 760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:06.177910 containerd[2738]: time="2025-05-14T01:08:06.177889730Z" level=info msg="CreateContainer within sandbox \"592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1\"" May 14 01:08:06.178205 containerd[2738]: time="2025-05-14T01:08:06.178187549Z" level=info msg="StartContainer for \"760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1\"" May 14 01:08:06.179138 containerd[2738]: time="2025-05-14T01:08:06.179113407Z" level=info msg="connecting to shim 760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1" address="unix:///run/containerd/s/bfe22e86817549fc0a0de8db0857a4ff4d20cde00672b45d0efbbf4e459fba0a" protocol=ttrpc version=3 May 14 01:08:06.185856 systemd-networkd[2631]: caliecb629d849e: Link UP May 14 01:08:06.186022 systemd-networkd[2631]: caliecb629d849e: Gained carrier May 14 01:08:06.202120 systemd[1]: Started cri-containerd-760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1.scope - libcontainer container 760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1. 
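The containerd messages in this stretch all share the same key=value layout (time=…, level=…, msg=…). As a small illustrative sketch, not something emitted by the system under test, a regular expression in Python can split one such entry into its three fields; real entries whose msg contains escaped quotes would need a more careful pattern than this minimal one:

import re

# A simplified entry modelled on the containerd lines quoted above (hypothetical sample text).
entry = 'time="2025-05-14T01:08:06.169685494Z" level=info msg="RunPodSandbox returns sandbox id 592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2"'

pattern = re.compile(r'time="(?P<time>[^"]+)" level=(?P<level>\S+) msg="(?P<msg>.*)"$')
match = pattern.match(entry)
if match:
    # Prints the timestamp, severity, and message body as separate fields.
    print(match.group("time"), match.group("level"), match.group("msg"), sep=" | ")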
May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:05.928 [INFO][7088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0 csi-node-driver- calico-system 7395286e-89a3-42ee-9c78-0a22650e7dbd 621 0 2025-05-14 01:07:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-0b8132852a csi-node-driver-cjbvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliecb629d849e [] []}} ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:05.928 [INFO][7088] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:05.950 [INFO][7169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" HandleID="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Workload="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:05.960 [INFO][7169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" HandleID="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Workload="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004124e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-0b8132852a", "pod":"csi-node-driver-cjbvf", "timestamp":"2025-05-14 01:08:05.949993671 +0000 UTC"}, Hostname:"ci-4284.0.0-n-0b8132852a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:05.960 [INFO][7169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.087 [INFO][7169] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-0b8132852a' May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.160 [INFO][7169] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.168 [INFO][7169] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.171 [INFO][7169] ipam/ipam.go 489: Trying affinity for 192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.172 [INFO][7169] ipam/ipam.go 155: Attempting to load block cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.174 [INFO][7169] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.120.0/26 host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.174 [INFO][7169] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.120.0/26 handle="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.175 [INFO][7169] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36 May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.178 [INFO][7169] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.120.0/26 handle="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.183 [INFO][7169] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.120.6/26] block=192.168.120.0/26 handle="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.183 [INFO][7169] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.120.6/26] handle="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" host="ci-4284.0.0-n-0b8132852a" May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.183 [INFO][7169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:08:06.203608 containerd[2738]: 2025-05-14 01:08:06.183 [INFO][7169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.6/26] IPv6=[] ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" HandleID="k8s-pod-network.702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Workload="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.184 [INFO][7088] cni-plugin/k8s.go 386: Populated endpoint ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7395286e-89a3-42ee-9c78-0a22650e7dbd", ResourceVersion:"621", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"", Pod:"csi-node-driver-cjbvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliecb629d849e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.184 [INFO][7088] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.120.6/32] ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.184 [INFO][7088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb629d849e ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.186 [INFO][7088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.186 [INFO][7088] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" 
Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7395286e-89a3-42ee-9c78-0a22650e7dbd", ResourceVersion:"621", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 7, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-0b8132852a", ContainerID:"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36", Pod:"csi-node-driver-cjbvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliecb629d849e", MAC:"3a:27:02:b2:57:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:08:06.204023 containerd[2738]: 2025-05-14 01:08:06.202 [INFO][7088] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" Namespace="calico-system" Pod="csi-node-driver-cjbvf" WorkloadEndpoint="ci--4284.0.0--n--0b8132852a-k8s-csi--node--driver--cjbvf-eth0" May 14 01:08:06.214258 containerd[2738]: time="2025-05-14T01:08:06.214222575Z" level=info msg="connecting to shim 702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36" address="unix:///run/containerd/s/9f470da283e16075c6c8b7129657c396e2be827ea8bef778d163e41c7c37873e" namespace=k8s.io protocol=ttrpc version=3 May 14 01:08:06.238123 systemd[1]: Started cri-containerd-702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36.scope - libcontainer container 702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36. 
May 14 01:08:06.240036 containerd[2738]: time="2025-05-14T01:08:06.240006557Z" level=info msg="StartContainer for \"760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1\" returns successfully" May 14 01:08:06.256303 containerd[2738]: time="2025-05-14T01:08:06.256274580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cjbvf,Uid:7395286e-89a3-42ee-9c78-0a22650e7dbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36\"" May 14 01:08:06.872094 containerd[2738]: time="2025-05-14T01:08:06.872055343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:06.872273 containerd[2738]: time="2025-05-14T01:08:06.872098186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 01:08:06.872757 containerd[2738]: time="2025-05-14T01:08:06.872731986Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:06.874335 containerd[2738]: time="2025-05-14T01:08:06.874308085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:06.874989 containerd[2738]: time="2025-05-14T01:08:06.874963166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 823.87845ms" May 14 01:08:06.875016 containerd[2738]: time="2025-05-14T01:08:06.874992888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 01:08:06.875733 containerd[2738]: time="2025-05-14T01:08:06.875714413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 01:08:06.880346 containerd[2738]: time="2025-05-14T01:08:06.880319503Z" level=info msg="CreateContainer within sandbox \"e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 01:08:06.883865 containerd[2738]: time="2025-05-14T01:08:06.883832324Z" level=info msg="Container 81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:06.887229 containerd[2738]: time="2025-05-14T01:08:06.887193055Z" level=info msg="CreateContainer within sandbox \"e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\"" May 14 01:08:06.887536 containerd[2738]: time="2025-05-14T01:08:06.887510395Z" level=info msg="StartContainer for \"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\"" May 14 01:08:06.888499 containerd[2738]: time="2025-05-14T01:08:06.888472416Z" level=info msg="connecting to shim 
81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586" address="unix:///run/containerd/s/27184d9b848dc4a909608a58930240c2bbfe81e595664817ffa980edb2792880" protocol=ttrpc version=3 May 14 01:08:06.923092 systemd[1]: Started cri-containerd-81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586.scope - libcontainer container 81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586. May 14 01:08:06.955394 containerd[2738]: time="2025-05-14T01:08:06.955366982Z" level=info msg="StartContainer for \"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" returns successfully" May 14 01:08:06.972666 kubelet[4258]: I0514 01:08:06.972621 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79df9bdd84-6gckh" podStartSLOduration=20.147866523 podStartE2EDuration="20.972608587s" podCreationTimestamp="2025-05-14 01:07:46 +0000 UTC" firstStartedPulling="2025-05-14 01:08:06.050854382 +0000 UTC m=+34.227555960" lastFinishedPulling="2025-05-14 01:08:06.875596406 +0000 UTC m=+35.052298024" observedRunningTime="2025-05-14 01:08:06.972398493 +0000 UTC m=+35.149100071" watchObservedRunningTime="2025-05-14 01:08:06.972608587 +0000 UTC m=+35.149310205" May 14 01:08:06.997031 containerd[2738]: time="2025-05-14T01:08:06.996999240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"24ff460ca166858969cd2adb55fdcc69fd02ca8fa449c1f3a16faafd6f72cfd1\" pid:7518 exit_status:1 exited_at:{seconds:1747184886 nanos:996374841}" May 14 01:08:07.107130 systemd-networkd[2631]: cali933d77b879e: Gained IPv6LL May 14 01:08:07.320653 containerd[2738]: time="2025-05-14T01:08:07.320609518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.320776 containerd[2738]: time="2025-05-14T01:08:07.320614479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 01:08:07.321333 containerd[2738]: time="2025-05-14T01:08:07.321310121Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.322834 containerd[2738]: time="2025-05-14T01:08:07.322812892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.323546 containerd[2738]: time="2025-05-14T01:08:07.323515654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 447.77428ms" May 14 01:08:07.323598 containerd[2738]: time="2025-05-14T01:08:07.323548496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 01:08:07.325102 containerd[2738]: time="2025-05-14T01:08:07.325076909Z" level=info msg="CreateContainer within sandbox \"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 01:08:07.330474 containerd[2738]: time="2025-05-14T01:08:07.330446914Z" level=info msg="Container fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:07.334479 containerd[2738]: time="2025-05-14T01:08:07.334451636Z" level=info msg="CreateContainer within sandbox \"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76\"" May 14 01:08:07.334797 containerd[2738]: time="2025-05-14T01:08:07.334767455Z" level=info msg="StartContainer for \"fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76\"" May 14 01:08:07.336101 containerd[2738]: time="2025-05-14T01:08:07.336076174Z" level=info msg="connecting to shim fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76" address="unix:///run/containerd/s/9f470da283e16075c6c8b7129657c396e2be827ea8bef778d163e41c7c37873e" protocol=ttrpc version=3 May 14 01:08:07.363093 systemd[1]: Started cri-containerd-fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76.scope - libcontainer container fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76. May 14 01:08:07.390299 containerd[2738]: time="2025-05-14T01:08:07.390271455Z" level=info msg="StartContainer for \"fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76\" returns successfully" May 14 01:08:07.391069 containerd[2738]: time="2025-05-14T01:08:07.391044222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 01:08:07.491053 systemd-networkd[2631]: vxlan.calico: Gained IPv6LL May 14 01:08:07.819494 containerd[2738]: time="2025-05-14T01:08:07.819449274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.819756 containerd[2738]: time="2025-05-14T01:08:07.819704729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 01:08:07.820158 containerd[2738]: time="2025-05-14T01:08:07.820138195Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.821735 containerd[2738]: time="2025-05-14T01:08:07.821711770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:08:07.822412 containerd[2738]: time="2025-05-14T01:08:07.822375611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 431.290227ms" May 14 01:08:07.822467 containerd[2738]: time="2025-05-14T01:08:07.822414573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 01:08:07.824046 containerd[2738]: 
time="2025-05-14T01:08:07.824018590Z" level=info msg="CreateContainer within sandbox \"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 01:08:07.828525 containerd[2738]: time="2025-05-14T01:08:07.828487061Z" level=info msg="Container 5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431: CDI devices from CRI Config.CDIDevices: []" May 14 01:08:07.833097 containerd[2738]: time="2025-05-14T01:08:07.833068618Z" level=info msg="CreateContainer within sandbox \"702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431\"" May 14 01:08:07.833412 containerd[2738]: time="2025-05-14T01:08:07.833384877Z" level=info msg="StartContainer for \"5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431\"" May 14 01:08:07.834760 containerd[2738]: time="2025-05-14T01:08:07.834730438Z" level=info msg="connecting to shim 5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431" address="unix:///run/containerd/s/9f470da283e16075c6c8b7129657c396e2be827ea8bef778d163e41c7c37873e" protocol=ttrpc version=3 May 14 01:08:07.857105 systemd[1]: Started cri-containerd-5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431.scope - libcontainer container 5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431. May 14 01:08:07.884872 containerd[2738]: time="2025-05-14T01:08:07.884840552Z" level=info msg="StartContainer for \"5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431\" returns successfully" May 14 01:08:07.939080 systemd-networkd[2631]: caliecb629d849e: Gained IPv6LL May 14 01:08:07.940598 kubelet[4258]: I0514 01:08:07.940577 4258 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 01:08:07.940634 kubelet[4258]: I0514 01:08:07.940606 4258 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 01:08:07.971468 kubelet[4258]: I0514 01:08:07.971440 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:07.979309 kubelet[4258]: I0514 01:08:07.979270 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cjbvf" podStartSLOduration=20.413496059 podStartE2EDuration="21.979257427s" podCreationTimestamp="2025-05-14 01:07:46 +0000 UTC" firstStartedPulling="2025-05-14 01:08:06.257171396 +0000 UTC m=+34.433873014" lastFinishedPulling="2025-05-14 01:08:07.822932804 +0000 UTC m=+35.999634382" observedRunningTime="2025-05-14 01:08:07.979107778 +0000 UTC m=+36.155809396" watchObservedRunningTime="2025-05-14 01:08:07.979257427 +0000 UTC m=+36.155959045" May 14 01:08:07.979703 kubelet[4258]: I0514 01:08:07.979681 4258 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8f55f948b-vvqdn" podStartSLOduration=22.979675732 podStartE2EDuration="22.979675732s" podCreationTimestamp="2025-05-14 01:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:08:06.979392933 +0000 UTC m=+35.156094551" watchObservedRunningTime="2025-05-14 01:08:07.979675732 +0000 UTC 
m=+36.156377350" May 14 01:08:08.014824 containerd[2738]: time="2025-05-14T01:08:08.014784627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"ed0ff6cdc1ad2d4cb7cc7f5ea578ca3435515ec58cc532b3668e5f1f61d2320d\" pid:7633 exited_at:{seconds:1747184888 nanos:14461608}" May 14 01:08:08.131081 systemd-networkd[2631]: calic76141880a9: Gained IPv6LL May 14 01:08:14.734254 kubelet[4258]: I0514 01:08:14.734206 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:14.784044 containerd[2738]: time="2025-05-14T01:08:14.784012983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"05afd6d8f561e46d2292bea357253e6564f1899f1bae5d2a7e2d95be8c67fcd6\" pid:7689 exited_at:{seconds:1747184894 nanos:783801813}" May 14 01:08:14.849640 containerd[2738]: time="2025-05-14T01:08:14.849607712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"611ca46f01410e2ffb6b3542650a625d015c7f15b5fd087f34ab315a7e74f362\" pid:7719 exited_at:{seconds:1747184894 nanos:849402142}" May 14 01:08:24.639909 kubelet[4258]: I0514 01:08:24.639753 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:27.394288 containerd[2738]: time="2025-05-14T01:08:27.394253214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"0439c3fefdb2172ca0bbf3448a6ac040e7942a0ab52b75dfd11cb586b233c4fa\" pid:7786 exited_at:{seconds:1747184907 nanos:394103969}" May 14 01:08:43.085678 containerd[2738]: time="2025-05-14T01:08:43.085638161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"e185024f19228893d8a85cc622fef4edd64ba6b1746260c0f32f15d764514a94\" pid:7811 exited_at:{seconds:1747184923 nanos:85444155}" May 14 01:08:44.794668 containerd[2738]: time="2025-05-14T01:08:44.794635145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"3e443ce3863159c512633d4e89e55a08b1fc484bd95564347d2c40c53610499f\" pid:7832 exited_at:{seconds:1747184924 nanos:794393312}" May 14 01:08:50.099573 kubelet[4258]: I0514 01:08:50.099524 4258 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:08:57.398444 containerd[2738]: time="2025-05-14T01:08:57.398393929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"3d8adf5fbc0b4c7e83e98987492184ee5c730c428a4bb19f8c3f74b2206a01c7\" pid:7873 exited_at:{seconds:1747184937 nanos:398190732}" May 14 01:09:14.786336 containerd[2738]: time="2025-05-14T01:09:14.786290640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"309a9290ae989dfec160aee687b6c01db5aedfb27d4fffaaa39f7789cb7abe4f\" pid:7901 exited_at:{seconds:1747184954 nanos:785944760}" May 14 01:09:27.389744 containerd[2738]: time="2025-05-14T01:09:27.389693678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"31aa015e694c061d2b804f42be81cca1654185ef301ebf9ac61f4b7d98084f59\" pid:7939 
exited_at:{seconds:1747184967 nanos:389481797}" May 14 01:09:43.093497 containerd[2738]: time="2025-05-14T01:09:43.093451018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"87f09b7f8a7dbfa119d3d59a9d0dbbdac1d06e9c56f32d8e72d3fe276b87b959\" pid:7984 exited_at:{seconds:1747184983 nanos:93253496}" May 14 01:09:44.789370 containerd[2738]: time="2025-05-14T01:09:44.789320649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"5f7bc3875043ebea1ad50c7e7e1bf80fabc9c0f1ca924eed4145c11f9aeba498\" pid:8005 exited_at:{seconds:1747184984 nanos:789012325}" May 14 01:09:57.398504 containerd[2738]: time="2025-05-14T01:09:57.398431052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"ea836e9e3d4fc6fc8ef8aad33a3c06113615f7075c2a804acc830f7e8e652e4d\" pid:8039 exited_at:{seconds:1747184997 nanos:398263770}" May 14 01:10:14.785549 containerd[2738]: time="2025-05-14T01:10:14.785508248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"40f196f1bbc63d8ea16cfa7f440e761096dfe4d47dda722852981bf74673785c\" pid:8089 exited_at:{seconds:1747185014 nanos:785092321}" May 14 01:10:27.391400 containerd[2738]: time="2025-05-14T01:10:27.391357870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"0fd3f24ef10c42552a88f76f39b12ea405576e96026b9bef0557adea176b29d1\" pid:8120 exited_at:{seconds:1747185027 nanos:391157827}" May 14 01:10:43.098372 containerd[2738]: time="2025-05-14T01:10:43.098333047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"93d76a321753d9987a3c09cb3a60cc3c852a271802bc396132831fea224efbc7\" pid:8151 exited_at:{seconds:1747185043 nanos:98167524}" May 14 01:10:44.789175 containerd[2738]: time="2025-05-14T01:10:44.789136286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"3f3e710f1e81f17c85486097294eee77cbab1fb4729c96c879c56d48a97531ee\" pid:8173 exited_at:{seconds:1747185044 nanos:788933722}" May 14 01:10:57.388757 containerd[2738]: time="2025-05-14T01:10:57.388704399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"5ef4e9873a2435b11db56938e8ae70e6434affc7b397920e78dbfaed696f54da\" pid:8211 exited_at:{seconds:1747185057 nanos:388488763}" May 14 01:11:14.785914 containerd[2738]: time="2025-05-14T01:11:14.785865188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"c366fd96e97bd456abb989b88607100be0cc1a5c51fdd9bb49de02657a3d142b\" pid:8245 exited_at:{seconds:1747185074 nanos:785455553}" May 14 01:11:27.394297 containerd[2738]: time="2025-05-14T01:11:27.394215271Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"8e2b70ba873fba0328a8529c54e56652b78fefe9851e1e65e3addf35bd39c371\" pid:8288 exited_at:{seconds:1747185087 nanos:394058072}" May 14 01:11:43.085764 containerd[2738]: time="2025-05-14T01:11:43.085724435Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"0092e94b0ac5535d550a7efffc2dae1747e54d337a85458eb8637a1fd41deae5\" pid:8316 exited_at:{seconds:1747185103 nanos:85555316}" May 14 01:11:44.782453 containerd[2738]: time="2025-05-14T01:11:44.782422875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"c69363e092ef2db80497ec17a2aa1a4bf12e8656b91cf9a0d0ce7360f4f62cda\" pid:8338 exited_at:{seconds:1747185104 nanos:782179955}" May 14 01:11:57.388545 containerd[2738]: time="2025-05-14T01:11:57.388504495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"f91681899477e912122f8f18d3855c9c198652bae1d1e6ca46b790df564ff576\" pid:8377 exited_at:{seconds:1747185117 nanos:388340655}" May 14 01:12:14.787600 containerd[2738]: time="2025-05-14T01:12:14.787558279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"13ee1210313d0e015240f0bac5f19698d4767c912465bd8768c05763b54794c2\" pid:8420 exited_at:{seconds:1747185134 nanos:787295277}" May 14 01:12:27.392689 containerd[2738]: time="2025-05-14T01:12:27.392650539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"92cb5881342f9147f1979782ddfacbe5a34283023defa5eb4c2d779893b458a9\" pid:8450 exited_at:{seconds:1747185147 nanos:392457298}" May 14 01:12:27.967639 containerd[2738]: time="2025-05-14T01:12:27.967522114Z" level=warning msg="container event discarded" container=a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d type=CONTAINER_CREATED_EVENT May 14 01:12:27.967639 containerd[2738]: time="2025-05-14T01:12:27.967595674Z" level=warning msg="container event discarded" container=a36e4ffc4caa440522ee844f7f771530e6d7774ad9a738360735210a1d212a1d type=CONTAINER_STARTED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981841130Z" level=warning msg="container event discarded" container=0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30 type=CONTAINER_CREATED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981884570Z" level=warning msg="container event discarded" container=0fef226c39a7e97331c6252d1c0cf39a1f363b78dcbb3bf2c2fe147630ac1d30 type=CONTAINER_STARTED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981900890Z" level=warning msg="container event discarded" container=ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a type=CONTAINER_CREATED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981914930Z" level=warning msg="container event discarded" container=ff6c071ff7b4175308f9facffe9d99efc76ce36669281a41c67438ded9fa182a type=CONTAINER_STARTED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981927290Z" level=warning msg="container event discarded" container=5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a type=CONTAINER_CREATED_EVENT May 14 01:12:27.981939 containerd[2738]: time="2025-05-14T01:12:27.981940490Z" level=warning msg="container event discarded" container=3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf type=CONTAINER_CREATED_EVENT May 14 01:12:27.993822 containerd[2738]: time="2025-05-14T01:12:27.993768890Z" 
level=warning msg="container event discarded" container=1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62 type=CONTAINER_CREATED_EVENT May 14 01:12:28.046064 containerd[2738]: time="2025-05-14T01:12:28.046047686Z" level=warning msg="container event discarded" container=3090b96bc4fd22d088db1d5531631c5692184e9d124f3f64f091e452e06d52bf type=CONTAINER_STARTED_EVENT May 14 01:12:28.046195 containerd[2738]: time="2025-05-14T01:12:28.046162607Z" level=warning msg="container event discarded" container=1bac0fca46a84989e4c862892aa9e2fd275d2ab682e94a6f6480b0ecb1549e62 type=CONTAINER_STARTED_EVENT May 14 01:12:28.046195 containerd[2738]: time="2025-05-14T01:12:28.046178207Z" level=warning msg="container event discarded" container=5739766860cbfb404485f5b6d7ca03f1ca73c91d62bfc045b72e54495654ab0a type=CONTAINER_STARTED_EVENT May 14 01:12:40.461919 containerd[2738]: time="2025-05-14T01:12:40.461839718Z" level=warning msg="container event discarded" container=2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15 type=CONTAINER_CREATED_EVENT May 14 01:12:40.461919 containerd[2738]: time="2025-05-14T01:12:40.461893679Z" level=warning msg="container event discarded" container=2efb615b905424827ec86edd87fdfc2208e26053ca09418ffc114bd400dcca15 type=CONTAINER_STARTED_EVENT May 14 01:12:40.474101 containerd[2738]: time="2025-05-14T01:12:40.474061099Z" level=warning msg="container event discarded" container=dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468 type=CONTAINER_CREATED_EVENT May 14 01:12:40.526291 containerd[2738]: time="2025-05-14T01:12:40.526242327Z" level=warning msg="container event discarded" container=dc291839f5e64bce1dedbe37c6b5c8af014655c7b3ea38110b77825280e8e468 type=CONTAINER_STARTED_EVENT May 14 01:12:40.615572 containerd[2738]: time="2025-05-14T01:12:40.615530900Z" level=warning msg="container event discarded" container=813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9 type=CONTAINER_CREATED_EVENT May 14 01:12:40.615572 containerd[2738]: time="2025-05-14T01:12:40.615554541Z" level=warning msg="container event discarded" container=813edb17d9a467566d31fc8581e2a9aa479a4fd0e87e4bbff9246e225dbdb7e9 type=CONTAINER_STARTED_EVENT May 14 01:12:41.931720 containerd[2738]: time="2025-05-14T01:12:41.931645366Z" level=warning msg="container event discarded" container=27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8 type=CONTAINER_CREATED_EVENT May 14 01:12:41.975135 containerd[2738]: time="2025-05-14T01:12:41.975108407Z" level=warning msg="container event discarded" container=27c45c52a56c47e07114df7779c0c40aaf46f91aeef9def2b833af78f6ac57c8 type=CONTAINER_STARTED_EVENT May 14 01:12:43.086526 containerd[2738]: time="2025-05-14T01:12:43.086489169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"394cf00d16d6d446d77c7d004ccb693ce069d419938f2cff93121fa6d554a49c\" pid:8479 exited_at:{seconds:1747185163 nanos:86278847}" May 14 01:12:44.787542 containerd[2738]: time="2025-05-14T01:12:44.787497978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"060fa2f721928d2b3114863ff2554c731c7776bccb161ef64754dd13fea8b943\" pid:8501 exited_at:{seconds:1747185164 nanos:787282456}" May 14 01:12:46.589472 containerd[2738]: time="2025-05-14T01:12:46.589374117Z" level=warning msg="container event discarded" container=a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3 
type=CONTAINER_CREATED_EVENT
May 14 01:12:46.589472 containerd[2738]: time="2025-05-14T01:12:46.589436398Z" level=warning msg="container event discarded" container=a5da550e4fc927d0d3d7994f6f44654943a40e11b04e641b95e38170a48ae4c3 type=CONTAINER_STARTED_EVENT
May 14 01:12:46.776566 containerd[2738]: time="2025-05-14T01:12:46.776511446Z" level=warning msg="container event discarded" container=19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420 type=CONTAINER_CREATED_EVENT
May 14 01:12:46.776566 containerd[2738]: time="2025-05-14T01:12:46.776548326Z" level=warning msg="container event discarded" container=19fcb7ea11c7b0a2a683d8c3744bf97bc7202bf3e480dc623025bc01ed5aa420 type=CONTAINER_STARTED_EVENT
May 14 01:12:47.394898 containerd[2738]: time="2025-05-14T01:12:47.394863931Z" level=warning msg="container event discarded" container=481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac type=CONTAINER_CREATED_EVENT
May 14 01:12:47.449412 containerd[2738]: time="2025-05-14T01:12:47.449336337Z" level=warning msg="container event discarded" container=481b2bf50c7a9d768d6839b4262f3acff06a90fea0fd2445026bfe1a0f9c68ac type=CONTAINER_STARTED_EVENT
May 14 01:12:47.862717 containerd[2738]: time="2025-05-14T01:12:47.862688858Z" level=warning msg="container event discarded" container=e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04 type=CONTAINER_CREATED_EVENT
May 14 01:12:47.913606 containerd[2738]: time="2025-05-14T01:12:47.913551751Z" level=warning msg="container event discarded" container=e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04 type=CONTAINER_STARTED_EVENT
May 14 01:12:48.059786 containerd[2738]: time="2025-05-14T01:12:48.059764978Z" level=warning msg="container event discarded" container=e42b546b685333949ca971f692a4df359058d5105651cffdb0bc2ea67f070c04 type=CONTAINER_STOPPED_EVENT
May 14 01:12:50.527198 containerd[2738]: time="2025-05-14T01:12:50.527128847Z" level=warning msg="container event discarded" container=c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea type=CONTAINER_CREATED_EVENT
May 14 01:12:50.579413 containerd[2738]: time="2025-05-14T01:12:50.579362327Z" level=warning msg="container event discarded" container=c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea type=CONTAINER_STARTED_EVENT
May 14 01:12:51.138600 containerd[2738]: time="2025-05-14T01:12:51.138559792Z" level=warning msg="container event discarded" container=c22c71c1f64cfcdb0780c278528cedda9b0d06f55ad017bfe83544fac79c01ea type=CONTAINER_STOPPED_EVENT
May 14 01:12:54.791331 containerd[2738]: time="2025-05-14T01:12:54.791122629Z" level=warning msg="container event discarded" container=6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696 type=CONTAINER_CREATED_EVENT
May 14 01:12:54.852490 containerd[2738]: time="2025-05-14T01:12:54.852455414Z" level=warning msg="container event discarded" container=6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696 type=CONTAINER_STARTED_EVENT
May 14 01:12:57.395715 containerd[2738]: time="2025-05-14T01:12:57.395683825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"daf884de4e5b0059e62adb785f5cbeb114bed9c5c0ba48727facf10e177ea212\" pid:8550 exited_at:{seconds:1747185177 nanos:395517744}"
May 14 01:13:02.094593 containerd[2738]: time="2025-05-14T01:13:02.094537478Z" level=warning msg="container event discarded" container=63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0 type=CONTAINER_CREATED_EVENT
May 14 01:13:02.094593 containerd[2738]: time="2025-05-14T01:13:02.094581479Z" level=warning msg="container event discarded" container=63679e2230c11a6cf1c2fc8bd1425b53e3f0cf5f8092e738608ccff7db66a0c0 type=CONTAINER_STARTED_EVENT
May 14 01:13:02.094593 containerd[2738]: time="2025-05-14T01:13:02.094598079Z" level=warning msg="container event discarded" container=4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79 type=CONTAINER_CREATED_EVENT
May 14 01:13:02.144797 containerd[2738]: time="2025-05-14T01:13:02.144761308Z" level=warning msg="container event discarded" container=4335be0411cc7e26fdbd57a027fc157647e5d7ac75ee25231b6eb357f627ef79 type=CONTAINER_STARTED_EVENT
May 14 01:13:03.158771 containerd[2738]: time="2025-05-14T01:13:03.158695897Z" level=warning msg="container event discarded" container=dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00 type=CONTAINER_CREATED_EVENT
May 14 01:13:03.158771 containerd[2738]: time="2025-05-14T01:13:03.158742258Z" level=warning msg="container event discarded" container=dd2dc6a601a80bcf55982f918ad947b60027c993d3b9c6e20870761f3f872f00 type=CONTAINER_STARTED_EVENT
May 14 01:13:03.158771 containerd[2738]: time="2025-05-14T01:13:03.158750538Z" level=warning msg="container event discarded" container=ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5 type=CONTAINER_CREATED_EVENT
May 14 01:13:03.213999 containerd[2738]: time="2025-05-14T01:13:03.213922462Z" level=warning msg="container event discarded" container=ec99a29da211bd410147696186a91983fb298ad917b77988696bfda7dea45cf5 type=CONTAINER_STARTED_EVENT
May 14 01:13:03.277197 containerd[2738]: time="2025-05-14T01:13:03.277147029Z" level=warning msg="container event discarded" container=d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51 type=CONTAINER_CREATED_EVENT
May 14 01:13:03.277197 containerd[2738]: time="2025-05-14T01:13:03.277176869Z" level=warning msg="container event discarded" container=d07622cf76b1d1395372f2ed78084578cb1ef09e02fbaa8dc46d06734c653a51 type=CONTAINER_STARTED_EVENT
May 14 01:13:04.104690 containerd[2738]: time="2025-05-14T01:13:04.104664542Z" level=warning msg="container event discarded" container=f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb type=CONTAINER_CREATED_EVENT
May 14 01:13:04.158966 containerd[2738]: time="2025-05-14T01:13:04.158933421Z" level=warning msg="container event discarded" container=f08d1b43e94b22dea1d963b7dce4b1dc38cde6ee7c316ca1ec50ddc5e94febdb type=CONTAINER_STARTED_EVENT
May 14 01:13:06.060227 containerd[2738]: time="2025-05-14T01:13:06.060170966Z" level=warning msg="container event discarded" container=e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed type=CONTAINER_CREATED_EVENT
May 14 01:13:06.060227 containerd[2738]: time="2025-05-14T01:13:06.060205447Z" level=warning msg="container event discarded" container=e5e2169f5f85ef0c09e208d8feafc025ef44bcd761bbf58635776e939a44e3ed type=CONTAINER_STARTED_EVENT
May 14 01:13:06.180544 containerd[2738]: time="2025-05-14T01:13:06.180486223Z" level=warning msg="container event discarded" container=592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2 type=CONTAINER_CREATED_EVENT
May 14 01:13:06.180544 containerd[2738]: time="2025-05-14T01:13:06.180515263Z" level=warning msg="container event discarded" container=592346596a6865168dadccadb54d9f9875f2cf5b0bc0245be0698381aa9d0de2 type=CONTAINER_STARTED_EVENT
May 14 01:13:06.180544 containerd[2738]: time="2025-05-14T01:13:06.180523263Z" level=warning msg="container event discarded" container=760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1 type=CONTAINER_CREATED_EVENT
May 14 01:13:06.249744 containerd[2738]: time="2025-05-14T01:13:06.249704545Z" level=warning msg="container event discarded" container=760f2ea8aa4e30ccee80c18aff79fb60e0725edbcacdbf2ae8a997f06eb530d1 type=CONTAINER_STARTED_EVENT
May 14 01:13:06.267054 containerd[2738]: time="2025-05-14T01:13:06.267012086Z" level=warning msg="container event discarded" container=702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36 type=CONTAINER_CREATED_EVENT
May 14 01:13:06.267054 containerd[2738]: time="2025-05-14T01:13:06.267034926Z" level=warning msg="container event discarded" container=702781b30074ddaee48b87e58e0cc1219dd1531e0d63aecd5be95fd1a4998b36 type=CONTAINER_STARTED_EVENT
May 14 01:13:06.897521 containerd[2738]: time="2025-05-14T01:13:06.897472070Z" level=warning msg="container event discarded" container=81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586 type=CONTAINER_CREATED_EVENT
May 14 01:13:06.964825 containerd[2738]: time="2025-05-14T01:13:06.964770293Z" level=warning msg="container event discarded" container=81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586 type=CONTAINER_STARTED_EVENT
May 14 01:13:07.343631 containerd[2738]: time="2025-05-14T01:13:07.343581352Z" level=warning msg="container event discarded" container=fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76 type=CONTAINER_CREATED_EVENT
May 14 01:13:07.399827 containerd[2738]: time="2025-05-14T01:13:07.399782743Z" level=warning msg="container event discarded" container=fd1245a009ca84d156824c1e8a9b8af49c3d1e8d17acfa7762a66fbb0942db76 type=CONTAINER_STARTED_EVENT
May 14 01:13:07.842435 containerd[2738]: time="2025-05-14T01:13:07.842300635Z" level=warning msg="container event discarded" container=5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431 type=CONTAINER_CREATED_EVENT
May 14 01:13:07.894621 containerd[2738]: time="2025-05-14T01:13:07.894581304Z" level=warning msg="container event discarded" container=5c907c835c3d75b76e0f5c52e7c53a9fd89d1328e9c4d0fda95e79733bb4e431 type=CONTAINER_STARTED_EVENT
May 14 01:13:14.784314 containerd[2738]: time="2025-05-14T01:13:14.784274668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"91075893388c523c51db4c95109059ad1f9c3ec66680420d4c40a28a6d36433d\" pid:8579 exited_at:{seconds:1747185194 nanos:784037545}"
May 14 01:13:27.392797 containerd[2738]: time="2025-05-14T01:13:27.392730510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"ae1cd8ad45f051d1e99f257d205fcaa580738eee50f62519e5f624071848cd45\" pid:8616 exited_at:{seconds:1747185207 nanos:392551148}"
May 14 01:13:43.089590 containerd[2738]: time="2025-05-14T01:13:43.089548417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"e8216c5cd7a0a37bbd40b7e08fe9a4b6138fd303f922e45df90aaed0d796310a\" pid:8642 exited_at:{seconds:1747185223 nanos:89407855}"
May 14 01:13:44.790096 containerd[2738]: time="2025-05-14T01:13:44.790043955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"f489cacfd4d7f551fd36721f0a75a81ef11cd4452bd1accfe4816ea09d1532df\" pid:8665 exited_at:{seconds:1747185224 nanos:789592829}"
May 14 01:13:57.393833 containerd[2738]: time="2025-05-14T01:13:57.393792955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"fa74059f20f2d096ffbd512de65616facf2291833a6958cddc2df480499666b8\" pid:8700 exited_at:{seconds:1747185237 nanos:393600392}"
May 14 01:14:14.789375 containerd[2738]: time="2025-05-14T01:14:14.789303830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"b6e1ef7e20db08f8594293dc50906acf0338811e07801e3cce3f1a50245c2c38\" pid:8725 exited_at:{seconds:1747185254 nanos:789053907}"
May 14 01:14:27.389549 containerd[2738]: time="2025-05-14T01:14:27.389456361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"c40cd61021312c6fb2ebf166e4d3aaf9768f4595b61bb2f5707332dbed741449\" pid:8767 exited_at:{seconds:1747185267 nanos:389287719}"
May 14 01:14:43.089646 containerd[2738]: time="2025-05-14T01:14:43.089601999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"9e6b72709adca9209bf81803a7e2d9ff9cb29047658bade5178e5a00bc5a933a\" pid:8809 exited_at:{seconds:1747185283 nanos:89428757}"
May 14 01:14:44.785613 containerd[2738]: time="2025-05-14T01:14:44.785576294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"70387293de66b97913ec34cd820db482bd064099fecafa300d013ccefb27645e\" pid:8830 exited_at:{seconds:1747185284 nanos:785214289}"
May 14 01:14:57.389654 containerd[2738]: time="2025-05-14T01:14:57.389604669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"24e51d583a2d75beabeb753696be61e2bbf40a52954a6a7635cfc0da28d11ad6\" pid:8858 exited_at:{seconds:1747185297 nanos:389368706}"
May 14 01:15:14.786431 containerd[2738]: time="2025-05-14T01:15:14.786383600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"d40d3269101a0facbaee86a13d9f57b735879b9246b35510356fce19dbbdb2b8\" pid:8884 exited_at:{seconds:1747185314 nanos:786131523}"
May 14 01:15:27.393538 containerd[2738]: time="2025-05-14T01:15:27.393495442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"2baabd26d47a55f4173956b6651d70106107a25750b87dcc89109450b4d320a7\" pid:8915 exited_at:{seconds:1747185327 nanos:393339683}"
May 14 01:15:43.086490 containerd[2738]: time="2025-05-14T01:15:43.086438552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"0c9967d9e1ca7a1db34b28f5318e37e0f6147a2f06b716a0115256f2c763bb1e\" pid:8941 exited_at:{seconds:1747185343 nanos:86237914}"
May 14 01:15:44.791601 containerd[2738]: time="2025-05-14T01:15:44.791560561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"38fabe83aef66a862d59077f8cdb6654033f4dd7e068af92b07b46539eb1490a\" pid:8963 exited_at:{seconds:1747185344 nanos:791375403}"
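The run above is dominated by containerd "container event discarded" warnings; each entry still records the affected container ID and the event type that was dropped. A quick way to see which containers and event types are involved is to tally them. The following is a minimal sketch only, assuming journal text in exactly the shape shown is fed on stdin; the script, its field handling, and its output format are illustrative and not part of containerd or Flatcar.

```python
#!/usr/bin/env python3
# Tally containerd "container event discarded" warnings per container and event type.
# Illustrative sketch: assumes journal lines shaped like the ones above, e.g.
#   May 14 01:12:46.589472 containerd[2738]: time="..." level=warning
#   msg="container event discarded" container=<64-hex id> type=CONTAINER_STARTED_EVENT
import re
import sys
from collections import Counter

PATTERN = re.compile(
    r'msg="container event discarded"\s+container=(?P<container>[0-9a-f]{64})\s+'
    r'type=(?P<type>[A-Z_]+)'
)

def tally(lines):
    counts = Counter()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            # Truncate the container ID to 12 characters for readable output.
            counts[(m.group("container")[:12], m.group("type"))] += 1
    return counts

if __name__ == "__main__":
    for (container, event_type), n in sorted(tally(sys.stdin).items()):
        print(f"{container}  {event_type:<25} {n}")
```

An invocation such as `journalctl -t containerd | python3 tally_discarded.py` (the script name here is arbitrary) would summarize the dropped events per container rather than scrolling through each warning.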
May 14 01:15:49.195938 systemd[1]: Started sshd@7-147.28.151.154:22-139.178.68.195:51606.service - OpenSSH per-connection server daemon (139.178.68.195:51606).
May 14 01:15:49.619137 sshd[8983]: Accepted publickey for core from 139.178.68.195 port 51606 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:15:49.620244 sshd-session[8983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:15:49.624056 systemd-logind[2719]: New session 10 of user core.
May 14 01:15:49.640073 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 01:15:49.980473 sshd[8985]: Connection closed by 139.178.68.195 port 51606
May 14 01:15:49.980823 sshd-session[8983]: pam_unix(sshd:session): session closed for user core
May 14 01:15:49.983861 systemd[1]: sshd@7-147.28.151.154:22-139.178.68.195:51606.service: Deactivated successfully.
May 14 01:15:49.986156 systemd[1]: session-10.scope: Deactivated successfully.
May 14 01:15:49.986721 systemd-logind[2719]: Session 10 logged out. Waiting for processes to exit.
May 14 01:15:49.987387 systemd-logind[2719]: Removed session 10.
May 14 01:15:55.052089 systemd[1]: Started sshd@8-147.28.151.154:22-139.178.68.195:35174.service - OpenSSH per-connection server daemon (139.178.68.195:35174).
May 14 01:15:55.471828 sshd[9027]: Accepted publickey for core from 139.178.68.195 port 35174 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:15:55.472878 sshd-session[9027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:15:55.476120 systemd-logind[2719]: New session 11 of user core.
May 14 01:15:55.487125 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 01:15:55.823304 sshd[9029]: Connection closed by 139.178.68.195 port 35174
May 14 01:15:55.823625 sshd-session[9027]: pam_unix(sshd:session): session closed for user core
May 14 01:15:55.826557 systemd[1]: sshd@8-147.28.151.154:22-139.178.68.195:35174.service: Deactivated successfully.
May 14 01:15:55.828291 systemd[1]: session-11.scope: Deactivated successfully.
May 14 01:15:55.828852 systemd-logind[2719]: Session 11 logged out. Waiting for processes to exit.
May 14 01:15:55.829460 systemd-logind[2719]: Removed session 11.
May 14 01:15:55.905017 systemd[1]: Started sshd@9-147.28.151.154:22-139.178.68.195:35178.service - OpenSSH per-connection server daemon (139.178.68.195:35178).
May 14 01:15:56.324557 sshd[9065]: Accepted publickey for core from 139.178.68.195 port 35178 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:15:56.325579 sshd-session[9065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:15:56.328749 systemd-logind[2719]: New session 12 of user core.
May 14 01:15:56.342140 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 01:15:56.707423 sshd[9069]: Connection closed by 139.178.68.195 port 35178
May 14 01:15:56.707789 sshd-session[9065]: pam_unix(sshd:session): session closed for user core
May 14 01:15:56.710719 systemd[1]: sshd@9-147.28.151.154:22-139.178.68.195:35178.service: Deactivated successfully.
May 14 01:15:56.712467 systemd[1]: session-12.scope: Deactivated successfully.
May 14 01:15:56.713042 systemd-logind[2719]: Session 12 logged out. Waiting for processes to exit.
May 14 01:15:56.713623 systemd-logind[2719]: Removed session 12.
May 14 01:15:56.788028 systemd[1]: Started sshd@10-147.28.151.154:22-139.178.68.195:35190.service - OpenSSH per-connection server daemon (139.178.68.195:35190).
May 14 01:15:57.219513 sshd[9105]: Accepted publickey for core from 139.178.68.195 port 35190 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:15:57.220513 sshd-session[9105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:15:57.223523 systemd-logind[2719]: New session 13 of user core.
May 14 01:15:57.233077 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 01:15:57.392460 containerd[2738]: time="2025-05-14T01:15:57.392424750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"c11e6232d9da55c6129a7907c2eec33283251e6c66202c2c26a599cf9decabee\" pid:9121 exited_at:{seconds:1747185357 nanos:392252511}"
May 14 01:15:57.578847 sshd[9107]: Connection closed by 139.178.68.195 port 35190
May 14 01:15:57.579175 sshd-session[9105]: pam_unix(sshd:session): session closed for user core
May 14 01:15:57.582041 systemd[1]: sshd@10-147.28.151.154:22-139.178.68.195:35190.service: Deactivated successfully.
May 14 01:15:57.583772 systemd[1]: session-13.scope: Deactivated successfully.
May 14 01:15:57.584389 systemd-logind[2719]: Session 13 logged out. Waiting for processes to exit.
May 14 01:15:57.585018 systemd-logind[2719]: Removed session 13.
May 14 01:16:02.653103 systemd[1]: Started sshd@11-147.28.151.154:22-139.178.68.195:35198.service - OpenSSH per-connection server daemon (139.178.68.195:35198).
May 14 01:16:03.064345 sshd[9178]: Accepted publickey for core from 139.178.68.195 port 35198 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:03.065368 sshd-session[9178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:03.068565 systemd-logind[2719]: New session 14 of user core.
May 14 01:16:03.080076 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 01:16:03.421316 sshd[9195]: Connection closed by 139.178.68.195 port 35198
May 14 01:16:03.421647 sshd-session[9178]: pam_unix(sshd:session): session closed for user core
May 14 01:16:03.424509 systemd[1]: sshd@11-147.28.151.154:22-139.178.68.195:35198.service: Deactivated successfully.
May 14 01:16:03.426265 systemd[1]: session-14.scope: Deactivated successfully.
May 14 01:16:03.426838 systemd-logind[2719]: Session 14 logged out. Waiting for processes to exit.
May 14 01:16:03.427507 systemd-logind[2719]: Removed session 14.
May 14 01:16:08.493039 systemd[1]: Started sshd@12-147.28.151.154:22-139.178.68.195:37184.service - OpenSSH per-connection server daemon (139.178.68.195:37184).
May 14 01:16:08.902055 sshd[9232]: Accepted publickey for core from 139.178.68.195 port 37184 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:08.903036 sshd-session[9232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:08.906145 systemd-logind[2719]: New session 15 of user core.
May 14 01:16:08.919077 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 01:16:09.251535 sshd[9234]: Connection closed by 139.178.68.195 port 37184
May 14 01:16:09.251913 sshd-session[9232]: pam_unix(sshd:session): session closed for user core
May 14 01:16:09.254779 systemd[1]: sshd@12-147.28.151.154:22-139.178.68.195:37184.service: Deactivated successfully.
May 14 01:16:09.256550 systemd[1]: session-15.scope: Deactivated successfully.
May 14 01:16:09.257087 systemd-logind[2719]: Session 15 logged out. Waiting for processes to exit.
May 14 01:16:09.257652 systemd-logind[2719]: Removed session 15.
May 14 01:16:14.331034 systemd[1]: Started sshd@13-147.28.151.154:22-139.178.68.195:36258.service - OpenSSH per-connection server daemon (139.178.68.195:36258).
May 14 01:16:14.762564 sshd[9270]: Accepted publickey for core from 139.178.68.195 port 36258 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:14.763647 sshd-session[9270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:14.766792 systemd-logind[2719]: New session 16 of user core.
May 14 01:16:14.768001 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 01:16:14.788946 containerd[2738]: time="2025-05-14T01:16:14.788910574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6463d8b3de2474a1960911f00885aae0bd3dac149504522affd0736510064696\" id:\"bb77efc2382afd7302d774569bf75b5c1e240e6c19af452b97e623bfd8da37e8\" pid:9283 exited_at:{seconds:1747185374 nanos:788548415}"
May 14 01:16:15.119624 sshd[9295]: Connection closed by 139.178.68.195 port 36258
May 14 01:16:15.119908 sshd-session[9270]: pam_unix(sshd:session): session closed for user core
May 14 01:16:15.122846 systemd[1]: sshd@13-147.28.151.154:22-139.178.68.195:36258.service: Deactivated successfully.
May 14 01:16:15.124609 systemd[1]: session-16.scope: Deactivated successfully.
May 14 01:16:15.125174 systemd-logind[2719]: Session 16 logged out. Waiting for processes to exit.
May 14 01:16:15.125754 systemd-logind[2719]: Removed session 16.
May 14 01:16:15.195954 systemd[1]: Started sshd@14-147.28.151.154:22-139.178.68.195:36274.service - OpenSSH per-connection server daemon (139.178.68.195:36274).
May 14 01:16:15.618324 sshd[9337]: Accepted publickey for core from 139.178.68.195 port 36274 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:15.619351 sshd-session[9337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:15.622426 systemd-logind[2719]: New session 17 of user core.
May 14 01:16:15.634078 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 01:16:16.055827 sshd[9339]: Connection closed by 139.178.68.195 port 36274
May 14 01:16:16.056213 sshd-session[9337]: pam_unix(sshd:session): session closed for user core
May 14 01:16:16.059253 systemd[1]: sshd@14-147.28.151.154:22-139.178.68.195:36274.service: Deactivated successfully.
May 14 01:16:16.061010 systemd[1]: session-17.scope: Deactivated successfully.
May 14 01:16:16.061560 systemd-logind[2719]: Session 17 logged out. Waiting for processes to exit.
May 14 01:16:16.062143 systemd-logind[2719]: Removed session 17.
May 14 01:16:16.138100 systemd[1]: Started sshd@15-147.28.151.154:22-139.178.68.195:36284.service - OpenSSH per-connection server daemon (139.178.68.195:36284).
May 14 01:16:16.551907 sshd[9367]: Accepted publickey for core from 139.178.68.195 port 36284 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:16.552913 sshd-session[9367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:16.556025 systemd-logind[2719]: New session 18 of user core.
May 14 01:16:16.565078 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 01:16:17.904178 sshd[9369]: Connection closed by 139.178.68.195 port 36284
May 14 01:16:17.904561 sshd-session[9367]: pam_unix(sshd:session): session closed for user core
May 14 01:16:17.907468 systemd[1]: sshd@15-147.28.151.154:22-139.178.68.195:36284.service: Deactivated successfully.
May 14 01:16:17.909198 systemd[1]: session-18.scope: Deactivated successfully.
May 14 01:16:17.909430 systemd[1]: session-18.scope: Consumed 3.846s CPU time, 115.6M memory peak.
May 14 01:16:17.909777 systemd-logind[2719]: Session 18 logged out. Waiting for processes to exit.
May 14 01:16:17.910351 systemd-logind[2719]: Removed session 18.
May 14 01:16:17.975890 systemd[1]: Started sshd@16-147.28.151.154:22-139.178.68.195:36300.service - OpenSSH per-connection server daemon (139.178.68.195:36300).
May 14 01:16:18.378242 sshd[9466]: Accepted publickey for core from 139.178.68.195 port 36300 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:18.379263 sshd-session[9466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:18.382252 systemd-logind[2719]: New session 19 of user core.
May 14 01:16:18.391130 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 01:16:18.820133 sshd[9468]: Connection closed by 139.178.68.195 port 36300
May 14 01:16:18.820480 sshd-session[9466]: pam_unix(sshd:session): session closed for user core
May 14 01:16:18.823380 systemd[1]: sshd@16-147.28.151.154:22-139.178.68.195:36300.service: Deactivated successfully.
May 14 01:16:18.825118 systemd[1]: session-19.scope: Deactivated successfully.
May 14 01:16:18.825644 systemd-logind[2719]: Session 19 logged out. Waiting for processes to exit.
May 14 01:16:18.826221 systemd-logind[2719]: Removed session 19.
May 14 01:16:18.890976 systemd[1]: Started sshd@17-147.28.151.154:22-139.178.68.195:36310.service - OpenSSH per-connection server daemon (139.178.68.195:36310).
May 14 01:16:19.290849 sshd[9519]: Accepted publickey for core from 139.178.68.195 port 36310 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:19.292084 sshd-session[9519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:19.295295 systemd-logind[2719]: New session 20 of user core.
May 14 01:16:19.318141 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 01:16:19.632270 sshd[9521]: Connection closed by 139.178.68.195 port 36310
May 14 01:16:19.632551 sshd-session[9519]: pam_unix(sshd:session): session closed for user core
May 14 01:16:19.635425 systemd[1]: sshd@17-147.28.151.154:22-139.178.68.195:36310.service: Deactivated successfully.
May 14 01:16:19.638556 systemd[1]: session-20.scope: Deactivated successfully.
May 14 01:16:19.639132 systemd-logind[2719]: Session 20 logged out. Waiting for processes to exit.
May 14 01:16:19.639698 systemd-logind[2719]: Removed session 20.
May 14 01:16:24.715989 systemd[1]: Started sshd@18-147.28.151.154:22-139.178.68.195:50788.service - OpenSSH per-connection server daemon (139.178.68.195:50788).
May 14 01:16:25.144532 sshd[9558]: Accepted publickey for core from 139.178.68.195 port 50788 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:25.145472 sshd-session[9558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:25.148407 systemd-logind[2719]: New session 21 of user core.
May 14 01:16:25.161125 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 01:16:25.496998 sshd[9560]: Connection closed by 139.178.68.195 port 50788
May 14 01:16:25.497394 sshd-session[9558]: pam_unix(sshd:session): session closed for user core
May 14 01:16:25.500236 systemd[1]: sshd@18-147.28.151.154:22-139.178.68.195:50788.service: Deactivated successfully.
May 14 01:16:25.502506 systemd[1]: session-21.scope: Deactivated successfully.
May 14 01:16:25.503100 systemd-logind[2719]: Session 21 logged out. Waiting for processes to exit.
May 14 01:16:25.503660 systemd-logind[2719]: Removed session 21.
May 14 01:16:27.388460 containerd[2738]: time="2025-05-14T01:16:27.388424831Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81e71bf614c1239d8ffce3b289153fa3127c1a9b6aca8677803bdf6692520586\" id:\"9cac3267582c4d8914805e3ff4f459e8002c858ad26ea9ce52c6842e45f69e61\" pid:9614 exited_at:{seconds:1747185387 nanos:388240552}"
May 14 01:16:30.567022 systemd[1]: Started sshd@19-147.28.151.154:22-139.178.68.195:50794.service - OpenSSH per-connection server daemon (139.178.68.195:50794).
May 14 01:16:30.969844 sshd[9626]: Accepted publickey for core from 139.178.68.195 port 50794 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:30.970974 sshd-session[9626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:30.974110 systemd-logind[2719]: New session 22 of user core.
May 14 01:16:30.988073 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 01:16:31.311865 sshd[9628]: Connection closed by 139.178.68.195 port 50794
May 14 01:16:31.312209 sshd-session[9626]: pam_unix(sshd:session): session closed for user core
May 14 01:16:31.315098 systemd[1]: sshd@19-147.28.151.154:22-139.178.68.195:50794.service: Deactivated successfully.
May 14 01:16:31.316786 systemd[1]: session-22.scope: Deactivated successfully.
May 14 01:16:31.317373 systemd-logind[2719]: Session 22 logged out. Waiting for processes to exit.
May 14 01:16:31.317948 systemd-logind[2719]: Removed session 22.
May 14 01:16:36.386933 systemd[1]: Started sshd@20-147.28.151.154:22-139.178.68.195:59184.service - OpenSSH per-connection server daemon (139.178.68.195:59184).
May 14 01:16:36.809873 sshd[9664]: Accepted publickey for core from 139.178.68.195 port 59184 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 01:16:36.810904 sshd-session[9664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:16:36.813929 systemd-logind[2719]: New session 23 of user core.
May 14 01:16:36.826119 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 01:16:37.162307 sshd[9666]: Connection closed by 139.178.68.195 port 59184
May 14 01:16:37.162750 sshd-session[9664]: pam_unix(sshd:session): session closed for user core
May 14 01:16:37.166457 systemd[1]: sshd@20-147.28.151.154:22-139.178.68.195:59184.service: Deactivated successfully.
May 14 01:16:37.169053 systemd[1]: session-23.scope: Deactivated successfully.
May 14 01:16:37.169690 systemd-logind[2719]: Session 23 logged out. Waiting for processes to exit.
May 14 01:16:37.170303 systemd-logind[2719]: Removed session 23.
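The tail of the log settles into a repeating pattern: systemd starts a per-connection sshd@N-...:22-139.178.68.195:PORT unit, sshd accepts the public key for user core, systemd-logind opens session N, the client disconnects within a second or two, and the scope and service are deactivated. To check how long each of these sessions actually stayed open, the open/close records can be paired by session number. The sketch below is illustrative only: it assumes journal text in the format shown (short timestamps without a year, so 2025 is filled in), and the helper names are not any standard tooling.

```python
#!/usr/bin/env python3
# Pair systemd-logind "New session N" / "Removed session N" lines to estimate
# how long each SSH session in a journal dump like the one above lasted.
# Assumption: lines carry the "May 14 HH:MM:SS.ffffff" prefix seen here; the
# short journal format omits the year, so 2025 is assumed when parsing.
import re
import sys
from datetime import datetime

TS = r'(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+)'
OPENED = re.compile(TS + r'.*systemd-logind\[\d+\]: New session (?P<id>\d+) of user')
CLOSED = re.compile(TS + r'.*systemd-logind\[\d+\]: Removed session (?P<id>\d+)\.')

def parse_ts(text):
    # Prepend the assumed year so the short journal timestamp parses cleanly.
    return datetime.strptime(f"2025 {text}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := OPENED.search(line):
            opened[m.group("id")] = parse_ts(m.group("ts"))
        elif (m := CLOSED.search(line)) and m.group("id") in opened:
            durations[m.group("id")] = parse_ts(m.group("ts")) - opened.pop(m.group("id"))
    return durations

if __name__ == "__main__":
    for sid, delta in session_durations(sys.stdin).items():
        print(f"session {sid}: {delta.total_seconds():.1f}s")
```

Applied to the sessions above (10 through 23), this would show most of them lasting well under a second, which is consistent with a health probe or automation logging in and immediately disconnecting rather than an interactive login.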