May 13 23:49:37.164366 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 13 23:49:37.164389 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 22:16:18 -00 2025
May 13 23:49:37.164397 kernel: KASLR enabled
May 13 23:49:37.164403 kernel: efi: EFI v2.7 by American Megatrends
May 13 23:49:37.164408 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea465818 RNG=0xebf10018 MEMRESERVE=0xe476de18
May 13 23:49:37.164413 kernel: random: crng init done
May 13 23:49:37.164420 kernel: secureboot: Secure boot disabled
May 13 23:49:37.164426 kernel: esrt: Reserving ESRT space from 0x00000000ea465818 to 0x00000000ea465878.
May 13 23:49:37.164433 kernel: ACPI: Early table checksum verification disabled
May 13 23:49:37.164439 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 13 23:49:37.164445 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 13 23:49:37.164451 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 13 23:49:37.164456 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 13 23:49:37.164462 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 13 23:49:37.164470 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 13 23:49:37.164476 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 13 23:49:37.164483 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 13 23:49:37.164489 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 13 23:49:37.164495 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 13 23:49:37.164501 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 13 23:49:37.164507 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 13 23:49:37.164513 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 13 23:49:37.164518 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 13 23:49:37.164524 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 13 23:49:37.164532 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 13 23:49:37.164538 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 13 23:49:37.164544 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 13 23:49:37.164550 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 13 23:49:37.164556 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 13 23:49:37.164562 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 13 23:49:37.164567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 13 23:49:37.164573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 13 23:49:37.164580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 13 23:49:37.164585 kernel: NUMA: NODE_DATA [mem 0x83fdffcd800-0x83fdffd2fff]
May 13 23:49:37.164591 kernel: Zone ranges:
May 13 23:49:37.164599 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 13 23:49:37.164605 kernel: DMA32 empty
May 13 23:49:37.164611 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 13 23:49:37.164617 kernel: Movable zone start for each node
May 13 23:49:37.164623 kernel: Early memory node ranges
May 13 23:49:37.164631 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 13 23:49:37.164638 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 13 23:49:37.164646 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 13 23:49:37.164652 kernel: node 0: [mem 0x0000000094000000-0x00000000eba36fff]
May 13 23:49:37.164658 kernel: node 0: [mem 0x00000000eba37000-0x00000000ebeadfff]
May 13 23:49:37.164665 kernel: node 0: [mem 0x00000000ebeae000-0x00000000ebeaefff]
May 13 23:49:37.164671 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff]
May 13 23:49:37.164677 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 13 23:49:37.164684 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 13 23:49:37.164690 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 13 23:49:37.164696 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 13 23:49:37.164702 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
May 13 23:49:37.164710 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
May 13 23:49:37.164717 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 13 23:49:37.164723 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 13 23:49:37.164729 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 13 23:49:37.164736 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 13 23:49:37.164742 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 13 23:49:37.164748 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 13 23:49:37.164755 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 13 23:49:37.164761 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 13 23:49:37.164767 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 13 23:49:37.164774 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 13 23:49:37.164781 kernel: psci: probing for conduit method from ACPI.
May 13 23:49:37.164788 kernel: psci: PSCIv1.1 detected in firmware.
May 13 23:49:37.164794 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 23:49:37.164800 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 13 23:49:37.164807 kernel: psci: SMC Calling Convention v1.2
May 13 23:49:37.164813 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 13 23:49:37.164819 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 13 23:49:37.164826 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 13 23:49:37.164832 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 13 23:49:37.164838 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 13 23:49:37.164845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 13 23:49:37.164851 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 13 23:49:37.164859 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 13 23:49:37.164865 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 13 23:49:37.164872 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 13 23:49:37.164878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 13 23:49:37.164884 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 13 23:49:37.164890 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 13 23:49:37.164897 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 13 23:49:37.164903 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 13 23:49:37.164909 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 13 23:49:37.164915 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 13 23:49:37.164922 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 13 23:49:37.164928 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 13 23:49:37.164936 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 13 23:49:37.164942 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 13 23:49:37.164948 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 13 23:49:37.164979 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 13 23:49:37.164985 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 13 23:49:37.164991 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 13 23:49:37.164998 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 13 23:49:37.165004 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 13 23:49:37.165010 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 13 23:49:37.165016 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 13 23:49:37.165023 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 13 23:49:37.165031 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 13 23:49:37.165037 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 13 23:49:37.165043 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 13 23:49:37.165050 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 13 23:49:37.165056 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 13 23:49:37.165063 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 13 23:49:37.165069 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 13 23:49:37.165075 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 13 23:49:37.165082 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 13 23:49:37.165088 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 13 23:49:37.165094 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 13 23:49:37.165100 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 13 23:49:37.165108 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 13 23:49:37.165115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 13 23:49:37.165121 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 13 23:49:37.165127 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 13 23:49:37.165133 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 13 23:49:37.165140 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 13 23:49:37.165146 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 13 23:49:37.165152 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 13 23:49:37.165165 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 13 23:49:37.165172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 13 23:49:37.165180 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 13 23:49:37.165187 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 13 23:49:37.165194 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 13 23:49:37.165201 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 13 23:49:37.165207 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 13 23:49:37.165214 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 13 23:49:37.165222 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 13 23:49:37.165229 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 13 23:49:37.165235 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 13 23:49:37.165242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 13 23:49:37.165249 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 13 23:49:37.165255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 13 23:49:37.165262 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 13 23:49:37.165269 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 13 23:49:37.165275 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 13 23:49:37.165282 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 13 23:49:37.165288 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 13 23:49:37.165295 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 13 23:49:37.165303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 13 23:49:37.165310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 13 23:49:37.165317 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 13 23:49:37.165323 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 13 23:49:37.165330 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 13 23:49:37.165337 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 13 23:49:37.165343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 13 23:49:37.165350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 13 23:49:37.165357 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 13 23:49:37.165363 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 13 23:49:37.165370 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 23:49:37.165378 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 23:49:37.165386 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 13 23:49:37.165392 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 13 23:49:37.165399 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 13 23:49:37.165406 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 13 23:49:37.165412 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 13 23:49:37.165419 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 13 23:49:37.165425 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 13 23:49:37.165432 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 13 23:49:37.165439 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 13 23:49:37.165445 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 13 23:49:37.165453 kernel: Detected PIPT I-cache on CPU0
May 13 23:49:37.165460 kernel: CPU features: detected: GIC system register CPU interface
May 13 23:49:37.165467 kernel: CPU features: detected: Virtualization Host Extensions
May 13 23:49:37.165474 kernel: CPU features: detected: Hardware dirty bit management
May 13 23:49:37.165480 kernel: CPU features: detected: Spectre-v4
May 13 23:49:37.165487 kernel: CPU features: detected: Spectre-BHB
May 13 23:49:37.165494 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 23:49:37.165500 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 23:49:37.165507 kernel: CPU features: detected: ARM erratum 1418040
May 13 23:49:37.165514 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 23:49:37.165521 kernel: alternatives: applying boot alternatives
May 13 23:49:37.165529 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5
May 13 23:49:37.165537 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 23:49:37.165544 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 13 23:49:37.165551 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 13 23:49:37.165558 kernel: printk: log_buf_len min size: 262144 bytes
May 13 23:49:37.165564 kernel: printk: log_buf_len: 1048576 bytes
May 13 23:49:37.165571 kernel: printk: early log buf free: 249864(95%)
May 13 23:49:37.165578 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 13 23:49:37.165585 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 13 23:49:37.165591 kernel: Fallback order for Node 0: 0
May 13 23:49:37.165598 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 13 23:49:37.165606 kernel: Policy zone: Normal
May 13 23:49:37.165613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 23:49:37.165620 kernel: software IO TLB: area num 128.
May 13 23:49:37.165626 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 13 23:49:37.165634 kernel: Memory: 262923296K/268174336K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 5251040K reserved, 0K cma-reserved)
May 13 23:49:37.165641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 13 23:49:37.165647 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 23:49:37.165655 kernel: rcu: RCU event tracing is enabled.
May 13 23:49:37.165662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 13 23:49:37.165668 kernel: Trampoline variant of Tasks RCU enabled.
May 13 23:49:37.165675 kernel: Tracing variant of Tasks RCU enabled.
May 13 23:49:37.165682 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 23:49:37.165690 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 13 23:49:37.165697 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 23:49:37.165704 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 13 23:49:37.165711 kernel: GICv3: 672 SPIs implemented
May 13 23:49:37.165717 kernel: GICv3: 0 Extended SPIs implemented
May 13 23:49:37.165724 kernel: Root IRQ handler: gic_handle_irq
May 13 23:49:37.165731 kernel: GICv3: GICv3 features: 16 PPIs
May 13 23:49:37.165738 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 13 23:49:37.165744 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 13 23:49:37.165751 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 13 23:49:37.165757 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 13 23:49:37.165764 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 13 23:49:37.165772 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 13 23:49:37.165779 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 13 23:49:37.165786 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 13 23:49:37.165792 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 13 23:49:37.165799 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 13 23:49:37.165806 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000270000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165812 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000280000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165819 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 13 23:49:37.165826 kernel: ITS@0x0000100100060000: allocated 8192 Devices @800002a0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165833 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @800002b0000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165840 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 13 23:49:37.165848 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800002d0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165855 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800002e0000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165862 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 13 23:49:37.165868 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000300000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165875 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @80000310000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165882 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 13 23:49:37.165889 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000330000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165896 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000340000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165903 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 13 23:49:37.165910 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000360000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165916 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000370000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165925 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 13 23:49:37.165932 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000390000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165938 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @800003a0000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165945 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 13 23:49:37.165954 kernel: ITS@0x0000100100120000: allocated 8192 Devices @800003c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 23:49:37.165961 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800003d0000 (flat, esz 2, psz 64K, shr 1)
May 13 23:49:37.165968 kernel: GICv3: using LPI property table @0x00000800003e0000
May 13 23:49:37.165975 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800003f0000
May 13 23:49:37.165981 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 23:49:37.165988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.165995 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 13 23:49:37.166003 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 13 23:49:37.166011 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 23:49:37.166018 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 23:49:37.166024 kernel: Console: colour dummy device 80x25
May 13 23:49:37.166031 kernel: printk: console [tty0] enabled
May 13 23:49:37.166038 kernel: ACPI: Core revision 20230628
May 13 23:49:37.166045 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 23:49:37.166052 kernel: pid_max: default: 81920 minimum: 640
May 13 23:49:37.166059 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 23:49:37.166066 kernel: landlock: Up and running.
May 13 23:49:37.166074 kernel: SELinux: Initializing.
May 13 23:49:37.166081 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:49:37.166088 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 23:49:37.166095 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 13 23:49:37.166102 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
May 13 23:49:37.166109 kernel: rcu: Hierarchical SRCU implementation.
May 13 23:49:37.166116 kernel: rcu: Max phase no-delay instances is 400.
May 13 23:49:37.166123 kernel: Platform MSI: ITS@0x100100040000 domain created
May 13 23:49:37.166130 kernel: Platform MSI: ITS@0x100100060000 domain created
May 13 23:49:37.166138 kernel: Platform MSI: ITS@0x100100080000 domain created
May 13 23:49:37.166145 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 13 23:49:37.166152 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 13 23:49:37.166159 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 13 23:49:37.166165 kernel: Platform MSI: ITS@0x100100100000 domain created
May 13 23:49:37.166172 kernel: Platform MSI: ITS@0x100100120000 domain created
May 13 23:49:37.166179 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 13 23:49:37.166186 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 13 23:49:37.166193 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 13 23:49:37.166201 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 13 23:49:37.166208 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 13 23:49:37.166215 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 13 23:49:37.166221 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 13 23:49:37.166228 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 13 23:49:37.166235 kernel: Remapping and enabling EFI services.
May 13 23:49:37.166242 kernel: smp: Bringing up secondary CPUs ...
May 13 23:49:37.166249 kernel: Detected PIPT I-cache on CPU1
May 13 23:49:37.166255 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 13 23:49:37.166262 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000800000
May 13 23:49:37.166271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166278 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 13 23:49:37.166284 kernel: Detected PIPT I-cache on CPU2
May 13 23:49:37.166291 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 13 23:49:37.166298 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000080000810000
May 13 23:49:37.166305 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166312 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 13 23:49:37.166319 kernel: Detected PIPT I-cache on CPU3
May 13 23:49:37.166326 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 13 23:49:37.166334 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000080000820000
May 13 23:49:37.166341 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166348 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 13 23:49:37.166355 kernel: Detected PIPT I-cache on CPU4
May 13 23:49:37.166361 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 13 23:49:37.166368 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000830000
May 13 23:49:37.166375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166382 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 13 23:49:37.166389 kernel: Detected PIPT I-cache on CPU5
May 13 23:49:37.166396 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 13 23:49:37.166404 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000840000
May 13 23:49:37.166411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166418 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 13 23:49:37.166424 kernel: Detected PIPT I-cache on CPU6
May 13 23:49:37.166431 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 13 23:49:37.166438 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000850000
May 13 23:49:37.166445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166452 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 13 23:49:37.166459 kernel: Detected PIPT I-cache on CPU7
May 13 23:49:37.166467 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 13 23:49:37.166474 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000860000
May 13 23:49:37.166481 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166488 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 13 23:49:37.166494 kernel: Detected PIPT I-cache on CPU8
May 13 23:49:37.166501 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 13 23:49:37.166508 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000870000
May 13 23:49:37.166515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166521 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 13 23:49:37.166528 kernel: Detected PIPT I-cache on CPU9
May 13 23:49:37.166537 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 13 23:49:37.166544 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000880000
May 13 23:49:37.166551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166557 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 13 23:49:37.166564 kernel: Detected PIPT I-cache on CPU10
May 13 23:49:37.166571 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 13 23:49:37.166578 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000890000
May 13 23:49:37.166585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166591 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 13 23:49:37.166600 kernel: Detected PIPT I-cache on CPU11
May 13 23:49:37.166607 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 13 23:49:37.166614 kernel: GICv3: CPU11: using allocated LPI pending table @0x00000800008a0000
May 13 23:49:37.166620 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166627 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 13 23:49:37.166634 kernel: Detected PIPT I-cache on CPU12
May 13 23:49:37.166641 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 13 23:49:37.166648 kernel: GICv3: CPU12: using allocated LPI pending table @0x00000800008b0000
May 13 23:49:37.166654 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166661 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 13 23:49:37.166669 kernel: Detected PIPT I-cache on CPU13
May 13 23:49:37.166676 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 13 23:49:37.166683 kernel: GICv3: CPU13: using allocated LPI pending table @0x00000800008c0000
May 13 23:49:37.166690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166697 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 13 23:49:37.166704 kernel: Detected PIPT I-cache on CPU14
May 13 23:49:37.166711 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 13 23:49:37.166718 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800008d0000
May 13 23:49:37.166725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166733 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 13 23:49:37.166740 kernel: Detected PIPT I-cache on CPU15
May 13 23:49:37.166747 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 13 23:49:37.166754 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800008e0000
May 13 23:49:37.166760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166767 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 13 23:49:37.166774 kernel: Detected PIPT I-cache on CPU16
May 13 23:49:37.166781 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 13 23:49:37.166788 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800008f0000
May 13 23:49:37.166804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166813 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 13 23:49:37.166820 kernel: Detected PIPT I-cache on CPU17
May 13 23:49:37.166827 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 13 23:49:37.166834 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000900000
May 13 23:49:37.166841 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166848 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 13 23:49:37.166856 kernel: Detected PIPT I-cache on CPU18
May 13 23:49:37.166863 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 13 23:49:37.166870 kernel: GICv3: CPU18: using allocated LPI pending table @0x0000080000910000
May 13 23:49:37.166879 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166886 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 13 23:49:37.166893 kernel: Detected PIPT I-cache on CPU19
May 13 23:49:37.166900 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 13 23:49:37.166907 kernel: GICv3: CPU19: using allocated LPI pending table @0x0000080000920000
May 13 23:49:37.166915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166924 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 13 23:49:37.166931 kernel: Detected PIPT I-cache on CPU20
May 13 23:49:37.166939 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 13 23:49:37.166946 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000930000
May 13 23:49:37.166972 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.166980 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 13 23:49:37.166987 kernel: Detected PIPT I-cache on CPU21
May 13 23:49:37.166994 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 13 23:49:37.167001 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000940000
May 13 23:49:37.167010 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167017 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 13 23:49:37.167025 kernel: Detected PIPT I-cache on CPU22
May 13 23:49:37.167032 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 13 23:49:37.167039 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000950000
May 13 23:49:37.167046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167053 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 13 23:49:37.167060 kernel: Detected PIPT I-cache on CPU23
May 13 23:49:37.167068 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 13 23:49:37.167075 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000960000
May 13 23:49:37.167084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167091 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 13 23:49:37.167099 kernel: Detected PIPT I-cache on CPU24
May 13 23:49:37.167106 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 13 23:49:37.167113 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000970000
May 13 23:49:37.167120 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167128 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 13 23:49:37.167135 kernel: Detected PIPT I-cache on CPU25
May 13 23:49:37.167142 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 13 23:49:37.167151 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000980000
May 13 23:49:37.167160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167168 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 13 23:49:37.167175 kernel: Detected PIPT I-cache on CPU26
May 13 23:49:37.167183 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 13 23:49:37.167190 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000990000
May 13 23:49:37.167197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167205 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 13 23:49:37.167212 kernel: Detected PIPT I-cache on CPU27
May 13 23:49:37.167220 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 13 23:49:37.167228 kernel: GICv3: CPU27: using allocated LPI pending table @0x00000800009a0000
May 13 23:49:37.167235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 23:49:37.167242 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 13
23:49:37.167249 kernel: Detected PIPT I-cache on CPU28 May 13 23:49:37.167257 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 May 13 23:49:37.167264 kernel: GICv3: CPU28: using allocated LPI pending table @0x00000800009b0000 May 13 23:49:37.167271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167278 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] May 13 23:49:37.167285 kernel: Detected PIPT I-cache on CPU29 May 13 23:49:37.167294 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 May 13 23:49:37.167301 kernel: GICv3: CPU29: using allocated LPI pending table @0x00000800009c0000 May 13 23:49:37.167308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167315 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] May 13 23:49:37.167322 kernel: Detected PIPT I-cache on CPU30 May 13 23:49:37.167330 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 May 13 23:49:37.167337 kernel: GICv3: CPU30: using allocated LPI pending table @0x00000800009d0000 May 13 23:49:37.167344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167351 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] May 13 23:49:37.167360 kernel: Detected PIPT I-cache on CPU31 May 13 23:49:37.167367 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 May 13 23:49:37.167374 kernel: GICv3: CPU31: using allocated LPI pending table @0x00000800009e0000 May 13 23:49:37.167382 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167389 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] May 13 23:49:37.167396 kernel: Detected PIPT I-cache on CPU32 May 13 23:49:37.167403 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 May 13 23:49:37.167410 kernel: GICv3: CPU32: using allocated LPI 
pending table @0x00000800009f0000 May 13 23:49:37.167417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167425 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 13 23:49:37.167433 kernel: Detected PIPT I-cache on CPU33 May 13 23:49:37.167440 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 13 23:49:37.167448 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000a00000 May 13 23:49:37.167455 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167462 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 13 23:49:37.167469 kernel: Detected PIPT I-cache on CPU34 May 13 23:49:37.167477 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 13 23:49:37.167484 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000a10000 May 13 23:49:37.167491 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167500 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 13 23:49:37.167507 kernel: Detected PIPT I-cache on CPU35 May 13 23:49:37.167514 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 13 23:49:37.167521 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000a20000 May 13 23:49:37.167529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167536 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 13 23:49:37.167543 kernel: Detected PIPT I-cache on CPU36 May 13 23:49:37.167550 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 13 23:49:37.167558 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000a30000 May 13 23:49:37.167565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167574 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
13 23:49:37.167581 kernel: Detected PIPT I-cache on CPU37 May 13 23:49:37.167588 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 13 23:49:37.167595 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000a40000 May 13 23:49:37.167602 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167610 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 13 23:49:37.167617 kernel: Detected PIPT I-cache on CPU38 May 13 23:49:37.167624 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 13 23:49:37.167632 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000a50000 May 13 23:49:37.167641 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167648 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 13 23:49:37.167655 kernel: Detected PIPT I-cache on CPU39 May 13 23:49:37.167664 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 13 23:49:37.167671 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000a60000 May 13 23:49:37.167678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167686 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 13 23:49:37.167693 kernel: Detected PIPT I-cache on CPU40 May 13 23:49:37.167702 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 13 23:49:37.167709 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a70000 May 13 23:49:37.167716 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167723 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 13 23:49:37.167731 kernel: Detected PIPT I-cache on CPU41 May 13 23:49:37.167738 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 13 23:49:37.167745 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a80000 May 13 23:49:37.167752 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167759 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 13 23:49:37.167766 kernel: Detected PIPT I-cache on CPU42 May 13 23:49:37.167775 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 13 23:49:37.167782 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a90000 May 13 23:49:37.167789 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167797 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 13 23:49:37.167804 kernel: Detected PIPT I-cache on CPU43 May 13 23:49:37.167811 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 13 23:49:37.167818 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000aa0000 May 13 23:49:37.167825 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167833 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 13 23:49:37.167841 kernel: Detected PIPT I-cache on CPU44 May 13 23:49:37.167848 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 13 23:49:37.167856 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000ab0000 May 13 23:49:37.167863 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167870 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 13 23:49:37.167877 kernel: Detected PIPT I-cache on CPU45 May 13 23:49:37.167884 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 13 23:49:37.167892 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000ac0000 May 13 23:49:37.167899 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167906 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 13 23:49:37.167914 kernel: Detected PIPT I-cache on CPU46 May 13 23:49:37.167921 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 13 23:49:37.167929 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ad0000 May 13 23:49:37.167936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167943 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 13 23:49:37.167952 kernel: Detected PIPT I-cache on CPU47 May 13 23:49:37.167960 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 13 23:49:37.167967 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000ae0000 May 13 23:49:37.167974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.167983 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 13 23:49:37.167990 kernel: Detected PIPT I-cache on CPU48 May 13 23:49:37.167997 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 13 23:49:37.168005 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000af0000 May 13 23:49:37.168012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168019 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 13 23:49:37.168026 kernel: Detected PIPT I-cache on CPU49 May 13 23:49:37.168033 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 13 23:49:37.168040 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000b00000 May 13 23:49:37.168049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168056 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 13 23:49:37.168063 kernel: Detected PIPT I-cache on CPU50 May 13 23:49:37.168070 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 13 23:49:37.168078 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000b10000 May 13 23:49:37.168085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168092 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 13 23:49:37.168100 kernel: Detected PIPT I-cache on CPU51 May 13 23:49:37.168108 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 13 23:49:37.168115 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000b20000 May 13 23:49:37.168124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168131 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 13 23:49:37.168138 kernel: Detected PIPT I-cache on CPU52 May 13 23:49:37.168145 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 13 23:49:37.168153 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000b30000 May 13 23:49:37.168160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168167 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 13 23:49:37.168174 kernel: Detected PIPT I-cache on CPU53 May 13 23:49:37.168181 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 13 23:49:37.168190 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000b40000 May 13 23:49:37.168197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168204 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 13 23:49:37.168211 kernel: Detected PIPT I-cache on CPU54 May 13 23:49:37.168218 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 13 23:49:37.168226 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000b50000 May 13 23:49:37.168233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168240 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 13 23:49:37.168247 kernel: Detected PIPT I-cache on CPU55 May 13 23:49:37.168254 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 13 23:49:37.168263 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000b60000 May 13 23:49:37.168271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168278 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 13 23:49:37.168285 kernel: Detected PIPT I-cache on CPU56 May 13 23:49:37.168292 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 13 23:49:37.168299 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b70000 May 13 23:49:37.168306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168313 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 13 23:49:37.168321 kernel: Detected PIPT I-cache on CPU57 May 13 23:49:37.168329 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 13 23:49:37.168336 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b80000 May 13 23:49:37.168344 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168351 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 13 23:49:37.168358 kernel: Detected PIPT I-cache on CPU58 May 13 23:49:37.168365 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 13 23:49:37.168372 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b90000 May 13 23:49:37.168379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168386 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 13 23:49:37.168394 kernel: Detected PIPT I-cache on CPU59 May 13 23:49:37.168402 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 13 23:49:37.168410 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000ba0000 May 13 23:49:37.168417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168424 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 13 23:49:37.168431 kernel: Detected PIPT I-cache on CPU60 May 13 23:49:37.168438 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 13 23:49:37.168445 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000bb0000 May 13 23:49:37.168453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168460 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 13 23:49:37.168468 kernel: Detected PIPT I-cache on CPU61 May 13 23:49:37.168476 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 13 23:49:37.168483 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000bc0000 May 13 23:49:37.168490 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168497 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 13 23:49:37.168504 kernel: Detected PIPT I-cache on CPU62 May 13 23:49:37.168511 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 13 23:49:37.168519 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000bd0000 May 13 23:49:37.168526 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168533 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 13 23:49:37.168542 kernel: Detected PIPT I-cache on CPU63 May 13 23:49:37.168549 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 13 23:49:37.168556 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000be0000 May 13 23:49:37.168563 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168570 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 13 23:49:37.168578 kernel: Detected PIPT I-cache on CPU64 May 13 23:49:37.168585 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 13 23:49:37.168592 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000bf0000 May 13 23:49:37.168599 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168608 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 13 23:49:37.168615 kernel: Detected PIPT I-cache on CPU65 May 13 23:49:37.168623 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 13 23:49:37.168630 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000c00000 May 13 23:49:37.168637 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168644 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 13 23:49:37.168651 kernel: Detected PIPT I-cache on CPU66 May 13 23:49:37.168659 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 13 23:49:37.168666 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000c10000 May 13 23:49:37.168674 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168682 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 13 23:49:37.168689 kernel: Detected PIPT I-cache on CPU67 May 13 23:49:37.168696 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 13 23:49:37.168703 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000c20000 May 13 23:49:37.168711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168718 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 13 23:49:37.168725 kernel: Detected PIPT I-cache on CPU68 May 13 23:49:37.168732 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 13 23:49:37.168739 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000c30000 May 13 23:49:37.168748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168755 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 13 23:49:37.168762 kernel: Detected PIPT I-cache on CPU69 May 13 23:49:37.168770 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 13 23:49:37.168777 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000c40000 May 13 23:49:37.168784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168791 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 13 23:49:37.168798 kernel: Detected PIPT I-cache on CPU70 May 13 23:49:37.168806 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 13 23:49:37.168814 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000c50000 May 13 23:49:37.168822 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168829 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 13 23:49:37.168836 kernel: Detected PIPT I-cache on CPU71 May 13 23:49:37.168843 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 13 23:49:37.168850 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000c60000 May 13 23:49:37.168858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168865 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 13 23:49:37.168872 kernel: Detected PIPT I-cache on CPU72 May 13 23:49:37.168879 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 13 23:49:37.168888 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c70000 May 13 23:49:37.168895 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168902 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 13 23:49:37.168909 kernel: Detected PIPT I-cache on CPU73 May 13 23:49:37.168917 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 13 23:49:37.168924 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c80000 May 13 23:49:37.168931 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168938 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 13 23:49:37.168945 kernel: Detected PIPT I-cache on CPU74 May 13 23:49:37.168956 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 13 23:49:37.168964 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c90000 May 13 23:49:37.168971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.168978 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 13 23:49:37.168985 kernel: Detected PIPT I-cache on CPU75 May 13 23:49:37.168992 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 13 23:49:37.169000 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000ca0000 May 13 23:49:37.169007 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.169014 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 13 23:49:37.169021 kernel: Detected PIPT I-cache on CPU76 May 13 23:49:37.169030 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 13 23:49:37.169038 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000cb0000 May 13 23:49:37.169045 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.169052 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 13 23:49:37.169059 kernel: Detected PIPT I-cache on CPU77 May 13 23:49:37.169066 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 13 23:49:37.169074 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000cc0000 May 13 23:49:37.169081 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.169088 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 13 23:49:37.169097 kernel: Detected PIPT I-cache on CPU78 May 13 23:49:37.169104 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 13 23:49:37.169111 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000cd0000 May 13 23:49:37.169118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.169125 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 13 23:49:37.169133 kernel: Detected PIPT I-cache on CPU79 May 13 23:49:37.169140 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 13 23:49:37.169147 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000ce0000 May 13 23:49:37.169155 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:49:37.169162 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 13 23:49:37.169171 kernel: smp: Brought up 1 node, 80 CPUs May 13 23:49:37.169178 kernel: SMP: Total of 80 processors activated. 
May 13 23:49:37.169185 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:49:37.169193 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:49:37.169200 kernel: CPU features: detected: Common not Private translations May 13 23:49:37.169207 kernel: CPU features: detected: CRC32 instructions May 13 23:49:37.169214 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:49:37.169221 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:49:37.169229 kernel: CPU features: detected: LSE atomic instructions May 13 23:49:37.169237 kernel: CPU features: detected: Privileged Access Never May 13 23:49:37.169244 kernel: CPU features: detected: RAS Extension Support May 13 23:49:37.169252 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:49:37.169259 kernel: CPU: All CPU(s) started at EL2 May 13 23:49:37.169266 kernel: alternatives: applying system-wide alternatives May 13 23:49:37.169273 kernel: devtmpfs: initialized May 13 23:49:37.169280 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:49:37.169288 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 23:49:37.169295 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:49:37.169304 kernel: SMBIOS 3.4.0 present. 
May 13 23:49:37.169311 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
May 13 23:49:37.169318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 23:49:37.169325 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
May 13 23:49:37.169333 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 23:49:37.169340 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 23:49:37.169347 kernel: audit: initializing netlink subsys (disabled)
May 13 23:49:37.169355 kernel: audit: type=2000 audit(0.042:1): state=initialized audit_enabled=0 res=1
May 13 23:49:37.169364 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 23:49:37.169371 kernel: cpuidle: using governor menu
May 13 23:49:37.169378 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 23:49:37.169385 kernel: ASID allocator initialised with 32768 entries
May 13 23:49:37.169392 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 23:49:37.169399 kernel: Serial: AMBA PL011 UART driver
May 13 23:49:37.169407 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 23:49:37.169414 kernel: Modules: 0 pages in range for non-PLT usage
May 13 23:49:37.169421 kernel: Modules: 509232 pages in range for PLT usage
May 13 23:49:37.169428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 23:49:37.169437 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 23:49:37.169444 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 23:49:37.169451 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 23:49:37.169459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 23:49:37.169466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 23:49:37.169474 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 23:49:37.169481 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 23:49:37.169488 kernel: ACPI: Added _OSI(Module Device)
May 13 23:49:37.169495 kernel: ACPI: Added _OSI(Processor Device)
May 13 23:49:37.169504 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 23:49:37.169511 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 23:49:37.169518 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
May 13 23:49:37.169525 kernel: ACPI: Interpreter enabled
May 13 23:49:37.169532 kernel: ACPI: Using GIC for interrupt routing
May 13 23:49:37.169539 kernel: ACPI: MCFG table detected, 8 entries
May 13 23:49:37.169547 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169554 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169561 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169570 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169578 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169585 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169592 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169599 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
May 13 23:49:37.169606 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
May 13 23:49:37.169614 kernel: printk: console [ttyAMA0] enabled
May 13 23:49:37.169621 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
May 13 23:49:37.169630 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
May 13 23:49:37.169765 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:49:37.169835 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
May 13 23:49:37.169896 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
May 13 23:49:37.169959 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 13 23:49:37.170019 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
May 13 23:49:37.170078 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
May 13 23:49:37.170090 kernel: PCI host bridge to bus 000d:00
May 13 23:49:37.170159 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
May 13 23:49:37.170214 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
May 13 23:49:37.170269 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
May 13 23:49:37.170346 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000
May 13 23:49:37.170418 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400
May 13 23:49:37.170484 kernel: pci 000d:00:01.0: enabling Extended Tags
May 13 23:49:37.170547 kernel: pci 000d:00:01.0: supports D1 D2
May 13 23:49:37.170608 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.170678 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400
May 13 23:49:37.170742 kernel: pci 000d:00:02.0: supports D1 D2
May 13 23:49:37.170803 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.170872 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400
May 13 23:49:37.170936 kernel: pci 000d:00:03.0: supports D1 D2
May 13 23:49:37.171009 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.171081 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400
May 13 23:49:37.171142 kernel: pci 000d:00:04.0: supports D1 D2
May 13 23:49:37.171204 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.171213 kernel: acpiphp: Slot [1] registered
May 13 23:49:37.171221 kernel: acpiphp: Slot [2] registered
May 13 23:49:37.171230 kernel: acpiphp: Slot [3] registered
May 13 23:49:37.171238 kernel: acpiphp: Slot [4] registered
May 13 23:49:37.171293 kernel: pci_bus 000d:00: on NUMA node 0
May 13 23:49:37.171356 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 13 23:49:37.171418 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.171481 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.171544 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 13 23:49:37.171606 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.171670 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.171731 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 13 23:49:37.171793 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 13 23:49:37.171854 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 13 23:49:37.171916 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 13 23:49:37.171984 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 13 23:49:37.172049 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 13 23:49:37.172117 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff]
May 13 23:49:37.172181 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref]
May 13 23:49:37.172244 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff]
May 13 23:49:37.172308 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 0x340000200000-0x3400003fffff 64bit pref]
May 13 23:49:37.172370 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff]
May 13 23:49:37.172431 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref]
May 13 23:49:37.172492 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff]
May 13 23:49:37.172556 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref]
May 13 23:49:37.172617 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.172678 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.172740 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.172802 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.172862 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.172924 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.172989 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.173053 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.173115 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.173176 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.173237 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.173298 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.173359 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.173420 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.173482 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.173545 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.173606 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
May 13 23:49:37.173668 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]
May 13 23:49:37.173729 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]
May 13 23:49:37.173791 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
May 13 23:49:37.173851 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]
May 13 23:49:37.173913 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]
May 13 23:49:37.173979 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
May 13 23:49:37.174042 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]
May 13 23:49:37.174104 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]
May 13 23:49:37.174164 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
May 13 23:49:37.174227 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]
May 13 23:49:37.174287 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]
May 13 23:49:37.174347 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window]
May 13 23:49:37.174401 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window]
May 13 23:49:37.174470 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff]
May 13 23:49:37.174527 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref]
May 13 23:49:37.174594 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff]
May 13 23:49:37.174654 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref]
May 13 23:49:37.174728 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff]
May 13 23:49:37.174787 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref]
May 13 23:49:37.174851 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff]
May 13 23:49:37.174909 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref]
May 13 23:49:37.174919 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff])
May 13 23:49:37.174989 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:49:37.175053 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR]
May 13 23:49:37.175113 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability]
May 13 23:49:37.175171 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 13 23:49:37.175231 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00
May 13 23:49:37.175289 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff]
May 13 23:49:37.175298 kernel: PCI host bridge to bus 0000:00
May 13 23:49:37.175363 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window]
May 13 23:49:37.175420 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window]
May 13 23:49:37.175476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 23:49:37.175545 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000
May 13 23:49:37.175615 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400
May 13 23:49:37.175678 kernel: pci 0000:00:01.0: enabling Extended Tags
May 13 23:49:37.175738 kernel: pci 0000:00:01.0: supports D1 D2
May 13 23:49:37.175800 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.175871 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400
May 13 23:49:37.175935 kernel: pci 0000:00:02.0: supports D1 D2
May 13 23:49:37.176000 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.176070 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400
May 13 23:49:37.176135 kernel: pci 0000:00:03.0: supports D1 D2
May 13 23:49:37.176196 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.176265 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400
May 13 23:49:37.176328 kernel: pci 0000:00:04.0: supports D1 D2
May 13 23:49:37.176390 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.176399 kernel: acpiphp: Slot [1-1] registered
May 13 23:49:37.176407 kernel: acpiphp: Slot [2-1] registered
May 13 23:49:37.176414 kernel: acpiphp: Slot [3-1] registered
May 13 23:49:37.176421 kernel: acpiphp: Slot [4-1] registered
May 13 23:49:37.176474 kernel: pci_bus 0000:00: on NUMA node 0
May 13 23:49:37.176537 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 13 23:49:37.176603 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.176664 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.176728 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 13 23:49:37.176791 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.176855 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.176916 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 13 23:49:37.176982 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 13 23:49:37.177044 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
May 13 23:49:37.177106 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 13 23:49:37.177167 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 13 23:49:37.177228 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 13 23:49:37.177290 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff]
May 13 23:49:37.177351 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 13 23:49:37.177412 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff]
May 13 23:49:37.177475 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 13 23:49:37.177536 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff]
May 13 23:49:37.177597 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 13 23:49:37.177657 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff]
May 13 23:49:37.177719 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 13 23:49:37.177778 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.177840 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.177901 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.177968 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178030 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178093 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178155 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178215 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178277 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178337 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178398 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178460 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178522 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178583 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178643 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.178704 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.178764 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
May 13 23:49:37.178826 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]
May 13 23:49:37.178886 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 13 23:49:37.178953 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
May 13 23:49:37.179014 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]
May 13 23:49:37.179076 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 13 23:49:37.179138 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
May 13 23:49:37.179200 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]
May 13 23:49:37.179262 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 13 23:49:37.179323 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
May 13 23:49:37.179386 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]
May 13 23:49:37.179447 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 13 23:49:37.179504 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window]
May 13 23:49:37.179561 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window]
May 13 23:49:37.179627 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff]
May 13 23:49:37.179685 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
May 13 23:49:37.179750 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff]
May 13 23:49:37.179808 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
May 13 23:49:37.179881 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff]
May 13 23:49:37.179942 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
May 13 23:49:37.180010 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff]
May 13 23:49:37.180068 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
May 13 23:49:37.180077 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff])
May 13 23:49:37.180145 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:49:37.180205 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR]
May 13 23:49:37.180268 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability]
May 13 23:49:37.180327 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 13 23:49:37.180387 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00
May 13 23:49:37.180446 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff]
May 13 23:49:37.180455 kernel: PCI host bridge to bus 0005:00
May 13 23:49:37.180516 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window]
May 13 23:49:37.180572 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window]
May 13 23:49:37.180628 kernel: pci_bus 0005:00: root bus resource [bus 00-ff]
May 13 23:49:37.180700 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000
May 13 23:49:37.180769 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400
May 13 23:49:37.180832 kernel: pci 0005:00:01.0: supports D1 D2
May 13 23:49:37.180892 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.180964 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400
May 13 23:49:37.181026 kernel: pci 0005:00:03.0: supports D1 D2
May 13 23:49:37.181091 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.181159 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400
May 13 23:49:37.181220 kernel: pci 0005:00:05.0: supports D1 D2
May 13 23:49:37.181281 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.181349 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400
May 13 23:49:37.181411 kernel: pci 0005:00:07.0: supports D1 D2
May 13 23:49:37.181471 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.181483 kernel: acpiphp: Slot [1-2] registered
May 13 23:49:37.181490 kernel: acpiphp: Slot [2-2] registered
May 13 23:49:37.181564 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802
May 13 23:49:37.181629 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit]
May 13 23:49:37.181692 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref]
May 13 23:49:37.181763 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802
May 13 23:49:37.181826 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit]
May 13 23:49:37.181892 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref]
May 13 23:49:37.181952 kernel: pci_bus 0005:00: on NUMA node 0
May 13 23:49:37.182017 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 13 23:49:37.182080 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.182141 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.182203 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 13 23:49:37.182264 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.182328 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.182390 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 13 23:49:37.182452 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 13 23:49:37.182513 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 13 23:49:37.182574 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 13 23:49:37.182637 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 13 23:49:37.182699 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000
May 13 23:49:37.182760 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff]
May 13 23:49:37.182821 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 13 23:49:37.182883 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff]
May 13 23:49:37.182944 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 13 23:49:37.183008 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff]
May 13 23:49:37.183070 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 13 23:49:37.183131 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff]
May 13 23:49:37.183195 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 13 23:49:37.183255 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183317 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183376 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183439 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183500 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183561 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183622 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183685 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183746 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183807 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183868 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.183929 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.183992 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.184054 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.184115 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.184177 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.184239 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
May 13 23:49:37.184301 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]
May 13 23:49:37.184362 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 13 23:49:37.184423 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
May 13 23:49:37.184484 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]
May 13 23:49:37.184545 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 13 23:49:37.184613 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref]
May 13 23:49:37.184675 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit]
May 13 23:49:37.184737 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
May 13 23:49:37.184798 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]
May 13 23:49:37.184860 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 13 23:49:37.184926 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref]
May 13 23:49:37.184992 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit]
May 13 23:49:37.185056 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
May 13 23:49:37.185116 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]
May 13 23:49:37.185179 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 13 23:49:37.185236 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window]
May 13 23:49:37.185291 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window]
May 13 23:49:37.185358 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff]
May 13 23:49:37.185415 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
May 13 23:49:37.185492 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff]
May 13 23:49:37.185550 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
May 13 23:49:37.185614 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff]
May 13 23:49:37.185671 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
May 13 23:49:37.185736 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff]
May 13 23:49:37.185795 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
May 13 23:49:37.185804 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff])
May 13 23:49:37.185871 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:49:37.185932 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR]
May 13 23:49:37.185995 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability]
May 13 23:49:37.186054 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
May 13 23:49:37.186113 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
May 13 23:49:37.186174 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
May 13 23:49:37.186184 kernel: PCI host bridge to bus 0003:00
May 13 23:49:37.186247 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
May 13 23:49:37.186302 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
May 13 23:49:37.186356 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
May 13 23:49:37.186427 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000
May 13 23:49:37.186498 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400
May 13 23:49:37.186563 kernel: pci 0003:00:01.0: supports D1 D2
May 13 23:49:37.186627 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.186694 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400
May 13 23:49:37.186758 kernel: pci 0003:00:03.0: supports D1 D2
May 13 23:49:37.186822 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.186891 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400
May 13 23:49:37.186961 kernel: pci 0003:00:05.0: supports D1 D2
May 13 23:49:37.187024 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
May 13 23:49:37.187034 kernel: acpiphp: Slot [1-3] registered
May 13 23:49:37.187041 kernel: acpiphp: Slot [2-3] registered
May 13 23:49:37.187112 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000
May 13 23:49:37.187176 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff]
May 13 23:49:37.187240 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f]
May 13 23:49:37.187304 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff]
May 13 23:49:37.187368 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold
May 13 23:49:37.187431 kernel: pci 0003:03:00.0: reg 0x184: [mem 0x240000060000-0x240000063fff 64bit pref]
May 13 23:49:37.187493 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs)
May 13 23:49:37.187557 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref]
May 13 23:49:37.187620 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs)
May 13 23:49:37.187685 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
May 13 23:49:37.187756 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000
May 13 23:49:37.187824 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff]
May 13 23:49:37.187887 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f]
May 13 23:49:37.187994 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff]
May 13 23:49:37.188069 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold
May 13 23:49:37.188135 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref]
May 13 23:49:37.188198 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs)
May 13 23:49:37.188262 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref]
May 13 23:49:37.188328 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs)
May 13 23:49:37.188386 kernel: pci_bus 0003:00: on NUMA node 0
May 13 23:49:37.188453 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 13 23:49:37.188515 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.188578 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
May 13 23:49:37.188640 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 13 23:49:37.188702 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.188767 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
May 13 23:49:37.188832 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
May 13 23:49:37.188894 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
May 13 23:49:37.188961 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 13 23:49:37.189036 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref]
May 13 23:49:37.189100 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 13 23:49:37.189162 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref]
May 13 23:49:37.189224 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 13 23:49:37.189289 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref]
May 13 23:49:37.189350 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.189412 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.189475 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.189535 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.189598 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.189659 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.189722 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.189786 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.189849 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.189910 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.189976 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000]
May 13 23:49:37.190039 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 13 23:49:37.190100 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
May 13 23:49:37.190162 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
May 13 23:49:37.190224 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
May 13 23:49:37.190289 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
May 13 23:49:37.190352 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
May 13 23:49:37.190414 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
May 13 23:49:37.190493 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff]
May 13 23:49:37.190558 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff]
May 13 23:49:37.190621 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff]
May 13 23:49:37.190688 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref]
May 13 23:49:37.190752 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref]
May 13 23:49:37.190817 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff]
May 13 23:49:37.190880 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref]
May 13 23:49:37.190945 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref]
May 13 23:49:37.191013 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 13 23:49:37.191077 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 13 23:49:37.191143 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 13 23:49:37.191207 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 13 23:49:37.191271 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020]
May 13 23:49:37.191334 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020]
May 13 23:49:37.191398 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020]
May 13 23:49:37.191461 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020]
May 13 23:49:37.191524 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
May 13 23:49:37.191587 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
May 13 23:49:37.191650 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
May 13 23:49:37.191709 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 13 23:49:37.191764 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
May 13 23:49:37.191820 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
May 13 23:49:37.191896 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
May 13 23:49:37.191962 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
May 13 23:49:37.192032 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
May 13 23:49:37.192092 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
May 13 23:49:37.192158 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
May 13 23:49:37.192231 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
May 13 23:49:37.192241 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
May 13 23:49:37.192318 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 23:49:37.192383 kernel: acpi PNP0A08:04: _OSC: platform does not 
support [PCIeHotplug PME LTR] May 13 23:49:37.192443 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] May 13 23:49:37.192504 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 13 23:49:37.192563 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 13 23:49:37.192622 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 13 23:49:37.192632 kernel: PCI host bridge to bus 000c:00 May 13 23:49:37.192694 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 13 23:49:37.192753 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 13 23:49:37.192812 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 13 23:49:37.192887 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 13 23:49:37.192962 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 13 23:49:37.193033 kernel: pci 000c:00:01.0: enabling Extended Tags May 13 23:49:37.193097 kernel: pci 000c:00:01.0: supports D1 D2 May 13 23:49:37.193160 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 13 23:49:37.193233 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 13 23:49:37.193295 kernel: pci 000c:00:02.0: supports D1 D2 May 13 23:49:37.193357 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 13 23:49:37.193427 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 13 23:49:37.193490 kernel: pci 000c:00:03.0: supports D1 D2 May 13 23:49:37.193552 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 13 23:49:37.193622 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 13 23:49:37.193687 kernel: pci 000c:00:04.0: supports D1 D2 May 13 23:49:37.193748 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 13 23:49:37.193758 kernel: acpiphp: Slot [1-4] registered May 13 23:49:37.193766 
kernel: acpiphp: Slot [2-4] registered May 13 23:49:37.193773 kernel: acpiphp: Slot [3-2] registered May 13 23:49:37.193781 kernel: acpiphp: Slot [4-2] registered May 13 23:49:37.193836 kernel: pci_bus 000c:00: on NUMA node 0 May 13 23:49:37.193897 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 13 23:49:37.193965 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 13 23:49:37.194027 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 13 23:49:37.194090 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 13 23:49:37.194152 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.194213 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.194276 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 13 23:49:37.194337 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.194401 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.194464 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 13 23:49:37.194528 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.194590 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.194652 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 13 23:49:37.194714 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] 
May 13 23:49:37.194775 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 13 23:49:37.194839 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 13 23:49:37.194901 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 13 23:49:37.194965 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 13 23:49:37.195027 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 13 23:49:37.195089 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 13 23:49:37.195151 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195212 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195273 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195336 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195399 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195461 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195525 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195585 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195647 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195709 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195770 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195831 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.195894 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.195960 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.196022 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.196085 kernel: pci 
000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.196146 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] May 13 23:49:37.196208 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 13 23:49:37.196270 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 13 23:49:37.196332 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 13 23:49:37.196394 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 13 23:49:37.196455 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 13 23:49:37.196517 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 13 23:49:37.196578 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 13 23:49:37.196641 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 13 23:49:37.196703 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 13 23:49:37.196766 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 13 23:49:37.196829 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 13 23:49:37.196884 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 13 23:49:37.196939 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 13 23:49:37.197060 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 13 23:49:37.197119 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 13 23:49:37.197193 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 13 23:49:37.197249 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 13 23:49:37.197314 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 13 23:49:37.197369 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 13 23:49:37.197433 kernel: pci_bus 000c:04: resource 1 [mem 
0x40600000-0x407fffff] May 13 23:49:37.197490 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] May 13 23:49:37.197501 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 13 23:49:37.197568 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:49:37.197626 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 13 23:49:37.197685 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 13 23:49:37.197743 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 13 23:49:37.197801 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 13 23:49:37.197859 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 13 23:49:37.197871 kernel: PCI host bridge to bus 0002:00 May 13 23:49:37.197934 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 13 23:49:37.197991 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 13 23:49:37.198046 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 13 23:49:37.198114 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 13 23:49:37.198181 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 13 23:49:37.198245 kernel: pci 0002:00:01.0: supports D1 D2 May 13 23:49:37.198307 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 13 23:49:37.198377 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 13 23:49:37.198437 kernel: pci 0002:00:03.0: supports D1 D2 May 13 23:49:37.198498 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 13 23:49:37.198564 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 13 23:49:37.198625 kernel: pci 0002:00:05.0: supports D1 D2 May 13 23:49:37.198686 kernel: pci 0002:00:05.0: PME# supported 
from D0 D1 D3hot May 13 23:49:37.198756 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 13 23:49:37.198817 kernel: pci 0002:00:07.0: supports D1 D2 May 13 23:49:37.198877 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 13 23:49:37.198886 kernel: acpiphp: Slot [1-5] registered May 13 23:49:37.198894 kernel: acpiphp: Slot [2-5] registered May 13 23:49:37.198902 kernel: acpiphp: Slot [3-3] registered May 13 23:49:37.198909 kernel: acpiphp: Slot [4-3] registered May 13 23:49:37.198966 kernel: pci_bus 0002:00: on NUMA node 0 May 13 23:49:37.199030 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 13 23:49:37.199092 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 13 23:49:37.199153 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 13 23:49:37.199215 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 13 23:49:37.199278 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.199340 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.199406 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 13 23:49:37.199468 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.199531 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.199593 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 13 23:49:37.199655 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 13 
23:49:37.199719 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.199781 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 13 23:49:37.199842 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 13 23:49:37.199903 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 13 23:49:37.199967 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 13 23:49:37.200028 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 13 23:49:37.200090 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 13 23:49:37.200151 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 13 23:49:37.200214 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 13 23:49:37.200275 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200337 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.200401 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200464 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.200526 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200586 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.200648 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200712 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.200773 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200834 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.200895 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.200960 kernel: pci 0002:00:05.0: BAR 13: failed 
to assign [io size 0x1000] May 13 23:49:37.201021 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.201082 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.201146 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.201207 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.201271 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 13 23:49:37.201331 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 13 23:49:37.201394 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 13 23:49:37.201454 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 13 23:49:37.201517 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 13 23:49:37.201578 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 13 23:49:37.201642 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 13 23:49:37.201704 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 13 23:49:37.201765 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 13 23:49:37.201827 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 13 23:49:37.201889 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 13 23:49:37.201954 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 13 23:49:37.202014 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 13 23:49:37.202069 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 13 23:49:37.202136 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 13 23:49:37.202194 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 13 23:49:37.202267 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 13 23:49:37.202326 kernel: pci_bus 0002:02: 
resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 13 23:49:37.202390 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] May 13 23:49:37.202450 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 13 23:49:37.202515 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 13 23:49:37.202573 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 13 23:49:37.202583 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 13 23:49:37.202650 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:49:37.202712 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 13 23:49:37.202773 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 13 23:49:37.202833 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 13 23:49:37.202892 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 13 23:49:37.202955 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 13 23:49:37.202965 kernel: PCI host bridge to bus 0001:00 May 13 23:49:37.203026 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 13 23:49:37.203086 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 13 23:49:37.203141 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 13 23:49:37.203211 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 13 23:49:37.203282 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 13 23:49:37.203344 kernel: pci 0001:00:01.0: enabling Extended Tags May 13 23:49:37.203406 kernel: pci 0001:00:01.0: supports D1 D2 May 13 23:49:37.203467 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 13 23:49:37.203539 kernel: pci 0001:00:02.0: 
[1def:e102] type 01 class 0x060400 May 13 23:49:37.203600 kernel: pci 0001:00:02.0: supports D1 D2 May 13 23:49:37.203662 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 13 23:49:37.203731 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 13 23:49:37.203793 kernel: pci 0001:00:03.0: supports D1 D2 May 13 23:49:37.203854 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 13 23:49:37.203922 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 13 23:49:37.203995 kernel: pci 0001:00:04.0: supports D1 D2 May 13 23:49:37.204058 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 13 23:49:37.204067 kernel: acpiphp: Slot [1-6] registered May 13 23:49:37.204137 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 13 23:49:37.204202 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 13 23:49:37.204265 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 13 23:49:37.204329 kernel: pci 0001:01:00.0: PME# supported from D3cold May 13 23:49:37.204394 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 13 23:49:37.204466 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 13 23:49:37.204531 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 13 23:49:37.204594 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 13 23:49:37.204660 kernel: pci 0001:01:00.1: PME# supported from D3cold May 13 23:49:37.204670 kernel: acpiphp: Slot [2-6] registered May 13 23:49:37.204678 kernel: acpiphp: Slot [3-4] registered May 13 23:49:37.204685 kernel: acpiphp: Slot [4-4] registered May 13 23:49:37.204741 kernel: pci_bus 0001:00: on NUMA node 0 May 13 23:49:37.204803 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 13 23:49:37.204864 kernel: pci 
0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 13 23:49:37.204926 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.204992 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 13 23:49:37.205054 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 13 23:49:37.205115 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.205179 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.205242 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 13 23:49:37.205302 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.205364 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.205426 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380003ffffff 64bit pref] May 13 23:49:37.205492 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 13 23:49:37.205554 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 13 23:49:37.205617 kernel: pci 0001:00:02.0: BAR 15: assigned [mem 0x380004000000-0x3800041fffff 64bit pref] May 13 23:49:37.205680 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 13 23:49:37.205740 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380004200000-0x3800043fffff 64bit pref] May 13 23:49:37.205802 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 13 23:49:37.205863 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380004400000-0x3800045fffff 64bit pref] May 13 23:49:37.205925 kernel: 
pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.205990 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206051 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206115 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206176 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206239 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206300 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206362 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206422 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206484 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206545 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206608 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206670 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206732 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206794 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.206857 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.206922 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 0x380000000000-0x380001ffffff 64bit pref] May 13 23:49:37.206992 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 13 23:49:37.207055 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 13 23:49:37.207122 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 13 23:49:37.207183 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 13 23:49:37.207245 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 13 23:49:37.207308 
kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] May 13 23:49:37.207370 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 13 23:49:37.207432 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 13 23:49:37.207494 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] May 13 23:49:37.207558 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 13 23:49:37.207619 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 13 23:49:37.207682 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] May 13 23:49:37.207743 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 13 23:49:37.207806 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 13 23:49:37.207871 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] May 13 23:49:37.207931 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 13 23:49:37.207999 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 13 23:49:37.208078 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 13 23:49:37.208142 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] May 13 23:49:37.208207 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 13 23:49:37.208266 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] May 13 23:49:37.208334 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 13 23:49:37.208393 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] May 13 23:49:37.208460 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 13 23:49:37.208518 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] May 13 23:49:37.208528 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 13 23:49:37.208598 kernel: acpi 
PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:49:37.208661 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 13 23:49:37.208721 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 13 23:49:37.208780 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 13 23:49:37.208840 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 13 23:49:37.208900 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 13 23:49:37.208910 kernel: PCI host bridge to bus 0004:00 May 13 23:49:37.209211 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 13 23:49:37.209280 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 13 23:49:37.209342 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 13 23:49:37.209412 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 13 23:49:37.209482 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 13 23:49:37.209543 kernel: pci 0004:00:01.0: supports D1 D2 May 13 23:49:37.209604 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 13 23:49:37.209672 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 13 23:49:37.209737 kernel: pci 0004:00:03.0: supports D1 D2 May 13 23:49:37.209798 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 13 23:49:37.209866 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 13 23:49:37.209927 kernel: pci 0004:00:05.0: supports D1 D2 May 13 23:49:37.209996 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 13 23:49:37.210067 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 13 23:49:37.210130 kernel: pci 0004:01:00.0: enabling Extended Tags May 13 23:49:37.210195 kernel: pci 0004:01:00.0: supports D1 D2 May 13 
23:49:37.210256 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 23:49:37.210332 kernel: pci_bus 0004:02: extended config space not accessible May 13 23:49:37.210405 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 13 23:49:37.210470 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 13 23:49:37.210534 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 13 23:49:37.210597 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 13 23:49:37.210664 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 13 23:49:37.210728 kernel: pci 0004:02:00.0: supports D1 D2 May 13 23:49:37.210793 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 23:49:37.210862 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 13 23:49:37.210927 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 13 23:49:37.210995 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 13 23:49:37.211050 kernel: pci_bus 0004:00: on NUMA node 0 May 13 23:49:37.211114 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 13 23:49:37.211176 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 13 23:49:37.211236 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 23:49:37.211299 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 13 23:49:37.211360 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 13 23:49:37.211422 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 13 23:49:37.211482 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 13 
23:49:37.211545 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 13 23:49:37.211606 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref] May 13 23:49:37.211666 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff] May 13 23:49:37.211727 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref] May 13 23:49:37.211788 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff] May 13 23:49:37.211849 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref] May 13 23:49:37.211909 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212082 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212150 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212210 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212270 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212333 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212393 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212452 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212514 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212574 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212636 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212696 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212759 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff] May 13 23:49:37.212821 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000] May 13 23:49:37.212882 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000] May 13 23:49:37.212947 kernel: pci 
0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff] May 13 23:49:37.213017 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff] May 13 23:49:37.213082 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080] May 13 23:49:37.213148 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080] May 13 23:49:37.213211 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] May 13 23:49:37.213272 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] May 13 23:49:37.213333 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] May 13 23:49:37.213394 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] May 13 23:49:37.213454 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] May 13 23:49:37.213517 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit] May 13 23:49:37.213577 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] May 13 23:49:37.213639 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] May 13 23:49:37.213700 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] May 13 23:49:37.213760 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] May 13 23:49:37.213820 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] May 13 23:49:37.213880 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] May 13 23:49:37.213936 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc May 13 23:49:37.213994 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] May 13 23:49:37.214048 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] May 13 23:49:37.214113 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] May 13 23:49:37.214171 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] May 13 23:49:37.214231 kernel: pci_bus 0004:02: resource 1 [mem 
0x20000000-0x22ffffff] May 13 23:49:37.214296 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] May 13 23:49:37.214352 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] May 13 23:49:37.214418 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] May 13 23:49:37.214476 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] May 13 23:49:37.214486 kernel: iommu: Default domain type: Translated May 13 23:49:37.214493 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:49:37.214501 kernel: efivars: Registered efivars operations May 13 23:49:37.214564 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device May 13 23:49:37.214631 kernel: pci 0004:02:00.0: vgaarb: bridge control possible May 13 23:49:37.214695 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none May 13 23:49:37.214707 kernel: vgaarb: loaded May 13 23:49:37.214715 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:49:37.214723 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:49:37.214731 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:49:37.214738 kernel: pnp: PnP ACPI init May 13 23:49:37.214804 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved May 13 23:49:37.214861 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved May 13 23:49:37.214919 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved May 13 23:49:37.214978 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved May 13 23:49:37.215033 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved May 13 23:49:37.215087 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved May 13 23:49:37.215144 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could 
not be reserved May 13 23:49:37.215199 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved May 13 23:49:37.215208 kernel: pnp: PnP ACPI: found 1 devices May 13 23:49:37.215219 kernel: NET: Registered PF_INET protocol family May 13 23:49:37.215227 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:49:37.215235 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) May 13 23:49:37.215242 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:49:37.215250 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:49:37.215258 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:49:37.215266 kernel: TCP: Hash tables configured (established 524288 bind 65536) May 13 23:49:37.215273 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:49:37.215283 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:49:37.215290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:49:37.215354 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes May 13 23:49:37.215364 kernel: kvm [1]: IPA Size Limit: 48 bits May 13 23:49:37.215372 kernel: kvm [1]: GICv3: no GICV resource entry May 13 23:49:37.215380 kernel: kvm [1]: disabling GICv2 emulation May 13 23:49:37.215388 kernel: kvm [1]: GIC system register CPU interface enabled May 13 23:49:37.215395 kernel: kvm [1]: vgic interrupt IRQ9 May 13 23:49:37.215403 kernel: kvm [1]: VHE mode initialized successfully May 13 23:49:37.215413 kernel: Initialise system trusted keyrings May 13 23:49:37.215420 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 May 13 23:49:37.215428 kernel: Key type asymmetric registered May 13 23:49:37.215436 kernel: Asymmetric key parser 'x509' registered May 13 23:49:37.215443 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) May 13 23:49:37.215451 kernel: io scheduler mq-deadline registered May 13 23:49:37.215458 kernel: io scheduler kyber registered May 13 23:49:37.215466 kernel: io scheduler bfq registered May 13 23:49:37.215474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:49:37.215481 kernel: ACPI: button: Power Button [PWRB] May 13 23:49:37.215490 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). May 13 23:49:37.215498 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:49:37.215565 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 May 13 23:49:37.215624 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.215681 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.215737 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq May 13 23:49:37.215793 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq May 13 23:49:37.215852 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq May 13 23:49:37.215916 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 May 13 23:49:37.216159 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.216221 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.216277 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq May 13 23:49:37.216333 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq May 13 23:49:37.216391 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq May 13 23:49:37.216455 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 May 13 23:49:37.216512 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.216568 kernel: arm-smmu-v3 
arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.216623 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq May 13 23:49:37.216681 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq May 13 23:49:37.216737 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq May 13 23:49:37.216803 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 May 13 23:49:37.216860 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.216915 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.216975 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq May 13 23:49:37.217032 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq May 13 23:49:37.217088 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq May 13 23:49:37.217160 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 May 13 23:49:37.217220 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.217276 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.217332 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq May 13 23:49:37.217388 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq May 13 23:49:37.217444 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq May 13 23:49:37.217510 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 May 13 23:49:37.217569 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.217625 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.217682 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq May 13 23:49:37.217737 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 131072 entries for evtq May 13 23:49:37.217793 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for priq May 13 23:49:37.217856 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 May 13 23:49:37.217916 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.217975 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.218032 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq May 13 23:49:37.218088 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq May 13 23:49:37.218144 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq May 13 23:49:37.218207 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 May 13 23:49:37.218266 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) May 13 23:49:37.218323 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff) May 13 23:49:37.218379 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq May 13 23:49:37.218435 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq May 13 23:49:37.218493 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq May 13 23:49:37.218503 kernel: thunder_xcv, ver 1.0 May 13 23:49:37.218511 kernel: thunder_bgx, ver 1.0 May 13 23:49:37.218519 kernel: nicpf, ver 1.0 May 13 23:49:37.218528 kernel: nicvf, ver 1.0 May 13 23:49:37.218590 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:49:37.218648 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:49:35 UTC (1747180175) May 13 23:49:37.218658 kernel: efifb: probing for efifb May 13 23:49:37.218666 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k May 13 23:49:37.218674 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 May 13 23:49:37.218682 kernel: efifb: scrolling: redraw May 
13 23:49:37.218689 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:49:37.218699 kernel: Console: switching to colour frame buffer device 100x37 May 13 23:49:37.218707 kernel: fb0: EFI VGA frame buffer device May 13 23:49:37.218714 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 May 13 23:49:37.218722 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:49:37.218730 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:49:37.218737 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:49:37.218745 kernel: watchdog: Hard watchdog permanently disabled May 13 23:49:37.218753 kernel: NET: Registered PF_INET6 protocol family May 13 23:49:37.218760 kernel: Segment Routing with IPv6 May 13 23:49:37.218770 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:49:37.218777 kernel: NET: Registered PF_PACKET protocol family May 13 23:49:37.218785 kernel: Key type dns_resolver registered May 13 23:49:37.218792 kernel: registered taskstats version 1 May 13 23:49:37.218800 kernel: Loading compiled-in X.509 certificates May 13 23:49:37.218808 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 568a15bbab977599d8f910f319ba50c03c8a57bd' May 13 23:49:37.218815 kernel: Key type .fscrypt registered May 13 23:49:37.218823 kernel: Key type fscrypt-provisioning registered May 13 23:49:37.218830 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 23:49:37.218841 kernel: ima: Allocated hash algorithm: sha1 May 13 23:49:37.218849 kernel: ima: No architecture policies found May 13 23:49:37.218856 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:49:37.218921 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 May 13 23:49:37.218987 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 May 13 23:49:37.219049 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 May 13 23:49:37.219111 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 May 13 23:49:37.219174 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 May 13 23:49:37.219235 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 May 13 23:49:37.219300 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 May 13 23:49:37.219360 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 May 13 23:49:37.219423 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 May 13 23:49:37.219484 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 May 13 23:49:37.219546 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 May 13 23:49:37.219607 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 May 13 23:49:37.219669 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 May 13 23:49:37.219730 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 May 13 23:49:37.219793 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 May 13 23:49:37.219855 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 May 13 23:49:37.219917 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 May 13 23:49:37.219982 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 May 13 23:49:37.220044 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 May 13 23:49:37.220105 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 May 13 23:49:37.220167 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 May 13 23:49:37.220228 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 May 13 23:49:37.220292 kernel: pcieport 0005:00:07.0: 
Adding to iommu group 11 May 13 23:49:37.220352 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 May 13 23:49:37.220414 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 May 13 23:49:37.220474 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 May 13 23:49:37.220536 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 May 13 23:49:37.220596 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 May 13 23:49:37.220658 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 May 13 23:49:37.220719 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 May 13 23:49:37.220781 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 May 13 23:49:37.220845 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 May 13 23:49:37.220907 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 May 13 23:49:37.220971 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 May 13 23:49:37.221033 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 May 13 23:49:37.221094 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 May 13 23:49:37.221155 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 May 13 23:49:37.221216 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 May 13 23:49:37.221278 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 May 13 23:49:37.221341 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 May 13 23:49:37.221404 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 May 13 23:49:37.221464 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 May 13 23:49:37.221526 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 May 13 23:49:37.221587 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 May 13 23:49:37.221649 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 May 13 23:49:37.221710 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 May 13 23:49:37.221772 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 May 13 23:49:37.221835 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 
May 13 23:49:37.221895 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 May 13 23:49:37.221960 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 May 13 23:49:37.222022 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 May 13 23:49:37.222083 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 May 13 23:49:37.222144 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 May 13 23:49:37.222204 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 May 13 23:49:37.222267 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 May 13 23:49:37.222329 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 May 13 23:49:37.222391 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 May 13 23:49:37.222451 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 May 13 23:49:37.222515 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 May 13 23:49:37.222575 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 May 13 23:49:37.222638 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 May 13 23:49:37.222648 kernel: clk: Disabling unused clocks May 13 23:49:37.222656 kernel: Freeing unused kernel memory: 38464K May 13 23:49:37.222666 kernel: Run /init as init process May 13 23:49:37.222674 kernel: with arguments: May 13 23:49:37.222681 kernel: /init May 13 23:49:37.222689 kernel: with environment: May 13 23:49:37.222696 kernel: HOME=/ May 13 23:49:37.222704 kernel: TERM=linux May 13 23:49:37.222711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:49:37.222720 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:49:37.222731 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:49:37.222741 systemd[1]: Detected architecture arm64. May 13 23:49:37.222749 systemd[1]: Running in initrd. May 13 23:49:37.222757 systemd[1]: No hostname configured, using default hostname. May 13 23:49:37.222765 systemd[1]: Hostname set to . May 13 23:49:37.222773 systemd[1]: Initializing machine ID from random generator. May 13 23:49:37.222781 systemd[1]: Queued start job for default target initrd.target. May 13 23:49:37.222789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:49:37.222799 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:49:37.222808 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:49:37.222816 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:49:37.222825 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:49:37.222833 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:49:37.222842 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:49:37.222851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:49:37.222860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 13 23:49:37.222869 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:49:37.222877 systemd[1]: Reached target paths.target - Path Units. May 13 23:49:37.222886 systemd[1]: Reached target slices.target - Slice Units. May 13 23:49:37.222894 systemd[1]: Reached target swap.target - Swaps. May 13 23:49:37.222902 systemd[1]: Reached target timers.target - Timer Units. May 13 23:49:37.222910 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:49:37.222918 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:49:37.222927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:49:37.222936 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:49:37.222944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:49:37.222955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:49:37.222963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:49:37.222972 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:49:37.222980 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:49:37.222988 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:49:37.222996 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:49:37.223006 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:49:37.223014 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:49:37.223022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:49:37.223051 systemd-journald[899]: Collecting audit messages is disabled. May 13 23:49:37.223071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 23:49:37.223080 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:49:37.223088 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:49:37.223095 kernel: Bridge firewalling registered May 13 23:49:37.223104 systemd-journald[899]: Journal started May 13 23:49:37.223122 systemd-journald[899]: Runtime Journal (/run/log/journal/2161c0accae34f9897e3a15f8c4bf966) is 8M, max 4G, 3.9G free. May 13 23:49:37.188629 systemd-modules-load[903]: Inserted module 'overlay' May 13 23:49:37.264500 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:49:37.212530 systemd-modules-load[903]: Inserted module 'br_netfilter' May 13 23:49:37.270282 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:49:37.280936 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:49:37.291699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:49:37.302258 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:49:37.316087 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:49:37.324412 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:49:37.342528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:49:37.351438 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:49:37.369101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:49:37.385054 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:49:37.397084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 13 23:49:37.413603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:49:37.433710 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:49:37.456938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:49:37.470008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:49:37.482879 dracut-cmdline[944]: dracut-dracut-053 May 13 23:49:37.482879 dracut-cmdline[944]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=3174b2682629aa8ad4069807ed6fd62c10f62266ee1e150a1104f2a2fb6489b5 May 13 23:49:37.490223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:49:37.495000 systemd-resolved[946]: Positive Trust Anchors: May 13 23:49:37.495010 systemd-resolved[946]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:49:37.495040 systemd-resolved[946]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:49:37.510276 systemd-resolved[946]: Defaulting to hostname 'linux'. 
May 13 23:49:37.511836 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:49:37.540196 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:49:37.645959 kernel: SCSI subsystem initialized May 13 23:49:37.660959 kernel: Loading iSCSI transport class v2.0-870. May 13 23:49:37.678958 kernel: iscsi: registered transport (tcp) May 13 23:49:37.706092 kernel: iscsi: registered transport (qla4xxx) May 13 23:49:37.706116 kernel: QLogic iSCSI HBA Driver May 13 23:49:37.749501 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:49:37.760716 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:49:37.820453 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:49:37.820484 kernel: device-mapper: uevent: version 1.0.3 May 13 23:49:37.830055 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:49:37.894962 kernel: raid6: neonx8 gen() 15849 MB/s May 13 23:49:37.919960 kernel: raid6: neonx4 gen() 15877 MB/s May 13 23:49:37.944960 kernel: raid6: neonx2 gen() 13338 MB/s May 13 23:49:37.969960 kernel: raid6: neonx1 gen() 10463 MB/s May 13 23:49:37.994961 kernel: raid6: int64x8 gen() 6823 MB/s May 13 23:49:38.019960 kernel: raid6: int64x4 gen() 7371 MB/s May 13 23:49:38.044960 kernel: raid6: int64x2 gen() 6133 MB/s May 13 23:49:38.072878 kernel: raid6: int64x1 gen() 5077 MB/s May 13 23:49:38.072898 kernel: raid6: using algorithm neonx4 gen() 15877 MB/s May 13 23:49:38.107308 kernel: raid6: .... 
xor() 12471 MB/s, rmw enabled May 13 23:49:38.107328 kernel: raid6: using neon recovery algorithm May 13 23:49:38.130131 kernel: xor: measuring software checksum speed May 13 23:49:38.130152 kernel: 8regs : 21421 MB/sec May 13 23:49:38.138054 kernel: 32regs : 21664 MB/sec May 13 23:49:38.145786 kernel: arm64_neon : 28099 MB/sec May 13 23:49:38.153407 kernel: xor: using function: arm64_neon (28099 MB/sec) May 13 23:49:38.213958 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:49:38.223555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:49:38.232395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:49:38.261605 systemd-udevd[1142]: Using default interface naming scheme 'v255'. May 13 23:49:38.265198 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:49:38.270742 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:49:38.305937 dracut-pre-trigger[1153]: rd.md=0: removing MD RAID activation May 13 23:49:38.331569 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:49:38.341103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:49:38.459871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:49:38.469428 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:49:38.510175 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 23:49:38.510192 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 23:49:38.510202 kernel: PTP clock support registered May 13 23:49:38.511962 kernel: ACPI: bus type USB registered May 13 23:49:38.511977 kernel: usbcore: registered new interface driver usbfs May 13 23:49:38.511986 kernel: usbcore: registered new interface driver hub May 13 23:49:38.511995 kernel: usbcore: registered new device driver usb May 13 23:49:38.568837 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 13 23:49:38.568867 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 13 23:49:38.568886 kernel: igb 0003:03:00.0: Adding to iommu group 31 May 13 23:49:38.584955 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 32 May 13 23:49:38.590315 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:49:38.693862 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33 May 13 23:49:38.694079 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 13 23:49:38.694214 kernel: nvme 0005:03:00.0: Adding to iommu group 34 May 13 23:49:38.694302 kernel: nvme 0005:04:00.0: Adding to iommu group 35 May 13 23:49:38.694385 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 13 23:49:38.694463 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 13 23:49:38.694538 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 13 23:49:38.694616 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 13 23:49:38.590375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 13 23:49:38.786920 kernel: igb 0003:03:00.0: added PHC on eth0 May 13 23:49:38.787038 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 13 23:49:38.787115 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:98 May 13 23:49:38.787189 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 13 23:49:38.787261 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 13 23:49:38.787333 kernel: igb 0003:03:00.1: Adding to iommu group 36 May 13 23:49:38.716404 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:49:38.792194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:49:38.792252 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:49:38.811131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:49:38.819010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:49:38.829242 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:49:38.832184 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:49:38.841926 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:49:38.851609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:49:38.869350 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:49:38.882599 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:49:38.902161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:49:38.915635 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:49:38.926544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
May 13 23:49:38.996681 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged
May 13 23:49:39.030238 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000010
May 13 23:49:39.030404 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller
May 13 23:49:39.041017 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2
May 13 23:49:39.053936 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed
May 13 23:49:39.065800 kernel: hub 1-0:1.0: USB hub found
May 13 23:49:39.074874 kernel: hub 1-0:1.0: 4 ports detected
May 13 23:49:39.084636 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 13 23:49:39.098147 kernel: nvme nvme0: pci function 0005:03:00.0
May 13 23:49:39.118033 kernel: nvme nvme1: pci function 0005:04:00.0
May 13 23:49:39.118117 kernel: hub 2-0:1.0: USB hub found
May 13 23:49:39.126927 kernel: hub 2-0:1.0: 4 ports detected
May 13 23:49:39.136549 kernel: nvme nvme0: Shutdown timeout set to 8 seconds
May 13 23:49:39.147390 kernel: nvme nvme1: Shutdown timeout set to 8 seconds
May 13 23:49:39.164839 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:49:39.178956 kernel: nvme nvme0: 32/0/0 default/read/poll queues
May 13 23:49:39.200959 kernel: nvme nvme1: 32/0/0 default/read/poll queues
May 13 23:49:39.219743 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 23:49:39.219755 kernel: GPT:9289727 != 1875385007
May 13 23:49:39.228035 kernel: igb 0003:03:00.1: added PHC on eth1
May 13 23:49:39.228198 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 23:49:39.228208 kernel: GPT:9289727 != 1875385007
May 13 23:49:39.228216 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection
May 13 23:49:39.232653 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 23:49:39.232675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:49:39.241675 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:99
May 13 23:49:39.296842 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000
May 13 23:49:39.306666 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s)
May 13 23:49:39.314955 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by (udev-worker) (1233)
May 13 23:49:39.318562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM.
May 13 23:49:39.371261 kernel: igb 0003:03:00.0 eno1: renamed from eth0
May 13 23:49:39.371374 kernel: BTRFS: device fsid ee830c17-a93d-4109-bd12-3fec8ef6763d devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (1192)
May 13 23:49:39.372956 kernel: igb 0003:03:00.1 eno2: renamed from eth1
May 13 23:49:39.403245 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT.
May 13 23:49:39.416596 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 13 23:49:39.442886 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd
May 13 23:49:39.428084 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 13 23:49:39.448024 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A.
May 13 23:49:39.466397 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 23:49:39.491354 disk-uuid[1318]: Primary Header is updated.
May 13 23:49:39.491354 disk-uuid[1318]: Secondary Entries is updated.
May 13 23:49:39.491354 disk-uuid[1318]: Secondary Header is updated.
May 13 23:49:39.518160 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:49:39.552980 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 13 23:49:39.565955 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37
May 13 23:49:39.590041 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014
May 13 23:49:39.590126 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
May 13 23:49:39.623962 kernel: hub 1-3:1.0: USB hub found
May 13 23:49:39.633957 kernel: hub 1-3:1.0: 4 ports detected
May 13 23:49:39.732962 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd
May 13 23:49:39.762961 kernel: hub 2-3:1.0: USB hub found
May 13 23:49:39.763146 kernel: hub 2-3:1.0: 4 ports detected
May 13 23:49:39.954799 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged
May 13 23:49:40.259981 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
May 13 23:49:40.275956 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1
May 13 23:49:40.295955 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0
May 13 23:49:40.511772 disk-uuid[1319]: The operation has completed successfully.
May 13 23:49:40.517207 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 13 23:49:40.541543 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 23:49:40.541629 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 23:49:40.580036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 23:49:40.597565 sh[1480]: Success
May 13 23:49:40.620957 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 23:49:40.654576 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 23:49:40.665682 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 23:49:40.689617 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 23:49:40.698956 kernel: BTRFS info (device dm-0): first mount of filesystem ee830c17-a93d-4109-bd12-3fec8ef6763d
May 13 23:49:40.698970 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 23:49:40.698979 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 23:49:40.698989 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 23:49:40.699003 kernel: BTRFS info (device dm-0): using free space tree
May 13 23:49:40.780956 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 13 23:49:40.782087 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 23:49:40.792286 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 23:49:40.793381 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 23:49:40.807694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 23:49:40.920300 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:49:40.920326 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:49:40.920345 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:49:40.920363 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:49:40.920381 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 13 23:49:40.920400 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:49:40.921860 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 23:49:40.932616 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:49:40.940192 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 23:49:40.958457 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:49:41.000536 systemd-networkd[1669]: lo: Link UP
May 13 23:49:41.000542 systemd-networkd[1669]: lo: Gained carrier
May 13 23:49:41.004412 systemd-networkd[1669]: Enumeration completed
May 13 23:49:41.004515 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:49:41.005695 systemd-networkd[1669]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:49:41.011467 systemd[1]: Reached target network.target - Network.
May 13 23:49:41.048348 ignition[1668]: Ignition 2.20.0
May 13 23:49:41.048355 ignition[1668]: Stage: fetch-offline
May 13 23:49:41.048389 ignition[1668]: no configs at "/usr/lib/ignition/base.d"
May 13 23:49:41.057312 systemd-networkd[1669]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:49:41.048397 ignition[1668]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:41.061473 unknown[1668]: fetched base config from "system"
May 13 23:49:41.048560 ignition[1668]: parsed url from cmdline: ""
May 13 23:49:41.061480 unknown[1668]: fetched user config from "system"
May 13 23:49:41.048563 ignition[1668]: no config URL provided
May 13 23:49:41.064792 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:49:41.048567 ignition[1668]: reading system config file "/usr/lib/ignition/user.ign"
May 13 23:49:41.079662 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 23:49:41.048616 ignition[1668]: parsing config with SHA512: c73c9db048c6f2d64f1f4327d69e4110934b90882b8b2f0a0be8288aba6f1145a8e8cd07e298d6661a8d4688c8f509641b31b11e0d3913b491228719881171aa
May 13 23:49:41.080482 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 23:49:41.062061 ignition[1668]: fetch-offline: fetch-offline passed
May 13 23:49:41.110396 systemd-networkd[1669]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:49:41.062066 ignition[1668]: POST message to Packet Timeline
May 13 23:49:41.062070 ignition[1668]: POST Status error: resource requires networking
May 13 23:49:41.062139 ignition[1668]: Ignition finished successfully
May 13 23:49:41.114011 ignition[1711]: Ignition 2.20.0
May 13 23:49:41.114044 ignition[1711]: Stage: kargs
May 13 23:49:41.114280 ignition[1711]: no configs at "/usr/lib/ignition/base.d"
May 13 23:49:41.114290 ignition[1711]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:41.115724 ignition[1711]: kargs: kargs passed
May 13 23:49:41.115729 ignition[1711]: POST message to Packet Timeline
May 13 23:49:41.115943 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #1
May 13 23:49:41.118549 ignition[1711]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51615->[::1]:53: read: connection refused
May 13 23:49:41.318876 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #2
May 13 23:49:41.319705 ignition[1711]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55356->[::1]:53: read: connection refused
May 13 23:49:41.675963 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 13 23:49:41.679140 systemd-networkd[1669]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 23:49:41.719995 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #3
May 13 23:49:41.720827 ignition[1711]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40369->[::1]:53: read: connection refused
May 13 23:49:42.280958 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 13 23:49:42.284093 systemd-networkd[1669]: eno1: Link UP
May 13 23:49:42.284224 systemd-networkd[1669]: eno2: Link UP
May 13 23:49:42.284338 systemd-networkd[1669]: enP1p1s0f0np0: Link UP
May 13 23:49:42.284474 systemd-networkd[1669]: enP1p1s0f0np0: Gained carrier
May 13 23:49:42.295103 systemd-networkd[1669]: enP1p1s0f1np1: Link UP
May 13 23:49:42.315980 systemd-networkd[1669]: enP1p1s0f0np0: DHCPv4 address 147.28.150.5/31, gateway 147.28.150.4 acquired from 147.28.144.140
May 13 23:49:42.521629 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #4
May 13 23:49:42.522367 ignition[1711]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54131->[::1]:53: read: connection refused
May 13 23:49:42.680143 systemd-networkd[1669]: enP1p1s0f1np1: Gained carrier
May 13 23:49:43.656023 systemd-networkd[1669]: enP1p1s0f0np0: Gained IPv6LL
May 13 23:49:43.848121 systemd-networkd[1669]: enP1p1s0f1np1: Gained IPv6LL
May 13 23:49:44.123150 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #5
May 13 23:49:44.123639 ignition[1711]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50234->[::1]:53: read: connection refused
May 13 23:49:47.327036 ignition[1711]: GET https://metadata.packet.net/metadata: attempt #6
May 13 23:49:47.858748 ignition[1711]: GET result: OK
May 13 23:49:48.662719 ignition[1711]: Ignition finished successfully
May 13 23:49:48.666241 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 23:49:48.669438 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 23:49:48.698681 ignition[1734]: Ignition 2.20.0
May 13 23:49:48.698695 ignition[1734]: Stage: disks
May 13 23:49:48.698854 ignition[1734]: no configs at "/usr/lib/ignition/base.d"
May 13 23:49:48.698883 ignition[1734]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:48.700373 ignition[1734]: disks: disks passed
May 13 23:49:48.700378 ignition[1734]: POST message to Packet Timeline
May 13 23:49:48.700395 ignition[1734]: GET https://metadata.packet.net/metadata: attempt #1
May 13 23:49:49.291159 ignition[1734]: GET result: OK
May 13 23:49:49.587113 ignition[1734]: Ignition finished successfully
May 13 23:49:49.590067 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 23:49:49.595568 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 23:49:49.603150 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 23:49:49.611065 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:49:49.619566 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:49:49.628472 systemd[1]: Reached target basic.target - Basic System.
May 13 23:49:49.638717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 23:49:49.665657 systemd-fsck[1754]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 23:49:49.668970 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 23:49:49.676851 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 23:49:49.755957 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9f8d74e6-c079-469f-823a-18a62077a2c7 r/w with ordered data mode. Quota mode: none.
May 13 23:49:49.756373 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 23:49:49.766753 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 23:49:49.777868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:49:49.793413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 23:49:49.801956 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/nvme0n1p6 scanned by mount (1765)
May 13 23:49:49.801977 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:49:49.801987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:49:49.801997 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:49:49.803956 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:49:49.803969 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 13 23:49:49.886820 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 13 23:49:49.902282 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
May 13 23:49:49.909156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 23:49:49.909200 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:49:49.922093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:49:49.936376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 23:49:49.949455 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 23:49:49.968585 coreos-metadata[1784]: May 13 23:49:49.953 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 13 23:49:49.979477 coreos-metadata[1785]: May 13 23:49:49.953 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 13 23:49:49.998483 initrd-setup-root[1809]: cut: /sysroot/etc/passwd: No such file or directory
May 13 23:49:50.004639 initrd-setup-root[1816]: cut: /sysroot/etc/group: No such file or directory
May 13 23:49:50.011089 initrd-setup-root[1824]: cut: /sysroot/etc/shadow: No such file or directory
May 13 23:49:50.017377 initrd-setup-root[1831]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 23:49:50.086272 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 23:49:50.097766 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 23:49:50.110477 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 23:49:50.118956 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:49:50.142310 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 23:49:50.164150 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 23:49:50.176023 ignition[1909]: INFO : Ignition 2.20.0
May 13 23:49:50.176023 ignition[1909]: INFO : Stage: mount
May 13 23:49:50.187729 ignition[1909]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:49:50.187729 ignition[1909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:50.187729 ignition[1909]: INFO : mount: mount passed
May 13 23:49:50.187729 ignition[1909]: INFO : POST message to Packet Timeline
May 13 23:49:50.187729 ignition[1909]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 13 23:49:50.447385 coreos-metadata[1785]: May 13 23:49:50.447 INFO Fetch successful
May 13 23:49:50.452288 coreos-metadata[1784]: May 13 23:49:50.451 INFO Fetch successful
May 13 23:49:50.493162 coreos-metadata[1784]: May 13 23:49:50.493 INFO wrote hostname ci-4284.0.0-n-52b3733d51 to /sysroot/etc/hostname
May 13 23:49:50.493960 systemd[1]: flatcar-static-network.service: Deactivated successfully.
May 13 23:49:50.494104 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
May 13 23:49:50.507443 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:49:50.724068 ignition[1909]: INFO : GET result: OK
May 13 23:49:51.014533 ignition[1909]: INFO : Ignition finished successfully
May 13 23:49:51.017042 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 23:49:51.025352 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 23:49:51.046109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 23:49:51.089288 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/nvme0n1p6 scanned by mount (1931)
May 13 23:49:51.089322 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e7b30525-8b14-4004-ad68-68a99b3959db
May 13 23:49:51.103486 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 13 23:49:51.116345 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 13 23:49:51.138980 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 13 23:49:51.139001 kernel: BTRFS info (device nvme0n1p6): auto enabling async discard
May 13 23:49:51.147076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 23:49:51.179023 ignition[1949]: INFO : Ignition 2.20.0
May 13 23:49:51.179023 ignition[1949]: INFO : Stage: files
May 13 23:49:51.188290 ignition[1949]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:49:51.188290 ignition[1949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:51.188290 ignition[1949]: DEBUG : files: compiled without relabeling support, skipping
May 13 23:49:51.188290 ignition[1949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 23:49:51.188290 ignition[1949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 23:49:51.188290 ignition[1949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 23:49:51.188290 ignition[1949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 23:49:51.188290 ignition[1949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 23:49:51.188290 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:49:51.188290 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 23:49:51.184591 unknown[1949]: wrote ssh authorized keys file for user: core
May 13 23:49:51.362194 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 23:49:51.422877 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:49:51.433364 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 23:49:51.613589 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 23:49:51.837855 ignition[1949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 13 23:49:51.850350 ignition[1949]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:49:51.850350 ignition[1949]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 23:49:51.850350 ignition[1949]: INFO : files: files passed
May 13 23:49:51.850350 ignition[1949]: INFO : POST message to Packet Timeline
May 13 23:49:51.850350 ignition[1949]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 13 23:49:52.333566 ignition[1949]: INFO : GET result: OK
May 13 23:49:52.600995 ignition[1949]: INFO : Ignition finished successfully
May 13 23:49:52.604031 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 23:49:52.614312 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 23:49:52.630433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 23:49:52.648700 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 23:49:52.648881 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 23:49:52.666598 initrd-setup-root-after-ignition[1991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:49:52.666598 initrd-setup-root-after-ignition[1991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:49:52.661178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:49:52.717369 initrd-setup-root-after-ignition[1995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 23:49:52.673763 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 23:49:52.690213 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 23:49:52.757693 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 23:49:52.757839 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 23:49:52.769175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 23:49:52.784984 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 23:49:52.795918 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 23:49:52.796759 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 23:49:52.829214 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:49:52.841694 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 23:49:52.865038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 23:49:52.876690 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:49:52.882497 systemd[1]: Stopped target timers.target - Timer Units.
May 13 23:49:52.893863 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 23:49:52.893966 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 23:49:52.905287 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 23:49:52.916581 systemd[1]: Stopped target basic.target - Basic System.
May 13 23:49:52.927935 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 23:49:52.939135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 23:49:52.950196 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 23:49:52.961245 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 23:49:52.972485 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 23:49:52.983647 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 23:49:52.994654 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 23:49:53.011122 systemd[1]: Stopped target swap.target - Swaps.
May 13 23:49:53.022162 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 23:49:53.022253 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 23:49:53.033362 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 23:49:53.044272 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:49:53.055219 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 23:49:53.059987 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:49:53.066188 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 23:49:53.066279 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 23:49:53.077354 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 23:49:53.077443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 23:49:53.088530 systemd[1]: Stopped target paths.target - Path Units.
May 13 23:49:53.099548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 23:49:53.099649 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:49:53.116449 systemd[1]: Stopped target slices.target - Slice Units.
May 13 23:49:53.127789 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 23:49:53.139144 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 23:49:53.139233 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 23:49:53.243781 ignition[2017]: INFO : Ignition 2.20.0
May 13 23:49:53.243781 ignition[2017]: INFO : Stage: umount
May 13 23:49:53.243781 ignition[2017]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 23:49:53.243781 ignition[2017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 13 23:49:53.243781 ignition[2017]: INFO : umount: umount passed
May 13 23:49:53.243781 ignition[2017]: INFO : POST message to Packet Timeline
May 13 23:49:53.243781 ignition[2017]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 13 23:49:53.150579 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 23:49:53.150649 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 23:49:53.162104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 23:49:53.162192 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 23:49:53.173765 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 23:49:53.173846 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 23:49:53.185490 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 13 23:49:53.185571 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 13 23:49:53.203314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 23:49:53.214101 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 23:49:53.214201 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:49:53.226683 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 23:49:53.237854 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 23:49:53.237995 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:49:53.249896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 23:49:53.249986 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 23:49:53.263475 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 23:49:53.265398 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 23:49:53.265474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 23:49:53.309510 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 23:49:53.309687 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 23:49:53.754118 ignition[2017]: INFO : GET result: OK
May 13 23:49:54.102276 ignition[2017]: INFO : Ignition finished successfully
May 13 23:49:54.105093 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 23:49:54.105355 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 23:49:54.112186 systemd[1]: Stopped target network.target - Network.
May 13 23:49:54.121324 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 23:49:54.121395 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 23:49:54.131095 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 23:49:54.131156 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 23:49:54.140550 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 23:49:54.140594 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 23:49:54.150097 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 23:49:54.150129 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 23:49:54.159862 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 23:49:54.159909 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 23:49:54.169818 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 23:49:54.179350 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 23:49:54.189581 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 23:49:54.189666 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 23:49:54.203252 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 23:49:54.205927 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 23:49:54.206078 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 23:49:54.215296 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 23:49:54.216236 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 23:49:54.216462 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:49:54.225643 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 23:49:54.233663 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 23:49:54.233710 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 23:49:54.243701 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 23:49:54.243738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 23:49:54.253811 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 23:49:54.253858 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 23:49:54.263783 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 23:49:54.263815 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:49:54.274362 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:49:54.291205 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 23:49:54.291304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 23:49:54.293201 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 23:49:54.293514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:49:54.302750 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 23:49:54.302960 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 23:49:54.312389 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 23:49:54.312450 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:49:54.328572 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 23:49:54.328610 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 23:49:54.339889 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 23:49:54.339944 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 23:49:54.350776 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 23:49:54.350826 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 23:49:54.362713 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 23:49:54.373059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 23:49:54.373104 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:49:54.390490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 23:49:54.390545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:49:54.403733 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 23:49:54.403803 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 23:49:54.404143 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 23:49:54.404215 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 23:49:54.964427 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 23:49:54.964566 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 23:49:54.976180 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 23:49:54.987499 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 23:49:55.008274 systemd[1]: Switching root.
May 13 23:49:55.064094 systemd-journald[899]: Journal stopped
May 13 23:49:57.131141 systemd-journald[899]: Received SIGTERM from PID 1 (systemd).
May 13 23:49:57.131167 kernel: SELinux: policy capability network_peer_controls=1
May 13 23:49:57.131178 kernel: SELinux: policy capability open_perms=1
May 13 23:49:57.131186 kernel: SELinux: policy capability extended_socket_class=1
May 13 23:49:57.131193 kernel: SELinux: policy capability always_check_network=0
May 13 23:49:57.131200 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 23:49:57.131209 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 23:49:57.131218 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 23:49:57.131226 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 23:49:57.131234 kernel: audit: type=1403 audit(1747180195.234:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 23:49:57.131242 systemd[1]: Successfully loaded SELinux policy in 114.847ms.
May 13 23:49:57.131252 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.928ms.
May 13 23:49:57.131261 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 23:49:57.131270 systemd[1]: Detected architecture arm64.
May 13 23:49:57.131280 systemd[1]: Detected first boot.
May 13 23:49:57.131289 systemd[1]: Hostname set to .
May 13 23:49:57.131298 systemd[1]: Initializing machine ID from random generator.
May 13 23:49:57.131307 zram_generator::config[2091]: No configuration found.
May 13 23:49:57.131318 systemd[1]: Populated /etc with preset unit settings.
May 13 23:49:57.131327 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 23:49:57.131336 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 23:49:57.131347 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 23:49:57.131356 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 23:49:57.131365 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 23:49:57.131374 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 23:49:57.131384 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 23:49:57.131393 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 23:49:57.131402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 23:49:57.131411 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 23:49:57.131420 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 23:49:57.131429 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 23:49:57.131438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 23:49:57.131447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:49:57.131457 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 23:49:57.131466 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 23:49:57.131476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 23:49:57.131485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 23:49:57.131493 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 23:49:57.131502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 23:49:57.131511 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 23:49:57.131522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 23:49:57.131531 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 23:49:57.131542 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 23:49:57.131551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 23:49:57.131560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 23:49:57.131569 systemd[1]: Reached target slices.target - Slice Units.
May 13 23:49:57.131578 systemd[1]: Reached target swap.target - Swaps.
May 13 23:49:57.131587 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 23:49:57.131596 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 23:49:57.131607 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 23:49:57.131616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 23:49:57.131625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 23:49:57.131635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 23:49:57.131644 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 23:49:57.131654 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 23:49:57.131663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 23:49:57.131672 systemd[1]: Mounting media.mount - External Media Directory...
May 13 23:49:57.131682 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 23:49:57.131691 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 23:49:57.131700 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 23:49:57.131710 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 23:49:57.131720 systemd[1]: Reached target machines.target - Containers.
May 13 23:49:57.131730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 23:49:57.131741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:49:57.131750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 23:49:57.131760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 23:49:57.131769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:49:57.131778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:49:57.131787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:49:57.131796 kernel: ACPI: bus type drm_connector registered
May 13 23:49:57.131805 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 23:49:57.131815 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:49:57.131824 kernel: fuse: init (API version 7.39)
May 13 23:49:57.131832 kernel: loop: module loaded
May 13 23:49:57.131841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 23:49:57.131851 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 23:49:57.131860 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 23:49:57.131869 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 23:49:57.131878 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 23:49:57.131889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:49:57.131899 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 23:49:57.131908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 23:49:57.131917 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 23:49:57.131943 systemd-journald[2201]: Collecting audit messages is disabled.
May 13 23:49:57.132038 systemd-journald[2201]: Journal started
May 13 23:49:57.132058 systemd-journald[2201]: Runtime Journal (/run/log/journal/583fc2031b1a4f9996d08883ed4ed7c8) is 8M, max 4G, 3.9G free.
May 13 23:49:55.782297 systemd[1]: Queued start job for default target multi-user.target.
May 13 23:49:55.795176 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 13 23:49:55.795534 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 23:49:55.795823 systemd[1]: systemd-journald.service: Consumed 3.329s CPU time.
May 13 23:49:57.162965 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 23:49:57.189955 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 23:49:57.216962 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 23:49:57.239856 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 23:49:57.239871 systemd[1]: Stopped verity-setup.service.
May 13 23:49:57.264973 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 23:49:57.270804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 23:49:57.276495 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 23:49:57.281876 systemd[1]: Mounted media.mount - External Media Directory.
May 13 23:49:57.287254 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 23:49:57.292670 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 23:49:57.298005 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 23:49:57.303500 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 23:49:57.308973 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 23:49:57.314505 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 23:49:57.314666 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 23:49:57.320062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:49:57.321022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:49:57.326501 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:49:57.326667 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:49:57.331893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:49:57.333061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:49:57.338394 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 23:49:57.338556 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 23:49:57.343687 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:49:57.343838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:49:57.348976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 23:49:57.355047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 23:49:57.360205 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 23:49:57.366973 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 23:49:57.371989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 23:49:57.388220 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 23:49:57.394386 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 23:49:57.409695 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 23:49:57.414503 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 23:49:57.414531 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 23:49:57.419973 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 23:49:57.425583 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 23:49:57.431306 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 23:49:57.436014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:49:57.437371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 23:49:57.442894 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 23:49:57.447497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:49:57.448624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 23:49:57.453268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:49:57.453672 systemd-journald[2201]: Time spent on flushing to /var/log/journal/583fc2031b1a4f9996d08883ed4ed7c8 is 32.240ms for 2357 entries.
May 13 23:49:57.453672 systemd-journald[2201]: System Journal (/var/log/journal/583fc2031b1a4f9996d08883ed4ed7c8) is 8M, max 195.6M, 187.6M free.
May 13 23:49:57.505398 systemd-journald[2201]: Received client request to flush runtime journal.
May 13 23:49:57.505445 kernel: loop0: detected capacity change from 0 to 126448
May 13 23:49:57.505472 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 23:49:57.454381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 23:49:57.471591 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 23:49:57.477215 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 23:49:57.482844 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 23:49:57.499423 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 23:49:57.513128 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 23:49:57.518979 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 23:49:57.524980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 23:49:57.529603 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 23:49:57.535459 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 23:49:57.541753 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 23:49:57.552826 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 23:49:57.560961 kernel: loop1: detected capacity change from 0 to 8
May 13 23:49:57.566771 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 23:49:57.586375 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 23:49:57.591650 udevadm[2260]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 23:49:57.593892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 23:49:57.594477 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 23:49:57.608020 systemd-tmpfiles[2282]: ACLs are not supported, ignoring.
May 13 23:49:57.608033 systemd-tmpfiles[2282]: ACLs are not supported, ignoring.
May 13 23:49:57.612726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 23:49:57.618957 kernel: loop2: detected capacity change from 0 to 103832
May 13 23:49:57.662966 kernel: loop3: detected capacity change from 0 to 194096
May 13 23:49:57.705065 ldconfig[2242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 23:49:57.707174 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 23:49:57.716957 kernel: loop4: detected capacity change from 0 to 126448
May 13 23:49:57.732965 kernel: loop5: detected capacity change from 0 to 8
May 13 23:49:57.744963 kernel: loop6: detected capacity change from 0 to 103832
May 13 23:49:57.760964 kernel: loop7: detected capacity change from 0 to 194096
May 13 23:49:57.767502 (sd-merge)[2300]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
May 13 23:49:57.767958 (sd-merge)[2300]: Merged extensions into '/usr'.
May 13 23:49:57.770803 systemd[1]: Reload requested from client PID 2255 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 23:49:57.770814 systemd[1]: Reloading...
May 13 23:49:57.815961 zram_generator::config[2332]: No configuration found.
May 13 23:49:57.908546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:49:57.969325 systemd[1]: Reloading finished in 198 ms.
May 13 23:49:57.989359 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 23:49:57.994276 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 23:49:58.014319 systemd[1]: Starting ensure-sysext.service...
May 13 23:49:58.020146 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 23:49:58.026711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 23:49:58.037510 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit ensure-sysext.service)...
May 13 23:49:58.037521 systemd[1]: Reloading...
May 13 23:49:58.039316 systemd-tmpfiles[2384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 23:49:58.039507 systemd-tmpfiles[2384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 23:49:58.040116 systemd-tmpfiles[2384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 23:49:58.040314 systemd-tmpfiles[2384]: ACLs are not supported, ignoring.
May 13 23:49:58.040357 systemd-tmpfiles[2384]: ACLs are not supported, ignoring.
May 13 23:49:58.044553 systemd-tmpfiles[2384]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:49:58.044560 systemd-tmpfiles[2384]: Skipping /boot
May 13 23:49:58.053105 systemd-tmpfiles[2384]: Detected autofs mount point /boot during canonicalization of boot.
May 13 23:49:58.053113 systemd-tmpfiles[2384]: Skipping /boot
May 13 23:49:58.053820 systemd-udevd[2385]: Using default interface naming scheme 'v255'.
May 13 23:49:58.088963 zram_generator::config[2429]: No configuration found.
May 13 23:49:58.113962 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2461)
May 13 23:49:58.137962 kernel: IPMI message handler: version 39.2
May 13 23:49:58.147960 kernel: ipmi device interface
May 13 23:49:58.160961 kernel: ipmi_si: IPMI System Interface driver
May 13 23:49:58.161051 kernel: ipmi_ssif: IPMI SSIF Interface driver
May 13 23:49:58.161103 kernel: ipmi_si: Unable to find any System Interface(s)
May 13 23:49:58.198375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:49:58.277829 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 23:49:58.278153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
May 13 23:49:58.282928 systemd[1]: Reloading finished in 245 ms.
May 13 23:49:58.301340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 23:49:58.321225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 23:49:58.343896 systemd[1]: Finished ensure-sysext.service.
May 13 23:49:58.348798 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 23:49:58.371925 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:49:58.389795 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 23:49:58.394756 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 23:49:58.395801 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 23:49:58.401808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 23:49:58.407746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 23:49:58.413521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 23:49:58.415270 lvm[2597]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:49:58.419321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 23:49:58.424429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 23:49:58.425380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 23:49:58.428622 augenrules[2625]: No rules
May 13 23:49:58.430282 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 23:49:58.431431 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 23:49:58.437987 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 23:49:58.444579 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 23:49:58.450692 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 23:49:58.456199 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 23:49:58.461833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 23:49:58.467326 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:49:58.467535 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:49:58.472480 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 23:49:58.478615 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 23:49:58.483685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 23:49:58.483867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 23:49:58.488609 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 23:49:58.488751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 23:49:58.494131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 23:49:58.494847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 23:49:58.499734 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 23:49:58.499886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 23:49:58.505178 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 23:49:58.509984 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 23:49:58.514649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 23:49:58.527687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 23:49:58.533277 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 23:49:58.537777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 23:49:58.537836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 23:49:58.548664 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 23:49:58.552181 lvm[2655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 23:49:58.555160 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 23:49:58.559872 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 23:49:58.560929 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 23:49:58.565786 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 23:49:58.589680 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 23:49:58.595014 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 23:49:58.649462 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 23:49:58.653853 systemd-resolved[2633]: Positive Trust Anchors:
May 13 23:49:58.653865 systemd-resolved[2633]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 23:49:58.653897 systemd-resolved[2633]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 23:49:58.654315 systemd[1]: Reached target time-set.target - System Time Set.
May 13 23:49:58.657522 systemd-resolved[2633]: Using system hostname 'ci-4284.0.0-n-52b3733d51'.
May 13 23:49:58.659142 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 23:49:58.663698 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 23:49:58.663840 systemd-networkd[2632]: lo: Link UP
May 13 23:49:58.663845 systemd-networkd[2632]: lo: Gained carrier
May 13 23:49:58.667764 systemd-networkd[2632]: bond0: netdev ready
May 13 23:49:58.668107 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 23:49:58.672420 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 23:49:58.676660 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 23:49:58.676887 systemd-networkd[2632]: Enumeration completed
May 13 23:49:58.681066 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 23:49:58.685443 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 23:49:58.685734 systemd-networkd[2632]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:ce:00.network.
May 13 23:49:58.689779 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 23:49:58.694156 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 23:49:58.694178 systemd[1]: Reached target paths.target - Path Units.
May 13 23:49:58.699144 systemd[1]: Reached target timers.target - Timer Units.
May 13 23:49:58.704285 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 23:49:58.709995 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 23:49:58.716339 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 23:49:58.723265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 23:49:58.728220 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 23:49:58.733240 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 23:49:58.737917 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 23:49:58.743160 systemd[1]: Reached target network.target - Network.
May 13 23:49:58.747520 systemd[1]: Reached target sockets.target - Socket Units.
May 13 23:49:58.751838 systemd[1]: Reached target basic.target - Basic System.
May 13 23:49:58.756080 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 23:49:58.756103 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 23:49:58.757169 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 23:49:58.777645 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 23:49:58.783199 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 23:49:58.788757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 23:49:58.794291 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 23:49:58.798442 jq[2691]: false
May 13 23:49:58.798812 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 23:49:58.799195 coreos-metadata[2687]: May 13 23:49:58.799 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 13 23:49:58.799937 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 23:49:58.801967 coreos-metadata[2687]: May 13 23:49:58.801 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 13 23:49:58.803692 dbus-daemon[2688]: [system] SELinux support is enabled
May 13 23:49:58.805471 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 23:49:58.811029 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 23:49:58.814040 extend-filesystems[2692]: Found loop4
May 13 23:49:58.820231 extend-filesystems[2692]: Found loop5
May 13 23:49:58.820231 extend-filesystems[2692]: Found loop6
May 13 23:49:58.820231 extend-filesystems[2692]: Found loop7
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p1
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p2
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p3
May 13 23:49:58.820231 extend-filesystems[2692]: Found usr
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p4
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p6
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p7
May 13 23:49:58.820231 extend-filesystems[2692]: Found nvme0n1p9
May 13 23:49:58.820231 extend-filesystems[2692]: Checking size of /dev/nvme0n1p9
May 13 23:49:58.958849 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
May 13 23:49:58.958875 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2437)
May 13 23:49:58.816720 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 23:49:58.958963 extend-filesystems[2692]: Resized partition /dev/nvme0n1p9
May 13 23:49:58.828705 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 23:49:58.963855 extend-filesystems[2711]: resize2fs 1.47.2 (1-Jan-2025)
May 13 23:49:58.835034 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 23:49:58.964650 dbus-daemon[2688]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 13 23:49:58.875187 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 23:49:58.884402 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 23:49:58.884968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 23:49:58.980096 update_engine[2721]: I20250513 23:49:58.931154 2721 main.cc:92] Flatcar Update Engine starting
May 13 23:49:58.980096 update_engine[2721]: I20250513 23:49:58.933897 2721 update_check_scheduler.cc:74] Next update check in 5m35s
May 13 23:49:58.885573 systemd[1]: Starting update-engine.service - Update Engine...
May 13 23:49:58.980400 jq[2722]: true
May 13 23:49:58.893769 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 23:49:58.902510 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 23:49:58.980685 tar[2724]: linux-arm64/helm
May 13 23:49:58.915816 systemd-logind[2710]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 23:49:58.916075 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 23:49:58.981113 jq[2726]: true
May 13 23:49:58.916100 systemd-logind[2710]: New seat seat0.
May 13 23:49:58.916298 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 23:49:58.916702 systemd[1]: motdgen.service: Deactivated successfully.
May 13 23:49:58.916875 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 23:49:58.937195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 23:49:58.937462 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 23:49:58.945653 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 23:49:58.975398 (ntainerd)[2727]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 23:49:58.977504 systemd[1]: Started update-engine.service - Update Engine.
May 13 23:49:58.989315 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 23:49:58.989469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 23:49:58.994461 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 23:49:58.994559 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 23:49:59.000903 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 23:49:59.017958 bash[2753]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:49:59.020987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 23:49:59.027827 systemd[1]: Starting sshkeys.service...
May 13 23:49:59.042282 locksmithd[2749]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 23:49:59.055159 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 23:49:59.061273 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 23:49:59.093891 coreos-metadata[2764]: May 13 23:49:59.093 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 13 23:49:59.095078 coreos-metadata[2764]: May 13 23:49:59.095 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 13 23:49:59.132813 containerd[2727]: time="2025-05-13T23:49:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 23:49:59.133895 containerd[2727]: time="2025-05-13T23:49:59.133866240Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 13 23:49:59.142267 containerd[2727]: time="2025-05-13T23:49:59.142238160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.36µs"
May 13 23:49:59.142287 containerd[2727]: time="2025-05-13T23:49:59.142268600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 23:49:59.142316 containerd[2727]: time="2025-05-13T23:49:59.142288160Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 23:49:59.142457 containerd[2727]: time="2025-05-13T23:49:59.142443760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 23:49:59.142477 containerd[2727]: time="2025-05-13T23:49:59.142462360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 23:49:59.142501 containerd[2727]: time="2025-05-13T23:49:59.142487040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:49:59.142545 containerd[2727]: time="2025-05-13T23:49:59.142533360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 23:49:59.142564 containerd[2727]: time="2025-05-13T23:49:59.142545520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:49:59.142825 containerd[2727]: time="2025-05-13T23:49:59.142809920Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 23:49:59.142845 containerd[2727]: time="2025-05-13T23:49:59.142826040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:49:59.142845 containerd[2727]: time="2025-05-13T23:49:59.142837280Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 23:49:59.142882 containerd[2727]: time="2025-05-13T23:49:59.142845320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 23:49:59.142927 containerd[2727]: time="2025-05-13T23:49:59.142916280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 23:49:59.143047 sshd_keygen[2715]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 23:49:59.143186 containerd[2727]: time="2025-05-13T23:49:59.143140520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:49:59.143186 containerd[2727]: time="2025-05-13T23:49:59.143172480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 23:49:59.143186 containerd[2727]: time="2025-05-13T23:49:59.143183520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 23:49:59.143239 containerd[2727]: time="2025-05-13T23:49:59.143208680Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 23:49:59.143420 containerd[2727]: time="2025-05-13T23:49:59.143408200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 23:49:59.143486 containerd[2727]: time="2025-05-13T23:49:59.143475600Z" level=info msg="metadata content store policy set" policy=shared
May 13 23:49:59.150175 containerd[2727]: time="2025-05-13T23:49:59.150155640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 23:49:59.150202 containerd[2727]: time="2025-05-13T23:49:59.150193680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 23:49:59.150220 containerd[2727]: time="2025-05-13T23:49:59.150208040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 23:49:59.150237 containerd[2727]: time="2025-05-13T23:49:59.150220160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 23:49:59.150237 containerd[2727]: time="2025-05-13T23:49:59.150231880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 23:49:59.150277 containerd[2727]: time="2025-05-13T23:49:59.150242560Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 23:49:59.150277 containerd[2727]: time="2025-05-13T23:49:59.150257680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 23:49:59.150277 containerd[2727]: time="2025-05-13T23:49:59.150270440Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 23:49:59.150330 containerd[2727]: time="2025-05-13T23:49:59.150281560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 23:49:59.150330 containerd[2727]: time="2025-05-13T23:49:59.150293280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 23:49:59.150330 containerd[2727]: time="2025-05-13T23:49:59.150303880Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 23:49:59.150330 containerd[2727]: time="2025-05-13T23:49:59.150315600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 23:49:59.150442 containerd[2727]: time="2025-05-13T23:49:59.150428480Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 23:49:59.150461 containerd[2727]: time="2025-05-13T23:49:59.150451920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 23:49:59.150478 containerd[2727]: time="2025-05-13T23:49:59.150466240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 23:49:59.150495 containerd[2727]: time="2025-05-13T23:49:59.150477280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 23:49:59.150495 containerd[2727]: time="2025-05-13T23:49:59.150487920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 23:49:59.150526 containerd[2727]: time="2025-05-13T23:49:59.150498040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 23:49:59.150526 containerd[2727]: time="2025-05-13T23:49:59.150509200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 23:49:59.150564 containerd[2727]: time="2025-05-13T23:49:59.150524800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 23:49:59.150564 containerd[2727]: time="2025-05-13T23:49:59.150537000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 23:49:59.150564 containerd[2727]: time="2025-05-13T23:49:59.150549320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 23:49:59.150564 containerd[2727]: time="2025-05-13T23:49:59.150562360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 23:49:59.150826 containerd[2727]: time="2025-05-13T23:49:59.150815880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 23:49:59.150845 containerd[2727]: time="2025-05-13T23:49:59.150830560Z" level=info msg="Start snapshots syncer"
May 13 23:49:59.150863 containerd[2727]: time="2025-05-13T23:49:59.150854800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 23:49:59.151096 containerd[2727]: time="2025-05-13T23:49:59.151067280Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 23:49:59.151174 containerd[2727]: time="2025-05-13T23:49:59.151114280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 23:49:59.151200 containerd[2727]: time="2025-05-13T23:49:59.151173320Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 23:49:59.151291 containerd[2727]: time="2025-05-13T23:49:59.151277560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 23:49:59.151317 containerd[2727]: time="2025-05-13T23:49:59.151303640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 23:49:59.151336 containerd[2727]: time="2025-05-13T23:49:59.151322600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 23:49:59.151353 containerd[2727]: time="2025-05-13T23:49:59.151333440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 23:49:59.151353 containerd[2727]: time="2025-05-13T23:49:59.151345080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 23:49:59.151385 containerd[2727]: time="2025-05-13T23:49:59.151355280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 23:49:59.151385 containerd[2727]: time="2025-05-13T23:49:59.151366400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 23:49:59.151421 containerd[2727]: time="2025-05-13T23:49:59.151389640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 23:49:59.151421 containerd[2727]: time="2025-05-13T23:49:59.151401760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 23:49:59.151421 containerd[2727]: time="2025-05-13T23:49:59.151410840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 23:49:59.151468 containerd[2727]: time="2025-05-13T23:49:59.151450360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:49:59.151468 containerd[2727]: time="2025-05-13T23:49:59.151462480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 23:49:59.151504 containerd[2727]: time="2025-05-13T23:49:59.151471240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:49:59.151504 containerd[2727]: time="2025-05-13T23:49:59.151481440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 23:49:59.151504 containerd[2727]: time="2025-05-13T23:49:59.151489560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 23:49:59.151504 containerd[2727]: time="2025-05-13T23:49:59.151498800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 23:49:59.151567 containerd[2727]: time="2025-05-13T23:49:59.151508840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 23:49:59.151593 containerd[2727]: time="2025-05-13T23:49:59.151586000Z" level=info msg="runtime interface created"
May 13 23:49:59.151614 containerd[2727]: time="2025-05-13T23:49:59.151592600Z" level=info msg="created NRI interface"
May 13 23:49:59.151614 containerd[2727]: time="2025-05-13T23:49:59.151601320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 23:49:59.151614 containerd[2727]: time="2025-05-13T23:49:59.151613320Z" level=info msg="Connect containerd service"
May 13 23:49:59.151665 containerd[2727]: time="2025-05-13T23:49:59.151638080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 23:49:59.152879 containerd[2727]: time="2025-05-13T23:49:59.152855440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 23:49:59.161862 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 23:49:59.168605 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 23:49:59.196881 systemd[1]: issuegen.service: Deactivated successfully.
May 13 23:49:59.197113 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 23:49:59.203841 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 23:49:59.230136 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 23:49:59.236917 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 23:49:59.239299 containerd[2727]: time="2025-05-13T23:49:59.239254080Z" level=info msg="Start subscribing containerd event"
May 13 23:49:59.239332 containerd[2727]: time="2025-05-13T23:49:59.239315880Z" level=info msg="Start recovering state"
May 13 23:49:59.239407 containerd[2727]: time="2025-05-13T23:49:59.239398080Z" level=info msg="Start event monitor"
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239412800Z" level=info msg="Start cni network conf syncer for default"
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239422120Z" level=info msg="Start streaming server"
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239431640Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239440480Z" level=info msg="runtime interface starting up..."
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239446120Z" level=info msg="starting plugins..."
May 13 23:49:59.239468 containerd[2727]: time="2025-05-13T23:49:59.239458480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 23:49:59.240130 containerd[2727]: time="2025-05-13T23:49:59.240103520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 23:49:59.240179 containerd[2727]: time="2025-05-13T23:49:59.240167400Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 23:49:59.240232 containerd[2727]: time="2025-05-13T23:49:59.240220320Z" level=info msg="containerd successfully booted in 0.107743s"
May 13 23:49:59.243232 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 23:49:59.248196 systemd[1]: Reached target getty.target - Login Prompts.
May 13 23:49:59.253093 systemd[1]: Started containerd.service - containerd container runtime.
May 13 23:49:59.261404 tar[2724]: linux-arm64/LICENSE
May 13 23:49:59.261477 tar[2724]: linux-arm64/README.md
May 13 23:49:59.278474 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 23:49:59.362964 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889
May 13 23:49:59.377480 extend-filesystems[2711]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 13 23:49:59.377480 extend-filesystems[2711]: old_desc_blocks = 1, new_desc_blocks = 112
May 13 23:49:59.377480 extend-filesystems[2711]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long.
May 13 23:49:59.404137 extend-filesystems[2692]: Resized filesystem in /dev/nvme0n1p9
May 13 23:49:59.404137 extend-filesystems[2692]: Found nvme1n1
May 13 23:49:59.380113 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 23:49:59.380407 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 23:49:59.391562 systemd[1]: extend-filesystems.service: Consumed 203ms CPU time, 68.8M memory peak.
May 13 23:49:59.802110 coreos-metadata[2687]: May 13 23:49:59.802 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 13 23:49:59.802551 coreos-metadata[2687]: May 13 23:49:59.802 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 13 23:50:00.005966 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 13 23:50:00.022957 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link
May 13 23:50:00.024569 systemd-networkd[2632]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:ce:01.network.
May 13 23:50:00.095232 coreos-metadata[2764]: May 13 23:50:00.095 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 13 23:50:00.095654 coreos-metadata[2764]: May 13 23:50:00.095 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
May 13 23:50:00.630967 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 13 23:50:00.647965 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link
May 13 23:50:00.648317 systemd-networkd[2632]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
May 13 23:50:00.649596 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 23:50:00.650493 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 13 23:50:00.650036 systemd-networkd[2632]: enP1p1s0f0np0: Link UP
May 13 23:50:00.650283 systemd-networkd[2632]: enP1p1s0f0np0: Gained carrier
May 13 23:50:00.678365 systemd-networkd[2632]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:ce:00.network.
May 13 23:50:00.678630 systemd-networkd[2632]: enP1p1s0f1np1: Link UP
May 13 23:50:00.678834 systemd-networkd[2632]: enP1p1s0f1np1: Gained carrier
May 13 23:50:00.693224 systemd-networkd[2632]: bond0: Link UP
May 13 23:50:00.693424 systemd-networkd[2632]: bond0: Gained carrier
May 13 23:50:00.693577 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:00.694202 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:00.694441 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:00.694570 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:00.770965 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 13 23:50:00.789526 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex
May 13 23:50:00.789552 kernel: bond0: active interface up!
May 13 23:50:00.912962 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex
May 13 23:50:01.802648 coreos-metadata[2687]: May 13 23:50:01.802 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 13 23:50:02.095774 coreos-metadata[2764]: May 13 23:50:02.095 INFO Fetching https://metadata.packet.net/metadata: Attempt #3
May 13 23:50:02.151998 systemd-networkd[2632]: bond0: Gained IPv6LL
May 13 23:50:02.152416 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:02.344319 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:02.344425 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:02.346232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 23:50:02.351907 systemd[1]: Reached target network-online.target - Network is Online.
May 13 23:50:02.358844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:02.376727 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 23:50:02.408577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 23:50:02.944915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:02.950614 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:50:03.394960 kubelet[2839]: E0513 23:50:03.394913 2839 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:50:03.397322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:50:03.397461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:50:03.397788 systemd[1]: kubelet.service: Consumed 730ms CPU time, 255.5M memory peak.
May 13 23:50:04.296501 login[2808]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
May 13 23:50:04.297934 login[2809]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:04.307328 systemd-logind[2710]: New session 2 of user core.
May 13 23:50:04.308686 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 23:50:04.309961 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 23:50:04.336072 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 23:50:04.338424 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 23:50:04.359055 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2
May 13 23:50:04.359319 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity
May 13 23:50:04.363969 (systemd)[2868]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 23:50:04.365615 systemd-logind[2710]: New session c1 of user core.
May 13 23:50:04.455780 coreos-metadata[2764]: May 13 23:50:04.455 INFO Fetch successful
May 13 23:50:04.481501 systemd[2868]: Queued start job for default target default.target.
May 13 23:50:04.494105 systemd[2868]: Created slice app.slice - User Application Slice.
May 13 23:50:04.494131 systemd[2868]: Reached target paths.target - Paths.
May 13 23:50:04.494161 systemd[2868]: Reached target timers.target - Timers.
May 13 23:50:04.495376 systemd[2868]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 23:50:04.502237 unknown[2764]: wrote ssh authorized keys file for user: core
May 13 23:50:04.503502 systemd[2868]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 23:50:04.503554 systemd[2868]: Reached target sockets.target - Sockets.
May 13 23:50:04.503597 systemd[2868]: Reached target basic.target - Basic System.
May 13 23:50:04.503623 systemd[2868]: Reached target default.target - Main User Target.
May 13 23:50:04.503645 systemd[2868]: Startup finished in 133ms.
May 13 23:50:04.503992 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 23:50:04.507057 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 23:50:04.518322 update-ssh-keys[2879]: Updated "/home/core/.ssh/authorized_keys"
May 13 23:50:04.519510 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 13 23:50:04.521109 systemd[1]: Finished sshkeys.service.
May 13 23:50:04.599141 coreos-metadata[2687]: May 13 23:50:04.599 INFO Fetch successful
May 13 23:50:04.659845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 23:50:04.660995 systemd[1]: Started sshd@0-147.28.150.5:22-139.178.68.195:47636.service - OpenSSH per-connection server daemon (139.178.68.195:47636).
May 13 23:50:04.663443 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 13 23:50:04.665149 systemd[1]: Starting packet-phone-home.service - Report Success to Packet...
May 13 23:50:05.058030 systemd[1]: Finished packet-phone-home.service - Report Success to Packet.
May 13 23:50:05.058478 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 23:50:05.058602 systemd[1]: Startup finished in 3.221s (kernel) + 18.770s (initrd) + 9.938s (userspace) = 31.929s.
May 13 23:50:05.080493 sshd[2900]: Accepted publickey for core from 139.178.68.195 port 47636 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:05.081558 sshd-session[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:05.084954 systemd-logind[2710]: New session 3 of user core.
May 13 23:50:05.098082 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 23:50:05.296799 login[2808]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:05.299725 systemd-logind[2710]: New session 1 of user core.
May 13 23:50:05.310101 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 23:50:05.444769 systemd[1]: Started sshd@1-147.28.150.5:22-139.178.68.195:47652.service - OpenSSH per-connection server daemon (139.178.68.195:47652).
May 13 23:50:05.860298 sshd[2920]: Accepted publickey for core from 139.178.68.195 port 47652 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:05.861555 sshd-session[2920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:05.864516 systemd-logind[2710]: New session 4 of user core.
May 13 23:50:05.873116 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 23:50:06.153093 sshd[2922]: Connection closed by 139.178.68.195 port 47652
May 13 23:50:06.153569 sshd-session[2920]: pam_unix(sshd:session): session closed for user core
May 13 23:50:06.157258 systemd[1]: sshd@1-147.28.150.5:22-139.178.68.195:47652.service: Deactivated successfully.
May 13 23:50:06.159309 systemd[1]: session-4.scope: Deactivated successfully.
May 13 23:50:06.160345 systemd-logind[2710]: Session 4 logged out. Waiting for processes to exit.
May 13 23:50:06.160890 systemd-logind[2710]: Removed session 4.
May 13 23:50:06.228625 systemd[1]: Started sshd@2-147.28.150.5:22-139.178.68.195:47666.service - OpenSSH per-connection server daemon (139.178.68.195:47666).
May 13 23:50:06.645169 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:06.658176 sshd[2929]: Accepted publickey for core from 139.178.68.195 port 47666 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:06.659321 sshd-session[2929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:06.662053 systemd-logind[2710]: New session 5 of user core.
May 13 23:50:06.673103 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 23:50:06.958739 sshd[2931]: Connection closed by 139.178.68.195 port 47666
May 13 23:50:06.959092 sshd-session[2929]: pam_unix(sshd:session): session closed for user core
May 13 23:50:06.961651 systemd[1]: sshd@2-147.28.150.5:22-139.178.68.195:47666.service: Deactivated successfully.
May 13 23:50:06.963139 systemd[1]: session-5.scope: Deactivated successfully.
May 13 23:50:06.963610 systemd-logind[2710]: Session 5 logged out. Waiting for processes to exit.
May 13 23:50:06.964155 systemd-logind[2710]: Removed session 5.
May 13 23:50:07.033606 systemd[1]: Started sshd@3-147.28.150.5:22-139.178.68.195:47668.service - OpenSSH per-connection server daemon (139.178.68.195:47668).
May 13 23:50:07.467322 sshd[2937]: Accepted publickey for core from 139.178.68.195 port 47668 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:07.468365 sshd-session[2937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:07.471180 systemd-logind[2710]: New session 6 of user core.
May 13 23:50:07.486106 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 23:50:07.773755 sshd[2939]: Connection closed by 139.178.68.195 port 47668
May 13 23:50:07.774312 sshd-session[2937]: pam_unix(sshd:session): session closed for user core
May 13 23:50:07.777935 systemd[1]: sshd@3-147.28.150.5:22-139.178.68.195:47668.service: Deactivated successfully.
May 13 23:50:07.780625 systemd[1]: session-6.scope: Deactivated successfully.
May 13 23:50:07.781166 systemd-logind[2710]: Session 6 logged out. Waiting for processes to exit.
May 13 23:50:07.781711 systemd-logind[2710]: Removed session 6.
May 13 23:50:07.843684 systemd[1]: Started sshd@4-147.28.150.5:22-139.178.68.195:47682.service - OpenSSH per-connection server daemon (139.178.68.195:47682).
May 13 23:50:08.257062 sshd[2945]: Accepted publickey for core from 139.178.68.195 port 47682 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:08.258236 sshd-session[2945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:08.261130 systemd-logind[2710]: New session 7 of user core.
May 13 23:50:08.272104 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 23:50:08.492741 sudo[2949]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 23:50:08.492999 sudo[2949]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:50:08.515727 sudo[2949]: pam_unix(sudo:session): session closed for user root
May 13 23:50:08.577102 sshd[2947]: Connection closed by 139.178.68.195 port 47682
May 13 23:50:08.577763 sshd-session[2945]: pam_unix(sshd:session): session closed for user core
May 13 23:50:08.581359 systemd[1]: sshd@4-147.28.150.5:22-139.178.68.195:47682.service: Deactivated successfully.
May 13 23:50:08.584371 systemd[1]: session-7.scope: Deactivated successfully.
May 13 23:50:08.584900 systemd-logind[2710]: Session 7 logged out. Waiting for processes to exit.
May 13 23:50:08.585508 systemd-logind[2710]: Removed session 7.
May 13 23:50:08.651740 systemd[1]: Started sshd@5-147.28.150.5:22-139.178.68.195:47696.service - OpenSSH per-connection server daemon (139.178.68.195:47696).
May 13 23:50:09.075687 sshd[2956]: Accepted publickey for core from 139.178.68.195 port 47696 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:09.076769 sshd-session[2956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:09.079822 systemd-logind[2710]: New session 8 of user core.
May 13 23:50:09.088100 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 23:50:09.311687 sudo[2960]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 23:50:09.311944 sudo[2960]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:50:09.314280 sudo[2960]: pam_unix(sudo:session): session closed for user root
May 13 23:50:09.318484 sudo[2959]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 23:50:09.318730 sudo[2959]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:50:09.325897 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 23:50:09.371757 augenrules[2982]: No rules
May 13 23:50:09.372840 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 23:50:09.374990 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 23:50:09.375762 sudo[2959]: pam_unix(sudo:session): session closed for user root
May 13 23:50:09.439921 sshd[2958]: Connection closed by 139.178.68.195 port 47696
May 13 23:50:09.440550 sshd-session[2956]: pam_unix(sshd:session): session closed for user core
May 13 23:50:09.444332 systemd[1]: sshd@5-147.28.150.5:22-139.178.68.195:47696.service: Deactivated successfully.
May 13 23:50:09.446652 systemd[1]: session-8.scope: Deactivated successfully.
May 13 23:50:09.447235 systemd-logind[2710]: Session 8 logged out. Waiting for processes to exit.
May 13 23:50:09.447797 systemd-logind[2710]: Removed session 8.
May 13 23:50:09.511800 systemd[1]: Started sshd@6-147.28.150.5:22-139.178.68.195:47708.service - OpenSSH per-connection server daemon (139.178.68.195:47708).
May 13 23:50:09.941362 sshd[2992]: Accepted publickey for core from 139.178.68.195 port 47708 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:50:09.942399 sshd-session[2992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:50:09.945148 systemd-logind[2710]: New session 9 of user core.
May 13 23:50:09.955054 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 23:50:10.180068 sudo[2995]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 23:50:10.180341 sudo[2995]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 23:50:10.454324 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 23:50:10.467338 (dockerd)[3025]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 23:50:10.660961 dockerd[3025]: time="2025-05-13T23:50:10.660911000Z" level=info msg="Starting up"
May 13 23:50:10.662711 dockerd[3025]: time="2025-05-13T23:50:10.662676440Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 23:50:10.697212 dockerd[3025]: time="2025-05-13T23:50:10.697184400Z" level=info msg="Loading containers: start."
May 13 23:50:10.829963 kernel: Initializing XFRM netlink socket
May 13 23:50:10.831403 systemd-timesyncd[2634]: Network configuration changed, trying to establish connection.
May 13 23:50:12.425457 systemd-resolved[2633]: Clock change detected. Flushing caches.
May 13 23:50:12.425597 systemd-timesyncd[2634]: Contacted time server [2600:3c05::f03c:94ff:fe24:f6eb]:123 (2.flatcar.pool.ntp.org).
May 13 23:50:12.425643 systemd-timesyncd[2634]: Initial clock synchronization to Tue 2025-05-13 23:50:12.425416 UTC.
May 13 23:50:12.478906 systemd-networkd[2632]: docker0: Link UP
May 13 23:50:12.537093 dockerd[3025]: time="2025-05-13T23:50:12.537060455Z" level=info msg="Loading containers: done."
May 13 23:50:12.546666 dockerd[3025]: time="2025-05-13T23:50:12.546630015Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 23:50:12.546790 dockerd[3025]: time="2025-05-13T23:50:12.546700215Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 13 23:50:12.546887 dockerd[3025]: time="2025-05-13T23:50:12.546872895Z" level=info msg="Daemon has completed initialization"
May 13 23:50:12.565813 dockerd[3025]: time="2025-05-13T23:50:12.565777375Z" level=info msg="API listen on /run/docker.sock"
May 13 23:50:12.565874 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 23:50:13.247255 containerd[2727]: time="2025-05-13T23:50:13.247221695Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 23:50:13.279075 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3394548400-merged.mount: Deactivated successfully.
May 13 23:50:13.748138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181868451.mount: Deactivated successfully.
May 13 23:50:14.433226 containerd[2727]: time="2025-05-13T23:50:14.433159055Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 13 23:50:14.433519 containerd[2727]: time="2025-05-13T23:50:14.433242495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:14.434140 containerd[2727]: time="2025-05-13T23:50:14.434114935Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:14.436705 containerd[2727]: time="2025-05-13T23:50:14.436688895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:14.437716 containerd[2727]: time="2025-05-13T23:50:14.437690055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.19043028s"
May 13 23:50:14.437756 containerd[2727]: time="2025-05-13T23:50:14.437727815Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 23:50:14.454641 containerd[2727]: time="2025-05-13T23:50:14.454615255Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 23:50:15.130788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 23:50:15.132303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:15.250419 containerd[2727]: time="2025-05-13T23:50:15.250382015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:15.250527 containerd[2727]: time="2025-05-13T23:50:15.250394055Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 13 23:50:15.251386 containerd[2727]: time="2025-05-13T23:50:15.251363575Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:15.253724 containerd[2727]: time="2025-05-13T23:50:15.253706775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:15.254677 containerd[2727]: time="2025-05-13T23:50:15.254661575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 800.01276ms"
May 13 23:50:15.254704 containerd[2727]: time="2025-05-13T23:50:15.254685015Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 23:50:15.261272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:15.264809 (kubelet)[3378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:50:15.271962 containerd[2727]: time="2025-05-13T23:50:15.271936375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 23:50:15.300220 kubelet[3378]: E0513 23:50:15.300184 3378 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:50:15.303141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:50:15.303273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:50:15.304960 systemd[1]: kubelet.service: Consumed 142ms CPU time, 112.4M memory peak.
May 13 23:50:16.319439 containerd[2727]: time="2025-05-13T23:50:16.319399015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:16.319776 containerd[2727]: time="2025-05-13T23:50:16.319447615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 13 23:50:16.320260 containerd[2727]: time="2025-05-13T23:50:16.320239855Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:16.322582 containerd[2727]: time="2025-05-13T23:50:16.322565815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:16.323561 containerd[2727]: time="2025-05-13T23:50:16.323531975Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.0515594s"
May 13 23:50:16.323585 containerd[2727]: time="2025-05-13T23:50:16.323568375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 23:50:16.340731 containerd[2727]: time="2025-05-13T23:50:16.340705855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 23:50:17.056767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786338714.mount: Deactivated successfully.
May 13 23:50:17.493079 containerd[2727]: time="2025-05-13T23:50:17.493046375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:17.493289 containerd[2727]: time="2025-05-13T23:50:17.493114975Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 13 23:50:17.493780 containerd[2727]: time="2025-05-13T23:50:17.493762495Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:17.495268 containerd[2727]: time="2025-05-13T23:50:17.495244495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:17.495864 containerd[2727]: time="2025-05-13T23:50:17.495840775Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.15510312s"
May 13 23:50:17.495905 containerd[2727]: time="2025-05-13T23:50:17.495872495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 23:50:17.512492 containerd[2727]: time="2025-05-13T23:50:17.512459415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 23:50:17.852960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702556204.mount: Deactivated successfully.
May 13 23:50:18.590245 containerd[2727]: time="2025-05-13T23:50:18.590144695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.590245 containerd[2727]: time="2025-05-13T23:50:18.590129535Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 13 23:50:18.591157 containerd[2727]: time="2025-05-13T23:50:18.591128775Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.593534 containerd[2727]: time="2025-05-13T23:50:18.593499055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.594569 containerd[2727]: time="2025-05-13T23:50:18.594523055Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.08202836s"
May 13 23:50:18.594569 containerd[2727]: time="2025-05-13T23:50:18.594560055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 23:50:18.610638 containerd[2727]: time="2025-05-13T23:50:18.610557575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 23:50:18.833225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059653485.mount: Deactivated successfully.
May 13 23:50:18.833639 containerd[2727]: time="2025-05-13T23:50:18.833610095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.833731 containerd[2727]: time="2025-05-13T23:50:18.833688815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 13 23:50:18.834351 containerd[2727]: time="2025-05-13T23:50:18.834333735Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.835925 containerd[2727]: time="2025-05-13T23:50:18.835904215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:18.836590 containerd[2727]: time="2025-05-13T23:50:18.836571215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 225.98084ms"
May 13 23:50:18.836621 containerd[2727]: time="2025-05-13T23:50:18.836595655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 23:50:18.852848 containerd[2727]: time="2025-05-13T23:50:18.852794015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 23:50:19.129081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513603463.mount: Deactivated successfully.
May 13 23:50:21.825848 containerd[2727]: time="2025-05-13T23:50:21.825805895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:21.826137 containerd[2727]: time="2025-05-13T23:50:21.825830335Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 13 23:50:21.826800 containerd[2727]: time="2025-05-13T23:50:21.826772695Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:21.829285 containerd[2727]: time="2025-05-13T23:50:21.829262935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:21.830317 containerd[2727]: time="2025-05-13T23:50:21.830294655Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.97747216s"
May 13 23:50:21.830338 containerd[2727]: time="2025-05-13T23:50:21.830324335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 23:50:25.380846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 23:50:25.382762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:25.493677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:25.496965 (kubelet)[3750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 23:50:25.529162 kubelet[3750]: E0513 23:50:25.529134 3750 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 23:50:25.531552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 23:50:25.531681 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 23:50:25.532101 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.9M memory peak.
May 13 23:50:27.840953 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:27.841163 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.9M memory peak.
May 13 23:50:27.843522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:27.867316 systemd[1]: Reload requested from client PID 3776 ('systemctl') (unit session-9.scope)...
May 13 23:50:27.867327 systemd[1]: Reloading...
May 13 23:50:27.943896 zram_generator::config[3827]: No configuration found.
May 13 23:50:28.032713 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:50:28.123194 systemd[1]: Reloading finished in 255 ms.
May 13 23:50:28.173513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:28.176307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:28.176670 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:50:28.176869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:28.176909 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.5M memory peak.
May 13 23:50:28.178382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:28.275945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:28.279471 (kubelet)[3892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:50:28.311198 kubelet[3892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:50:28.311198 kubelet[3892]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:50:28.311198 kubelet[3892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:50:28.312228 kubelet[3892]: I0513 23:50:28.312192 3892 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:50:28.811318 kubelet[3892]: I0513 23:50:28.811293 3892 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:50:28.811318 kubelet[3892]: I0513 23:50:28.811314 3892 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:50:28.811578 kubelet[3892]: I0513 23:50:28.811565 3892 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:50:28.825632 kubelet[3892]: E0513 23:50:28.825614 3892 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.150.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.825774 kubelet[3892]: I0513 23:50:28.825750 3892 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:50:28.851443 kubelet[3892]: I0513 23:50:28.851423 3892 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 23:50:28.861767 kubelet[3892]: I0513 23:50:28.861730 3892 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:50:28.861920 kubelet[3892]: I0513 23:50:28.861766 3892 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-52b3733d51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:50:28.862005 kubelet[3892]: I0513 23:50:28.861994 3892 topology_manager.go:138] "Creating topology manager with none policy" May 
13 23:50:28.862005 kubelet[3892]: I0513 23:50:28.862005 3892 container_manager_linux.go:301] "Creating device plugin manager" May 13 23:50:28.862262 kubelet[3892]: I0513 23:50:28.862251 3892 state_mem.go:36] "Initialized new in-memory state store" May 13 23:50:28.863209 kubelet[3892]: I0513 23:50:28.863193 3892 kubelet.go:400] "Attempting to sync node with API server" May 13 23:50:28.863236 kubelet[3892]: I0513 23:50:28.863211 3892 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:50:28.863530 kubelet[3892]: I0513 23:50:28.863520 3892 kubelet.go:312] "Adding apiserver pod source" May 13 23:50:28.863677 kubelet[3892]: I0513 23:50:28.863668 3892 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:50:28.864647 kubelet[3892]: I0513 23:50:28.864630 3892 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 13 23:50:28.865245 kubelet[3892]: I0513 23:50:28.865226 3892 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:50:28.865938 kubelet[3892]: W0513 23:50:28.865376 3892 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 13 23:50:28.866010 kubelet[3892]: W0513 23:50:28.865964 3892 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.150.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.866032 kubelet[3892]: E0513 23:50:28.866026 3892 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.150.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.866080 kubelet[3892]: W0513 23:50:28.866032 3892 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-52b3733d51&limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.866105 kubelet[3892]: E0513 23:50:28.866095 3892 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.150.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-52b3733d51&limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.866719 kubelet[3892]: I0513 23:50:28.866705 3892 server.go:1264] "Started kubelet" May 13 23:50:28.866946 kubelet[3892]: I0513 23:50:28.866910 3892 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:50:28.866986 kubelet[3892]: I0513 23:50:28.866935 3892 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:50:28.867203 kubelet[3892]: I0513 23:50:28.867189 3892 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:50:28.867833 kubelet[3892]: I0513 23:50:28.867814 3892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" 
May 13 23:50:28.868048 kubelet[3892]: I0513 23:50:28.868033 3892 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:50:28.868219 kubelet[3892]: I0513 23:50:28.868197 3892 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:50:28.868251 kubelet[3892]: W0513 23:50:28.868191 3892 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.150.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.868274 kubelet[3892]: E0513 23:50:28.868253 3892 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.150.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.868319 kubelet[3892]: E0513 23:50:28.868288 3892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-52b3733d51?timeout=10s\": dial tcp 147.28.150.5:6443: connect: connection refused" interval="200ms" May 13 23:50:28.868365 kubelet[3892]: I0513 23:50:28.868348 3892 factory.go:221] Registration of the systemd container factory successfully May 13 23:50:28.868460 kubelet[3892]: I0513 23:50:28.868442 3892 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:50:28.868794 kubelet[3892]: E0513 23:50:28.868637 3892 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.150.5:6443/api/v1/namespaces/default/events\": dial tcp 147.28.150.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-52b3733d51.183f3b29442f251f default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-52b3733d51,UID:ci-4284.0.0-n-52b3733d51,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-52b3733d51,},FirstTimestamp:2025-05-13 23:50:28.866680095 +0000 UTC m=+0.584407361,LastTimestamp:2025-05-13 23:50:28.866680095 +0000 UTC m=+0.584407361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-52b3733d51,}" May 13 23:50:28.869088 kubelet[3892]: I0513 23:50:28.869072 3892 server.go:455] "Adding debug handlers to kubelet server" May 13 23:50:28.870005 kubelet[3892]: I0513 23:50:28.869991 3892 reconciler.go:26] "Reconciler: start to sync state" May 13 23:50:28.870220 kubelet[3892]: I0513 23:50:28.870205 3892 factory.go:221] Registration of the containerd container factory successfully May 13 23:50:28.870326 kubelet[3892]: E0513 23:50:28.870309 3892 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:50:28.881106 kubelet[3892]: I0513 23:50:28.881072 3892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:50:28.882115 kubelet[3892]: I0513 23:50:28.882104 3892 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 23:50:28.882260 kubelet[3892]: I0513 23:50:28.882253 3892 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:50:28.882279 kubelet[3892]: I0513 23:50:28.882271 3892 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:50:28.882329 kubelet[3892]: E0513 23:50:28.882313 3892 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:50:28.882767 kubelet[3892]: W0513 23:50:28.882726 3892 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.150.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.882790 kubelet[3892]: E0513 23:50:28.882782 3892 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.150.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused May 13 23:50:28.886732 kubelet[3892]: I0513 23:50:28.886711 3892 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:50:28.886732 kubelet[3892]: I0513 23:50:28.886726 3892 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:50:28.886817 kubelet[3892]: I0513 23:50:28.886741 3892 state_mem.go:36] "Initialized new in-memory state store" May 13 23:50:28.887397 kubelet[3892]: I0513 23:50:28.887382 3892 policy_none.go:49] "None policy: Start" May 13 23:50:28.887784 kubelet[3892]: I0513 23:50:28.887771 3892 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:50:28.887820 kubelet[3892]: I0513 23:50:28.887789 3892 state_mem.go:35] "Initializing new in-memory state store" May 13 23:50:28.893328 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 13 23:50:28.914955 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:50:28.917394 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 23:50:28.927527 kubelet[3892]: I0513 23:50:28.927503 3892 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:50:28.927729 kubelet[3892]: I0513 23:50:28.927697 3892 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:50:28.927814 kubelet[3892]: I0513 23:50:28.927804 3892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:50:28.928692 kubelet[3892]: E0513 23:50:28.928676 3892 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-52b3733d51\" not found" May 13 23:50:28.969555 kubelet[3892]: I0513 23:50:28.969532 3892 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:28.969847 kubelet[3892]: E0513 23:50:28.969818 3892 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.150.5:6443/api/v1/nodes\": dial tcp 147.28.150.5:6443: connect: connection refused" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:28.983023 kubelet[3892]: I0513 23:50:28.982990 3892 topology_manager.go:215] "Topology Admit Handler" podUID="e458a91072b84d9e0dde60aa2a3b5292" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-52b3733d51" May 13 23:50:28.984326 kubelet[3892]: I0513 23:50:28.984305 3892 topology_manager.go:215] "Topology Admit Handler" podUID="70769d0c51f0d9f24164d6362c30b1b3" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-52b3733d51" May 13 23:50:28.985729 kubelet[3892]: I0513 23:50:28.985710 3892 topology_manager.go:215] "Topology Admit Handler" 
podUID="64f3076e666ebb271657449f59419e20" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:28.989695 systemd[1]: Created slice kubepods-burstable-pode458a91072b84d9e0dde60aa2a3b5292.slice - libcontainer container kubepods-burstable-pode458a91072b84d9e0dde60aa2a3b5292.slice. May 13 23:50:29.002742 systemd[1]: Created slice kubepods-burstable-pod70769d0c51f0d9f24164d6362c30b1b3.slice - libcontainer container kubepods-burstable-pod70769d0c51f0d9f24164d6362c30b1b3.slice. May 13 23:50:29.018065 systemd[1]: Created slice kubepods-burstable-pod64f3076e666ebb271657449f59419e20.slice - libcontainer container kubepods-burstable-pod64f3076e666ebb271657449f59419e20.slice. May 13 23:50:29.068683 kubelet[3892]: E0513 23:50:29.068622 3892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-52b3733d51?timeout=10s\": dial tcp 147.28.150.5:6443: connect: connection refused" interval="400ms" May 13 23:50:29.071825 kubelet[3892]: I0513 23:50:29.071790 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: \"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.071947 kubelet[3892]: I0513 23:50:29.071850 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: \"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.071947 kubelet[3892]: I0513 23:50:29.071887 3892 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.071947 kubelet[3892]: I0513 23:50:29.071920 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.071947 kubelet[3892]: I0513 23:50:29.071937 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.072094 kubelet[3892]: I0513 23:50:29.071953 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e458a91072b84d9e0dde60aa2a3b5292-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-52b3733d51\" (UID: \"e458a91072b84d9e0dde60aa2a3b5292\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.072094 kubelet[3892]: I0513 23:50:29.071969 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: 
\"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.072094 kubelet[3892]: I0513 23:50:29.071989 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.072094 kubelet[3892]: I0513 23:50:29.072004 3892 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" May 13 23:50:29.171389 kubelet[3892]: I0513 23:50:29.171367 3892 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:29.171616 kubelet[3892]: E0513 23:50:29.171595 3892 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.150.5:6443/api/v1/nodes\": dial tcp 147.28.150.5:6443: connect: connection refused" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:29.301567 containerd[2727]: time="2025-05-13T23:50:29.301535415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-52b3733d51,Uid:e458a91072b84d9e0dde60aa2a3b5292,Namespace:kube-system,Attempt:0,}" May 13 23:50:29.305016 containerd[2727]: time="2025-05-13T23:50:29.304989775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-52b3733d51,Uid:70769d0c51f0d9f24164d6362c30b1b3,Namespace:kube-system,Attempt:0,}" May 13 23:50:29.320539 containerd[2727]: time="2025-05-13T23:50:29.320472775Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-52b3733d51,Uid:64f3076e666ebb271657449f59419e20,Namespace:kube-system,Attempt:0,}" May 13 23:50:29.469624 kubelet[3892]: E0513 23:50:29.469571 3892 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-52b3733d51?timeout=10s\": dial tcp 147.28.150.5:6443: connect: connection refused" interval="800ms" May 13 23:50:29.573432 kubelet[3892]: I0513 23:50:29.573355 3892 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:29.573625 kubelet[3892]: E0513 23:50:29.573596 3892 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.150.5:6443/api/v1/nodes\": dial tcp 147.28.150.5:6443: connect: connection refused" node="ci-4284.0.0-n-52b3733d51" May 13 23:50:29.625690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005817663.mount: Deactivated successfully. 
May 13 23:50:29.626853 containerd[2727]: time="2025-05-13T23:50:29.626798895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:50:29.627519 containerd[2727]: time="2025-05-13T23:50:29.627470815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 13 23:50:29.628030 containerd[2727]: time="2025-05-13T23:50:29.628006575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:50:29.628614 containerd[2727]: time="2025-05-13T23:50:29.628394215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:50:29.628614 containerd[2727]: time="2025-05-13T23:50:29.628555775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 13 23:50:29.628713 containerd[2727]: time="2025-05-13T23:50:29.628649695Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:50:29.631730 containerd[2727]: time="2025-05-13T23:50:29.631703335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:50:29.632381 containerd[2727]: time="2025-05-13T23:50:29.632359615Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 321.14804ms" May 13 23:50:29.632782 containerd[2727]: time="2025-05-13T23:50:29.632765135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 321.39944ms" May 13 23:50:29.633615 containerd[2727]: time="2025-05-13T23:50:29.633555255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:50:29.634211 containerd[2727]: time="2025-05-13T23:50:29.634182215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 313.02816ms" May 13 23:50:29.641906 containerd[2727]: time="2025-05-13T23:50:29.641863735Z" level=info msg="connecting to shim b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b" address="unix:///run/containerd/s/2c492c4c3c521bfcab34bc307ee42fedc79515649231107021983d2d230f58b9" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:29.641977 containerd[2727]: time="2025-05-13T23:50:29.641910175Z" level=info msg="connecting to shim 08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813" address="unix:///run/containerd/s/e22f7e589580dcb371f15dc2b9290cd986e4e0cd19913ab704cde36e040d8caa" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:29.642000 containerd[2727]: 
time="2025-05-13T23:50:29.641980455Z" level=info msg="connecting to shim 3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608" address="unix:///run/containerd/s/1cbb6d5bc7793e75e7cf13cbfd78126b310f273a8363432032aae01728d51382" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:29.668097 systemd[1]: Started cri-containerd-08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813.scope - libcontainer container 08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813. May 13 23:50:29.669412 systemd[1]: Started cri-containerd-3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608.scope - libcontainer container 3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608. May 13 23:50:29.670721 systemd[1]: Started cri-containerd-b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b.scope - libcontainer container b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b. May 13 23:50:29.694233 containerd[2727]: time="2025-05-13T23:50:29.694085095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-52b3733d51,Uid:e458a91072b84d9e0dde60aa2a3b5292,Namespace:kube-system,Attempt:0,} returns sandbox id \"08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813\"" May 13 23:50:29.694346 containerd[2727]: time="2025-05-13T23:50:29.694311775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-52b3733d51,Uid:64f3076e666ebb271657449f59419e20,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608\"" May 13 23:50:29.695577 containerd[2727]: time="2025-05-13T23:50:29.695552855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-52b3733d51,Uid:70769d0c51f0d9f24164d6362c30b1b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b\"" May 13 23:50:29.696899 
containerd[2727]: time="2025-05-13T23:50:29.696867335Z" level=info msg="CreateContainer within sandbox \"08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 23:50:29.696958 containerd[2727]: time="2025-05-13T23:50:29.696914415Z" level=info msg="CreateContainer within sandbox \"3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 23:50:29.697246 containerd[2727]: time="2025-05-13T23:50:29.697224415Z" level=info msg="CreateContainer within sandbox \"b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 23:50:29.703177 containerd[2727]: time="2025-05-13T23:50:29.703144615Z" level=info msg="Container 85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:29.703688 containerd[2727]: time="2025-05-13T23:50:29.703664575Z" level=info msg="Container 7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:29.704092 containerd[2727]: time="2025-05-13T23:50:29.704069095Z" level=info msg="Container 88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:29.707154 containerd[2727]: time="2025-05-13T23:50:29.707130855Z" level=info msg="CreateContainer within sandbox \"08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61\"" May 13 23:50:29.707244 containerd[2727]: time="2025-05-13T23:50:29.707218015Z" level=info msg="CreateContainer within sandbox \"3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c\""
May 13 23:50:29.707659 containerd[2727]: time="2025-05-13T23:50:29.707640815Z" level=info msg="StartContainer for \"85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61\""
May 13 23:50:29.707694 containerd[2727]: time="2025-05-13T23:50:29.707675455Z" level=info msg="StartContainer for \"7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c\""
May 13 23:50:29.707764 containerd[2727]: time="2025-05-13T23:50:29.707656415Z" level=info msg="CreateContainer within sandbox \"b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59\""
May 13 23:50:29.708005 containerd[2727]: time="2025-05-13T23:50:29.707989535Z" level=info msg="StartContainer for \"88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59\""
May 13 23:50:29.708724 containerd[2727]: time="2025-05-13T23:50:29.708703935Z" level=info msg="connecting to shim 7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c" address="unix:///run/containerd/s/1cbb6d5bc7793e75e7cf13cbfd78126b310f273a8363432032aae01728d51382" protocol=ttrpc version=3
May 13 23:50:29.708812 containerd[2727]: time="2025-05-13T23:50:29.708786815Z" level=info msg="connecting to shim 85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61" address="unix:///run/containerd/s/e22f7e589580dcb371f15dc2b9290cd986e4e0cd19913ab704cde36e040d8caa" protocol=ttrpc version=3
May 13 23:50:29.708971 containerd[2727]: time="2025-05-13T23:50:29.708945735Z" level=info msg="connecting to shim 88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59" address="unix:///run/containerd/s/2c492c4c3c521bfcab34bc307ee42fedc79515649231107021983d2d230f58b9" protocol=ttrpc version=3
May 13 23:50:29.734006 systemd[1]: Started cri-containerd-7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c.scope - libcontainer container 7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c.
May 13 23:50:29.735100 systemd[1]: Started cri-containerd-85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61.scope - libcontainer container 85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61.
May 13 23:50:29.736219 systemd[1]: Started cri-containerd-88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59.scope - libcontainer container 88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59.
May 13 23:50:29.744068 kubelet[3892]: W0513 23:50:29.744018 3892 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-52b3733d51&limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused
May 13 23:50:29.744121 kubelet[3892]: E0513 23:50:29.744074 3892 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.150.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-52b3733d51&limit=500&resourceVersion=0": dial tcp 147.28.150.5:6443: connect: connection refused
May 13 23:50:29.763177 containerd[2727]: time="2025-05-13T23:50:29.763150815Z" level=info msg="StartContainer for \"85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61\" returns successfully"
May 13 23:50:29.763395 containerd[2727]: time="2025-05-13T23:50:29.763377815Z" level=info msg="StartContainer for \"88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59\" returns successfully"
May 13 23:50:29.766127 containerd[2727]: time="2025-05-13T23:50:29.766103535Z" level=info msg="StartContainer for \"7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c\" returns successfully"
May 13 23:50:30.375792 kubelet[3892]: I0513 23:50:30.375768 3892 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:31.273015 kubelet[3892]: E0513 23:50:31.272981 3892 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-52b3733d51\" not found" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:31.375400 kubelet[3892]: I0513 23:50:31.375370 3892 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:31.381991 kubelet[3892]: E0513 23:50:31.381964 3892 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-52b3733d51\" not found"
May 13 23:50:31.482236 kubelet[3892]: E0513 23:50:31.482217 3892 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-52b3733d51\" not found"
May 13 23:50:31.865455 kubelet[3892]: I0513 23:50:31.865428 3892 apiserver.go:52] "Watching apiserver"
May 13 23:50:31.868369 kubelet[3892]: I0513 23:50:31.868350 3892 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 23:50:31.895301 kubelet[3892]: E0513 23:50:31.895276 3892 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.206434 systemd[1]: Reload requested from client PID 4324 ('systemctl') (unit session-9.scope)...
May 13 23:50:33.206445 systemd[1]: Reloading...
May 13 23:50:33.283927 zram_generator::config[4374]: No configuration found.
May 13 23:50:33.381506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 23:50:33.482901 systemd[1]: Reloading finished in 276 ms.
May 13 23:50:33.505364 kubelet[3892]: I0513 23:50:33.505323 3892 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:50:33.505462 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:33.519696 systemd[1]: kubelet.service: Deactivated successfully.
May 13 23:50:33.520801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:33.520860 systemd[1]: kubelet.service: Consumed 983ms CPU time, 136.5M memory peak.
May 13 23:50:33.522633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 23:50:33.653879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 23:50:33.657303 (kubelet)[4434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 23:50:33.688296 kubelet[4434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:50:33.688296 kubelet[4434]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 23:50:33.688296 kubelet[4434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 23:50:33.688546 kubelet[4434]: I0513 23:50:33.688342 4434 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 23:50:33.692081 kubelet[4434]: I0513 23:50:33.692063 4434 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 23:50:33.692081 kubelet[4434]: I0513 23:50:33.692082 4434 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 23:50:33.692247 kubelet[4434]: I0513 23:50:33.692238 4434 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 23:50:33.693472 kubelet[4434]: I0513 23:50:33.693460 4434 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 23:50:33.694522 kubelet[4434]: I0513 23:50:33.694504 4434 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 23:50:33.715213 kubelet[4434]: I0513 23:50:33.715192 4434 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 23:50:33.715393 kubelet[4434]: I0513 23:50:33.715370 4434 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 23:50:33.715545 kubelet[4434]: I0513 23:50:33.715395 4434 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-52b3733d51","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 23:50:33.715610 kubelet[4434]: I0513 23:50:33.715553 4434 topology_manager.go:138] "Creating topology manager with none policy"
May 13 23:50:33.715610 kubelet[4434]: I0513 23:50:33.715561 4434 container_manager_linux.go:301] "Creating device plugin manager"
May 13 23:50:33.715610 kubelet[4434]: I0513 23:50:33.715595 4434 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:50:33.715686 kubelet[4434]: I0513 23:50:33.715679 4434 kubelet.go:400] "Attempting to sync node with API server"
May 13 23:50:33.715709 kubelet[4434]: I0513 23:50:33.715705 4434 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 23:50:33.715738 kubelet[4434]: I0513 23:50:33.715733 4434 kubelet.go:312] "Adding apiserver pod source"
May 13 23:50:33.715758 kubelet[4434]: I0513 23:50:33.715752 4434 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 23:50:33.716198 kubelet[4434]: I0513 23:50:33.716182 4434 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 13 23:50:33.716438 kubelet[4434]: I0513 23:50:33.716429 4434 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 23:50:33.716814 kubelet[4434]: I0513 23:50:33.716804 4434 server.go:1264] "Started kubelet"
May 13 23:50:33.716885 kubelet[4434]: I0513 23:50:33.716850 4434 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 23:50:33.716920 kubelet[4434]: I0513 23:50:33.716866 4434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 23:50:33.717070 kubelet[4434]: I0513 23:50:33.717060 4434 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 23:50:33.718524 kubelet[4434]: I0513 23:50:33.718502 4434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 23:50:33.718673 kubelet[4434]: I0513 23:50:33.718655 4434 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 23:50:33.718724 kubelet[4434]: I0513 23:50:33.718703 4434 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 23:50:33.719402 kubelet[4434]: I0513 23:50:33.719382 4434 reconciler.go:26] "Reconciler: start to sync state"
May 13 23:50:33.719512 kubelet[4434]: E0513 23:50:33.719494 4434 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 23:50:33.719585 kubelet[4434]: I0513 23:50:33.719571 4434 factory.go:221] Registration of the systemd container factory successfully
May 13 23:50:33.719681 kubelet[4434]: I0513 23:50:33.719667 4434 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 23:50:33.720345 kubelet[4434]: I0513 23:50:33.720331 4434 factory.go:221] Registration of the containerd container factory successfully
May 13 23:50:33.720377 kubelet[4434]: I0513 23:50:33.720367 4434 server.go:455] "Adding debug handlers to kubelet server"
May 13 23:50:33.725923 kubelet[4434]: I0513 23:50:33.725897 4434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 23:50:33.727211 kubelet[4434]: I0513 23:50:33.727183 4434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 23:50:33.727274 kubelet[4434]: I0513 23:50:33.727256 4434 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 23:50:33.727303 kubelet[4434]: I0513 23:50:33.727297 4434 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 23:50:33.727364 kubelet[4434]: E0513 23:50:33.727349 4434 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 23:50:33.750014 kubelet[4434]: I0513 23:50:33.749952 4434 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 23:50:33.750014 kubelet[4434]: I0513 23:50:33.749965 4434 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 23:50:33.750014 kubelet[4434]: I0513 23:50:33.749982 4434 state_mem.go:36] "Initialized new in-memory state store"
May 13 23:50:33.750118 kubelet[4434]: I0513 23:50:33.750108 4434 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 23:50:33.750139 kubelet[4434]: I0513 23:50:33.750118 4434 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 23:50:33.750139 kubelet[4434]: I0513 23:50:33.750137 4434 policy_none.go:49] "None policy: Start"
May 13 23:50:33.750574 kubelet[4434]: I0513 23:50:33.750565 4434 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 23:50:33.750595 kubelet[4434]: I0513 23:50:33.750581 4434 state_mem.go:35] "Initializing new in-memory state store"
May 13 23:50:33.750701 kubelet[4434]: I0513 23:50:33.750688 4434 state_mem.go:75] "Updated machine memory state"
May 13 23:50:33.753784 kubelet[4434]: I0513 23:50:33.753769 4434 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 23:50:33.753962 kubelet[4434]: I0513 23:50:33.753934 4434 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 23:50:33.754040 kubelet[4434]: I0513 23:50:33.754033 4434 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 23:50:33.821151 kubelet[4434]: I0513 23:50:33.821128 4434 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.828106 kubelet[4434]: I0513 23:50:33.828065 4434 topology_manager.go:215] "Topology Admit Handler" podUID="64f3076e666ebb271657449f59419e20" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.828185 kubelet[4434]: I0513 23:50:33.828175 4434 topology_manager.go:215] "Topology Admit Handler" podUID="e458a91072b84d9e0dde60aa2a3b5292" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.828222 kubelet[4434]: I0513 23:50:33.828207 4434 topology_manager.go:215] "Topology Admit Handler" podUID="70769d0c51f0d9f24164d6362c30b1b3" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.830217 kubelet[4434]: I0513 23:50:33.830203 4434 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.830279 kubelet[4434]: I0513 23:50:33.830272 4434 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.831136 kubelet[4434]: W0513 23:50:33.831124 4434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 23:50:33.831233 kubelet[4434]: W0513 23:50:33.831217 4434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 23:50:33.831310 kubelet[4434]: W0513 23:50:33.831298 4434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 23:50:33.920597 kubelet[4434]: I0513 23:50:33.920571 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920634 kubelet[4434]: I0513 23:50:33.920604 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920634 kubelet[4434]: I0513 23:50:33.920621 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920681 kubelet[4434]: I0513 23:50:33.920641 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920681 kubelet[4434]: I0513 23:50:33.920658 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e458a91072b84d9e0dde60aa2a3b5292-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-52b3733d51\" (UID: \"e458a91072b84d9e0dde60aa2a3b5292\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920681 kubelet[4434]: I0513 23:50:33.920676 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: \"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920804 kubelet[4434]: I0513 23:50:33.920693 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64f3076e666ebb271657449f59419e20-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-52b3733d51\" (UID: \"64f3076e666ebb271657449f59419e20\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920804 kubelet[4434]: I0513 23:50:33.920710 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: \"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51"
May 13 23:50:33.920804 kubelet[4434]: I0513 23:50:33.920728 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70769d0c51f0d9f24164d6362c30b1b3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-52b3733d51\" (UID: \"70769d0c51f0d9f24164d6362c30b1b3\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51"
May 13 23:50:34.716900 kubelet[4434]: I0513 23:50:34.716869 4434 apiserver.go:52] "Watching apiserver"
May 13 23:50:34.720170 kubelet[4434]: I0513 23:50:34.720158 4434 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 23:50:34.737569 kubelet[4434]: W0513 23:50:34.737544 4434 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 13 23:50:34.737628 kubelet[4434]: E0513 23:50:34.737594 4434 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284.0.0-n-52b3733d51\" already exists" pod="kube-system/kube-scheduler-ci-4284.0.0-n-52b3733d51"
May 13 23:50:34.747272 kubelet[4434]: I0513 23:50:34.747233 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-52b3733d51" podStartSLOduration=1.7472192949999998 podStartE2EDuration="1.747219295s" podCreationTimestamp="2025-05-13 23:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:50:34.747147295 +0000 UTC m=+1.086989921" watchObservedRunningTime="2025-05-13 23:50:34.747219295 +0000 UTC m=+1.087061921"
May 13 23:50:34.769487 kubelet[4434]: I0513 23:50:34.769437 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-52b3733d51" podStartSLOduration=1.769421775 podStartE2EDuration="1.769421775s" podCreationTimestamp="2025-05-13 23:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:50:34.769400335 +0000 UTC m=+1.109242961" watchObservedRunningTime="2025-05-13 23:50:34.769421775 +0000 UTC m=+1.109264401"
May 13 23:50:34.769554 kubelet[4434]: I0513 23:50:34.769534 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-52b3733d51" podStartSLOduration=1.7695283750000002 podStartE2EDuration="1.769528375s" podCreationTimestamp="2025-05-13 23:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:50:34.764135135 +0000 UTC m=+1.103977761" watchObservedRunningTime="2025-05-13 23:50:34.769528375 +0000 UTC m=+1.109371041"
May 13 23:50:38.126625 sudo[2995]: pam_unix(sudo:session): session closed for user root
May 13 23:50:38.190362 sshd[2994]: Connection closed by 139.178.68.195 port 47708
May 13 23:50:38.190732 sshd-session[2992]: pam_unix(sshd:session): session closed for user core
May 13 23:50:38.193924 systemd[1]: sshd@6-147.28.150.5:22-139.178.68.195:47708.service: Deactivated successfully.
May 13 23:50:38.195642 systemd[1]: session-9.scope: Deactivated successfully.
May 13 23:50:38.195843 systemd[1]: session-9.scope: Consumed 7.958s CPU time, 268.3M memory peak.
May 13 23:50:38.196856 systemd-logind[2710]: Session 9 logged out. Waiting for processes to exit.
May 13 23:50:38.197429 systemd-logind[2710]: Removed session 9.
May 13 23:50:45.728516 update_engine[2721]: I20250513 23:50:45.728451 2721 update_attempter.cc:509] Updating boot flags...
May 13 23:50:45.760906 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4683)
May 13 23:50:45.792905 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (4685)
May 13 23:50:46.623738 kubelet[4434]: I0513 23:50:46.623699 4434 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 23:50:46.624186 containerd[2727]: time="2025-05-13T23:50:46.624020569Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 23:50:46.624347 kubelet[4434]: I0513 23:50:46.624188 4434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 23:50:47.349683 kubelet[4434]: I0513 23:50:47.349648 4434 topology_manager.go:215] "Topology Admit Handler" podUID="d5097c51-5c14-47bf-b301-2674d3803350" podNamespace="kube-system" podName="kube-proxy-sj2d7"
May 13 23:50:47.354187 systemd[1]: Created slice kubepods-besteffort-podd5097c51_5c14_47bf_b301_2674d3803350.slice - libcontainer container kubepods-besteffort-podd5097c51_5c14_47bf_b301_2674d3803350.slice.
May 13 23:50:47.410302 kubelet[4434]: I0513 23:50:47.410280 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5097c51-5c14-47bf-b301-2674d3803350-xtables-lock\") pod \"kube-proxy-sj2d7\" (UID: \"d5097c51-5c14-47bf-b301-2674d3803350\") " pod="kube-system/kube-proxy-sj2d7"
May 13 23:50:47.410388 kubelet[4434]: I0513 23:50:47.410310 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89hsj\" (UniqueName: \"kubernetes.io/projected/d5097c51-5c14-47bf-b301-2674d3803350-kube-api-access-89hsj\") pod \"kube-proxy-sj2d7\" (UID: \"d5097c51-5c14-47bf-b301-2674d3803350\") " pod="kube-system/kube-proxy-sj2d7"
May 13 23:50:47.410388 kubelet[4434]: I0513 23:50:47.410340 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5097c51-5c14-47bf-b301-2674d3803350-kube-proxy\") pod \"kube-proxy-sj2d7\" (UID: \"d5097c51-5c14-47bf-b301-2674d3803350\") " pod="kube-system/kube-proxy-sj2d7"
May 13 23:50:47.410388 kubelet[4434]: I0513 23:50:47.410359 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5097c51-5c14-47bf-b301-2674d3803350-lib-modules\") pod \"kube-proxy-sj2d7\" (UID: \"d5097c51-5c14-47bf-b301-2674d3803350\") " pod="kube-system/kube-proxy-sj2d7"
May 13 23:50:47.517572 kubelet[4434]: E0513 23:50:47.517518 4434 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 13 23:50:47.517572 kubelet[4434]: E0513 23:50:47.517553 4434 projected.go:200] Error preparing data for projected volume kube-api-access-89hsj for pod kube-system/kube-proxy-sj2d7: configmap "kube-root-ca.crt" not found
May 13 23:50:47.517668 kubelet[4434]: E0513 23:50:47.517600 4434 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5097c51-5c14-47bf-b301-2674d3803350-kube-api-access-89hsj podName:d5097c51-5c14-47bf-b301-2674d3803350 nodeName:}" failed. No retries permitted until 2025-05-13 23:50:48.017583798 +0000 UTC m=+14.357426424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-89hsj" (UniqueName: "kubernetes.io/projected/d5097c51-5c14-47bf-b301-2674d3803350-kube-api-access-89hsj") pod "kube-proxy-sj2d7" (UID: "d5097c51-5c14-47bf-b301-2674d3803350") : configmap "kube-root-ca.crt" not found
May 13 23:50:47.705548 kubelet[4434]: I0513 23:50:47.705513 4434 topology_manager.go:215] "Topology Admit Handler" podUID="d0a1943c-3fc2-419a-b0f3-a87a8ec14101" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-f4q9j"
May 13 23:50:47.712787 systemd[1]: Created slice kubepods-besteffort-podd0a1943c_3fc2_419a_b0f3_a87a8ec14101.slice - libcontainer container kubepods-besteffort-podd0a1943c_3fc2_419a_b0f3_a87a8ec14101.slice.
May 13 23:50:47.713298 kubelet[4434]: I0513 23:50:47.713130 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0a1943c-3fc2-419a-b0f3-a87a8ec14101-var-lib-calico\") pod \"tigera-operator-797db67f8-f4q9j\" (UID: \"d0a1943c-3fc2-419a-b0f3-a87a8ec14101\") " pod="tigera-operator/tigera-operator-797db67f8-f4q9j"
May 13 23:50:47.713298 kubelet[4434]: I0513 23:50:47.713166 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dv7x\" (UniqueName: \"kubernetes.io/projected/d0a1943c-3fc2-419a-b0f3-a87a8ec14101-kube-api-access-2dv7x\") pod \"tigera-operator-797db67f8-f4q9j\" (UID: \"d0a1943c-3fc2-419a-b0f3-a87a8ec14101\") " pod="tigera-operator/tigera-operator-797db67f8-f4q9j"
May 13 23:50:48.022755 containerd[2727]: time="2025-05-13T23:50:48.022642498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-f4q9j,Uid:d0a1943c-3fc2-419a-b0f3-a87a8ec14101,Namespace:tigera-operator,Attempt:0,}"
May 13 23:50:48.030936 containerd[2727]: time="2025-05-13T23:50:48.030912286Z" level=info msg="connecting to shim a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01" address="unix:///run/containerd/s/3ef15ee7692603b9c034259393f65f5ff5e47befd9a1749d2c5722852c31826d" namespace=k8s.io protocol=ttrpc version=3
May 13 23:50:48.053008 systemd[1]: Started cri-containerd-a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01.scope - libcontainer container a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01.
May 13 23:50:48.077331 containerd[2727]: time="2025-05-13T23:50:48.077302464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-f4q9j,Uid:d0a1943c-3fc2-419a-b0f3-a87a8ec14101,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01\""
May 13 23:50:48.078565 containerd[2727]: time="2025-05-13T23:50:48.078548074Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 13 23:50:48.271069 containerd[2727]: time="2025-05-13T23:50:48.271043243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj2d7,Uid:d5097c51-5c14-47bf-b301-2674d3803350,Namespace:kube-system,Attempt:0,}"
May 13 23:50:48.278552 containerd[2727]: time="2025-05-13T23:50:48.278491024Z" level=info msg="connecting to shim e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9" address="unix:///run/containerd/s/e034e16b794ecab478a053ed4614084952369321b3e24d2f20d4e5bb9380277f" namespace=k8s.io protocol=ttrpc version=3
May 13 23:50:48.300060 systemd[1]: Started cri-containerd-e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9.scope - libcontainer container e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9.
May 13 23:50:48.317279 containerd[2727]: time="2025-05-13T23:50:48.317255780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sj2d7,Uid:d5097c51-5c14-47bf-b301-2674d3803350,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9\""
May 13 23:50:48.319242 containerd[2727]: time="2025-05-13T23:50:48.319220516Z" level=info msg="CreateContainer within sandbox \"e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 23:50:48.324612 containerd[2727]: time="2025-05-13T23:50:48.324586680Z" level=info msg="Container 9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:48.328397 containerd[2727]: time="2025-05-13T23:50:48.328374871Z" level=info msg="CreateContainer within sandbox \"e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69\""
May 13 23:50:48.328830 containerd[2727]: time="2025-05-13T23:50:48.328808314Z" level=info msg="StartContainer for \"9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69\""
May 13 23:50:48.330146 containerd[2727]: time="2025-05-13T23:50:48.330124005Z" level=info msg="connecting to shim 9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69" address="unix:///run/containerd/s/e034e16b794ecab478a053ed4614084952369321b3e24d2f20d4e5bb9380277f" protocol=ttrpc version=3
May 13 23:50:48.359003 systemd[1]: Started cri-containerd-9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69.scope - libcontainer container 9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69.
May 13 23:50:48.385576 containerd[2727]: time="2025-05-13T23:50:48.385552417Z" level=info msg="StartContainer for \"9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69\" returns successfully"
May 13 23:50:48.758193 kubelet[4434]: I0513 23:50:48.758151 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sj2d7" podStartSLOduration=1.7581362139999999 podStartE2EDuration="1.758136214s" podCreationTimestamp="2025-05-13 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:50:48.758025973 +0000 UTC m=+15.097868599" watchObservedRunningTime="2025-05-13 23:50:48.758136214 +0000 UTC m=+15.097978840"
May 13 23:50:48.894622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808192273.mount: Deactivated successfully.
May 13 23:50:49.852480 containerd[2727]: time="2025-05-13T23:50:49.852435102Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:49.852849 containerd[2727]: time="2025-05-13T23:50:49.852482702Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084"
May 13 23:50:49.853171 containerd[2727]: time="2025-05-13T23:50:49.853152627Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:49.854769 containerd[2727]: time="2025-05-13T23:50:49.854748639Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:50:49.855437 containerd[2727]: time="2025-05-13T23:50:49.855423125Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.776848251s"
May 13 23:50:49.855470 containerd[2727]: time="2025-05-13T23:50:49.855459045Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\""
May 13 23:50:49.857092 containerd[2727]: time="2025-05-13T23:50:49.857073217Z" level=info msg="CreateContainer within sandbox \"a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 13 23:50:49.860747 containerd[2727]: time="2025-05-13T23:50:49.860720685Z" level=info msg="Container 7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13: CDI devices from CRI Config.CDIDevices: []"
May 13 23:50:49.863420 containerd[2727]: time="2025-05-13T23:50:49.863394626Z" level=info msg="CreateContainer within sandbox \"a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13\""
May 13 23:50:49.863683 containerd[2727]: time="2025-05-13T23:50:49.863663388Z" level=info msg="StartContainer for \"7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13\""
May 13 23:50:49.864383 containerd[2727]: time="2025-05-13T23:50:49.864363033Z" level=info msg="connecting to shim 7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13" address="unix:///run/containerd/s/3ef15ee7692603b9c034259393f65f5ff5e47befd9a1749d2c5722852c31826d" protocol=ttrpc version=3
May 13 23:50:49.896058 systemd[1]: Started cri-containerd-7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13.scope - libcontainer container 7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13.
May 13 23:50:49.915834 containerd[2727]: time="2025-05-13T23:50:49.915804066Z" level=info msg="StartContainer for \"7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13\" returns successfully"
May 13 23:50:50.766517 kubelet[4434]: I0513 23:50:50.766468 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-f4q9j" podStartSLOduration=1.988677065 podStartE2EDuration="3.766453682s" podCreationTimestamp="2025-05-13 23:50:47 +0000 UTC" firstStartedPulling="2025-05-13 23:50:48.078260552 +0000 UTC m=+14.418103178" lastFinishedPulling="2025-05-13 23:50:49.856037169 +0000 UTC m=+16.195879795" observedRunningTime="2025-05-13 23:50:50.763853463 +0000 UTC m=+17.103696089" watchObservedRunningTime="2025-05-13 23:50:50.766453682 +0000 UTC m=+17.106296268"
May 13 23:50:53.611485 kubelet[4434]: I0513 23:50:53.611442 4434 topology_manager.go:215] "Topology Admit Handler" podUID="ad43d250-7096-4611-ae14-4fe8c5db7de6" podNamespace="calico-system" podName="calico-typha-7bc68bd765-9ttg2"
May 13 23:50:53.616556 systemd[1]: Created slice kubepods-besteffort-podad43d250_7096_4611_ae14_4fe8c5db7de6.slice - libcontainer container kubepods-besteffort-podad43d250_7096_4611_ae14_4fe8c5db7de6.slice.
May 13 23:50:53.646488 kubelet[4434]: I0513 23:50:53.646449 4434 topology_manager.go:215] "Topology Admit Handler" podUID="a69233ba-1d6c-4710-81cc-95f9dafb8ed8" podNamespace="calico-system" podName="calico-node-4tmkp"
May 13 23:50:53.652156 systemd[1]: Created slice kubepods-besteffort-poda69233ba_1d6c_4710_81cc_95f9dafb8ed8.slice - libcontainer container kubepods-besteffort-poda69233ba_1d6c_4710_81cc_95f9dafb8ed8.slice.
May 13 23:50:53.652710 kubelet[4434]: I0513 23:50:53.652690 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbsw2\" (UniqueName: \"kubernetes.io/projected/ad43d250-7096-4611-ae14-4fe8c5db7de6-kube-api-access-wbsw2\") pod \"calico-typha-7bc68bd765-9ttg2\" (UID: \"ad43d250-7096-4611-ae14-4fe8c5db7de6\") " pod="calico-system/calico-typha-7bc68bd765-9ttg2" May 13 23:50:53.652763 kubelet[4434]: I0513 23:50:53.652722 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ad43d250-7096-4611-ae14-4fe8c5db7de6-typha-certs\") pod \"calico-typha-7bc68bd765-9ttg2\" (UID: \"ad43d250-7096-4611-ae14-4fe8c5db7de6\") " pod="calico-system/calico-typha-7bc68bd765-9ttg2" May 13 23:50:53.652763 kubelet[4434]: I0513 23:50:53.652744 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad43d250-7096-4611-ae14-4fe8c5db7de6-tigera-ca-bundle\") pod \"calico-typha-7bc68bd765-9ttg2\" (UID: \"ad43d250-7096-4611-ae14-4fe8c5db7de6\") " pod="calico-system/calico-typha-7bc68bd765-9ttg2" May 13 23:50:53.753142 kubelet[4434]: I0513 23:50:53.753114 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-cni-net-dir\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753142 kubelet[4434]: I0513 23:50:53.753147 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-var-run-calico\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " 
pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753282 kubelet[4434]: I0513 23:50:53.753162 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-var-lib-calico\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753282 kubelet[4434]: I0513 23:50:53.753180 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-tigera-ca-bundle\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753282 kubelet[4434]: I0513 23:50:53.753206 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-lib-modules\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753282 kubelet[4434]: I0513 23:50:53.753220 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-node-certs\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753282 kubelet[4434]: I0513 23:50:53.753235 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-policysync\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753381 kubelet[4434]: I0513 
23:50:53.753249 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-flexvol-driver-host\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753381 kubelet[4434]: I0513 23:50:53.753295 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-xtables-lock\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753381 kubelet[4434]: I0513 23:50:53.753309 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-cni-bin-dir\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753381 kubelet[4434]: I0513 23:50:53.753326 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-cni-log-dir\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.753381 kubelet[4434]: I0513 23:50:53.753340 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsdt\" (UniqueName: \"kubernetes.io/projected/a69233ba-1d6c-4710-81cc-95f9dafb8ed8-kube-api-access-ctsdt\") pod \"calico-node-4tmkp\" (UID: \"a69233ba-1d6c-4710-81cc-95f9dafb8ed8\") " pod="calico-system/calico-node-4tmkp" May 13 23:50:53.754984 kubelet[4434]: I0513 23:50:53.754956 4434 topology_manager.go:215] "Topology 
Admit Handler" podUID="e357e965-48ea-458f-845a-7872eece6386" podNamespace="calico-system" podName="csi-node-driver-zgfch" May 13 23:50:53.755406 kubelet[4434]: E0513 23:50:53.755228 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgfch" podUID="e357e965-48ea-458f-845a-7872eece6386" May 13 23:50:53.853715 kubelet[4434]: I0513 23:50:53.853651 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncq8\" (UniqueName: \"kubernetes.io/projected/e357e965-48ea-458f-845a-7872eece6386-kube-api-access-bncq8\") pod \"csi-node-driver-zgfch\" (UID: \"e357e965-48ea-458f-845a-7872eece6386\") " pod="calico-system/csi-node-driver-zgfch" May 13 23:50:53.853881 kubelet[4434]: I0513 23:50:53.853862 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e357e965-48ea-458f-845a-7872eece6386-socket-dir\") pod \"csi-node-driver-zgfch\" (UID: \"e357e965-48ea-458f-845a-7872eece6386\") " pod="calico-system/csi-node-driver-zgfch" May 13 23:50:53.853954 kubelet[4434]: I0513 23:50:53.853940 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e357e965-48ea-458f-845a-7872eece6386-varrun\") pod \"csi-node-driver-zgfch\" (UID: \"e357e965-48ea-458f-845a-7872eece6386\") " pod="calico-system/csi-node-driver-zgfch" May 13 23:50:53.854129 kubelet[4434]: I0513 23:50:53.854089 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e357e965-48ea-458f-845a-7872eece6386-kubelet-dir\") pod \"csi-node-driver-zgfch\" (UID: 
\"e357e965-48ea-458f-845a-7872eece6386\") " pod="calico-system/csi-node-driver-zgfch" May 13 23:50:53.854377 kubelet[4434]: E0513 23:50:53.854359 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.854415 kubelet[4434]: W0513 23:50:53.854379 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.854415 kubelet[4434]: E0513 23:50:53.854396 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.854641 kubelet[4434]: E0513 23:50:53.854632 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.854673 kubelet[4434]: W0513 23:50:53.854641 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.854673 kubelet[4434]: E0513 23:50:53.854653 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.854855 kubelet[4434]: E0513 23:50:53.854847 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.854855 kubelet[4434]: W0513 23:50:53.854854 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.854926 kubelet[4434]: E0513 23:50:53.854865 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.855079 kubelet[4434]: E0513 23:50:53.855071 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.855079 kubelet[4434]: W0513 23:50:53.855078 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.855137 kubelet[4434]: E0513 23:50:53.855089 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.855317 kubelet[4434]: E0513 23:50:53.855304 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.855317 kubelet[4434]: W0513 23:50:53.855312 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.855366 kubelet[4434]: E0513 23:50:53.855320 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.855366 kubelet[4434]: I0513 23:50:53.855334 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e357e965-48ea-458f-845a-7872eece6386-registration-dir\") pod \"csi-node-driver-zgfch\" (UID: \"e357e965-48ea-458f-845a-7872eece6386\") " pod="calico-system/csi-node-driver-zgfch" May 13 23:50:53.855608 kubelet[4434]: E0513 23:50:53.855590 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.855639 kubelet[4434]: W0513 23:50:53.855609 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.855639 kubelet[4434]: E0513 23:50:53.855627 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.855852 kubelet[4434]: E0513 23:50:53.855841 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.855880 kubelet[4434]: W0513 23:50:53.855852 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.855880 kubelet[4434]: E0513 23:50:53.855866 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.856124 kubelet[4434]: E0513 23:50:53.856113 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.856155 kubelet[4434]: W0513 23:50:53.856124 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.856155 kubelet[4434]: E0513 23:50:53.856138 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.856348 kubelet[4434]: E0513 23:50:53.856340 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.856348 kubelet[4434]: W0513 23:50:53.856348 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.856397 kubelet[4434]: E0513 23:50:53.856359 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.856592 kubelet[4434]: E0513 23:50:53.856584 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.856620 kubelet[4434]: W0513 23:50:53.856592 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.856620 kubelet[4434]: E0513 23:50:53.856610 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.856807 kubelet[4434]: E0513 23:50:53.856798 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.856807 kubelet[4434]: W0513 23:50:53.856807 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.856855 kubelet[4434]: E0513 23:50:53.856821 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.857111 kubelet[4434]: E0513 23:50:53.857099 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.857145 kubelet[4434]: W0513 23:50:53.857110 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.857145 kubelet[4434]: E0513 23:50:53.857125 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.857393 kubelet[4434]: E0513 23:50:53.857383 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.857422 kubelet[4434]: W0513 23:50:53.857394 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.857422 kubelet[4434]: E0513 23:50:53.857404 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.862514 kubelet[4434]: E0513 23:50:53.862465 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.862514 kubelet[4434]: W0513 23:50:53.862479 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.862514 kubelet[4434]: E0513 23:50:53.862491 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.921930 containerd[2727]: time="2025-05-13T23:50:53.921856815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc68bd765-9ttg2,Uid:ad43d250-7096-4611-ae14-4fe8c5db7de6,Namespace:calico-system,Attempt:0,}" May 13 23:50:53.930425 containerd[2727]: time="2025-05-13T23:50:53.930396705Z" level=info msg="connecting to shim 23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3" address="unix:///run/containerd/s/fd3e2a094a6b63431041838a57fd87bfbeb9e90e813d4a85f2b91636027f6afa" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:53.954766 containerd[2727]: time="2025-05-13T23:50:53.954738449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tmkp,Uid:a69233ba-1d6c-4710-81cc-95f9dafb8ed8,Namespace:calico-system,Attempt:0,}" May 13 23:50:53.956115 kubelet[4434]: E0513 23:50:53.956100 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.956153 kubelet[4434]: W0513 23:50:53.956116 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.956153 kubelet[4434]: E0513 23:50:53.956132 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.956348 kubelet[4434]: E0513 23:50:53.956339 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.956348 kubelet[4434]: W0513 23:50:53.956347 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.956405 kubelet[4434]: E0513 23:50:53.956358 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.956626 kubelet[4434]: E0513 23:50:53.956618 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.956626 kubelet[4434]: W0513 23:50:53.956626 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.956677 kubelet[4434]: E0513 23:50:53.956637 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.956801 kubelet[4434]: E0513 23:50:53.956794 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.956827 kubelet[4434]: W0513 23:50:53.956804 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.956827 kubelet[4434]: E0513 23:50:53.956815 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.956990 kubelet[4434]: E0513 23:50:53.956982 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.956990 kubelet[4434]: W0513 23:50:53.956990 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.957049 kubelet[4434]: E0513 23:50:53.957000 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.957239 kubelet[4434]: E0513 23:50:53.957223 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.957239 kubelet[4434]: W0513 23:50:53.957230 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.957239 kubelet[4434]: E0513 23:50:53.957241 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.957440 kubelet[4434]: E0513 23:50:53.957430 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.957440 kubelet[4434]: W0513 23:50:53.957439 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.957500 kubelet[4434]: E0513 23:50:53.957459 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.957640 kubelet[4434]: E0513 23:50:53.957631 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.957640 kubelet[4434]: W0513 23:50:53.957639 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.957703 kubelet[4434]: E0513 23:50:53.957654 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.957839 kubelet[4434]: E0513 23:50:53.957830 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.957839 kubelet[4434]: W0513 23:50:53.957837 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.957915 kubelet[4434]: E0513 23:50:53.957857 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.958033 kubelet[4434]: E0513 23:50:53.958022 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.958033 kubelet[4434]: W0513 23:50:53.958029 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.958106 kubelet[4434]: E0513 23:50:53.958044 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.958218 kubelet[4434]: E0513 23:50:53.958209 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.958218 kubelet[4434]: W0513 23:50:53.958218 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.958274 kubelet[4434]: E0513 23:50:53.958232 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.958400 kubelet[4434]: E0513 23:50:53.958391 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.958400 kubelet[4434]: W0513 23:50:53.958399 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.958466 kubelet[4434]: E0513 23:50:53.958410 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.958686 kubelet[4434]: E0513 23:50:53.958675 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.958726 kubelet[4434]: W0513 23:50:53.958686 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.958726 kubelet[4434]: E0513 23:50:53.958700 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.958925 kubelet[4434]: E0513 23:50:53.958914 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.958925 kubelet[4434]: W0513 23:50:53.958924 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.959009 kubelet[4434]: E0513 23:50:53.958937 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.959140 kubelet[4434]: E0513 23:50:53.959131 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.959140 kubelet[4434]: W0513 23:50:53.959139 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.959200 kubelet[4434]: E0513 23:50:53.959149 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.959357 kubelet[4434]: E0513 23:50:53.959348 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.959357 kubelet[4434]: W0513 23:50:53.959356 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.959444 kubelet[4434]: E0513 23:50:53.959371 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.959553 kubelet[4434]: E0513 23:50:53.959544 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.959553 kubelet[4434]: W0513 23:50:53.959551 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.959607 kubelet[4434]: E0513 23:50:53.959566 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.959759 kubelet[4434]: E0513 23:50:53.959747 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.959759 kubelet[4434]: W0513 23:50:53.959754 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.959813 kubelet[4434]: E0513 23:50:53.959769 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.959958 kubelet[4434]: E0513 23:50:53.959950 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.959958 kubelet[4434]: W0513 23:50:53.959957 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.960005 kubelet[4434]: E0513 23:50:53.959971 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.960163 kubelet[4434]: E0513 23:50:53.960155 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.960163 kubelet[4434]: W0513 23:50:53.960163 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.960227 kubelet[4434]: E0513 23:50:53.960176 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.960354 kubelet[4434]: E0513 23:50:53.960347 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.960354 kubelet[4434]: W0513 23:50:53.960353 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.960423 kubelet[4434]: E0513 23:50:53.960365 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.960595 kubelet[4434]: E0513 23:50:53.960587 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.960595 kubelet[4434]: W0513 23:50:53.960594 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.960651 kubelet[4434]: E0513 23:50:53.960605 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.960774 kubelet[4434]: E0513 23:50:53.960766 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.960803 kubelet[4434]: W0513 23:50:53.960774 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.960803 kubelet[4434]: E0513 23:50:53.960784 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:50:53.960982 kubelet[4434]: E0513 23:50:53.960973 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.961005 kubelet[4434]: W0513 23:50:53.960982 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.961005 kubelet[4434]: E0513 23:50:53.960992 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.961267 kubelet[4434]: E0513 23:50:53.961255 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.961300 kubelet[4434]: W0513 23:50:53.961267 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.961300 kubelet[4434]: E0513 23:50:53.961279 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.961995 containerd[2727]: time="2025-05-13T23:50:53.961972131Z" level=info msg="connecting to shim 4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a" address="unix:///run/containerd/s/b7ea4ca868db594311f5048abf9d013f40060fab49349edc659efee261db769a" namespace=k8s.io protocol=ttrpc version=3 May 13 23:50:53.964069 systemd[1]: Started cri-containerd-23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3.scope - libcontainer container 23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3. 
May 13 23:50:53.969633 kubelet[4434]: E0513 23:50:53.969615 4434 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:50:53.969633 kubelet[4434]: W0513 23:50:53.969629 4434 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:50:53.969724 kubelet[4434]: E0513 23:50:53.969644 4434 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:50:53.974278 systemd[1]: Started cri-containerd-4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a.scope - libcontainer container 4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a. May 13 23:50:53.988816 containerd[2727]: time="2025-05-13T23:50:53.988786090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc68bd765-9ttg2,Uid:ad43d250-7096-4611-ae14-4fe8c5db7de6,Namespace:calico-system,Attempt:0,} returns sandbox id \"23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3\"" May 13 23:50:53.989914 containerd[2727]: time="2025-05-13T23:50:53.989894696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 23:50:53.990505 containerd[2727]: time="2025-05-13T23:50:53.990486420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tmkp,Uid:a69233ba-1d6c-4710-81cc-95f9dafb8ed8,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\"" May 13 23:50:54.722209 containerd[2727]: time="2025-05-13T23:50:54.722167753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.722354 containerd[2727]: time="2025-05-13T23:50:54.722210714Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 13 23:50:54.722855 containerd[2727]: time="2025-05-13T23:50:54.722835397Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.724330 containerd[2727]: time="2025-05-13T23:50:54.724298125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.724904 containerd[2727]: time="2025-05-13T23:50:54.724880408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 734.956672ms" May 13 23:50:54.724928 containerd[2727]: time="2025-05-13T23:50:54.724908169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 13 23:50:54.725577 containerd[2727]: time="2025-05-13T23:50:54.725559332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:50:54.730413 containerd[2727]: time="2025-05-13T23:50:54.730388359Z" level=info msg="CreateContainer within sandbox \"23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 23:50:54.734358 containerd[2727]: time="2025-05-13T23:50:54.734328381Z" level=info msg="Container 464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:54.737614 
containerd[2727]: time="2025-05-13T23:50:54.737582719Z" level=info msg="CreateContainer within sandbox \"23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37\"" May 13 23:50:54.737929 containerd[2727]: time="2025-05-13T23:50:54.737903601Z" level=info msg="StartContainer for \"464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37\"" May 13 23:50:54.738912 containerd[2727]: time="2025-05-13T23:50:54.738881926Z" level=info msg="connecting to shim 464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37" address="unix:///run/containerd/s/fd3e2a094a6b63431041838a57fd87bfbeb9e90e813d4a85f2b91636027f6afa" protocol=ttrpc version=3 May 13 23:50:54.770055 systemd[1]: Started cri-containerd-464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37.scope - libcontainer container 464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37. 
May 13 23:50:54.798167 containerd[2727]: time="2025-05-13T23:50:54.798135934Z" level=info msg="StartContainer for \"464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37\" returns successfully" May 13 23:50:54.984019 containerd[2727]: time="2025-05-13T23:50:54.983920722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.984019 containerd[2727]: time="2025-05-13T23:50:54.983975163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 13 23:50:54.984621 containerd[2727]: time="2025-05-13T23:50:54.984601326Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.986122 containerd[2727]: time="2025-05-13T23:50:54.986100454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:54.986721 containerd[2727]: time="2025-05-13T23:50:54.986697698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 261.113206ms" May 13 23:50:54.986773 containerd[2727]: time="2025-05-13T23:50:54.986727418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:50:54.988335 containerd[2727]: 
time="2025-05-13T23:50:54.988317507Z" level=info msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:50:54.992751 containerd[2727]: time="2025-05-13T23:50:54.992726971Z" level=info msg="Container 68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:54.996553 containerd[2727]: time="2025-05-13T23:50:54.996526792Z" level=info msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\"" May 13 23:50:54.996853 containerd[2727]: time="2025-05-13T23:50:54.996833274Z" level=info msg="StartContainer for \"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\"" May 13 23:50:54.998171 containerd[2727]: time="2025-05-13T23:50:54.998148281Z" level=info msg="connecting to shim 68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4" address="unix:///run/containerd/s/b7ea4ca868db594311f5048abf9d013f40060fab49349edc659efee261db769a" protocol=ttrpc version=3 May 13 23:50:55.024077 systemd[1]: Started cri-containerd-68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4.scope - libcontainer container 68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4. May 13 23:50:55.050315 containerd[2727]: time="2025-05-13T23:50:55.050283233Z" level=info msg="StartContainer for \"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\" returns successfully" May 13 23:50:55.061593 systemd[1]: cri-containerd-68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4.scope: Deactivated successfully. 
May 13 23:50:55.063132 containerd[2727]: time="2025-05-13T23:50:55.063104819Z" level=info msg="received exit event container_id:\"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\" id:\"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\" pid:5273 exited_at:{seconds:1747180255 nanos:62814938}" May 13 23:50:55.063213 containerd[2727]: time="2025-05-13T23:50:55.063190620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\" id:\"68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4\" pid:5273 exited_at:{seconds:1747180255 nanos:62814938}" May 13 23:50:55.077934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4-rootfs.mount: Deactivated successfully. May 13 23:50:55.728270 kubelet[4434]: E0513 23:50:55.728234 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgfch" podUID="e357e965-48ea-458f-845a-7872eece6386" May 13 23:50:55.765060 containerd[2727]: time="2025-05-13T23:50:55.765027981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:50:55.770777 kubelet[4434]: I0513 23:50:55.770734 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bc68bd765-9ttg2" podStartSLOduration=2.034827893 podStartE2EDuration="2.770721331s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="2025-05-13 23:50:53.989576414 +0000 UTC m=+20.329419000" lastFinishedPulling="2025-05-13 23:50:54.725469812 +0000 UTC m=+21.065312438" observedRunningTime="2025-05-13 23:50:55.769991207 +0000 UTC m=+22.109833833" watchObservedRunningTime="2025-05-13 23:50:55.770721331 +0000 UTC 
m=+22.110563917" May 13 23:50:56.766266 kubelet[4434]: I0513 23:50:56.766220 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:50:56.859356 containerd[2727]: time="2025-05-13T23:50:56.859314021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:56.859690 containerd[2727]: time="2025-05-13T23:50:56.859384022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:50:56.859967 containerd[2727]: time="2025-05-13T23:50:56.859946185Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:56.861484 containerd[2727]: time="2025-05-13T23:50:56.861463472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:56.862123 containerd[2727]: time="2025-05-13T23:50:56.862106755Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.097045813s" May 13 23:50:56.862157 containerd[2727]: time="2025-05-13T23:50:56.862129715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:50:56.863815 containerd[2727]: time="2025-05-13T23:50:56.863791403Z" level=info msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:50:56.868200 containerd[2727]: time="2025-05-13T23:50:56.868170705Z" level=info msg="Container f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:56.872619 containerd[2727]: time="2025-05-13T23:50:56.872591646Z" level=info msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\"" May 13 23:50:56.872914 containerd[2727]: time="2025-05-13T23:50:56.872893128Z" level=info msg="StartContainer for \"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\"" May 13 23:50:56.874172 containerd[2727]: time="2025-05-13T23:50:56.874150294Z" level=info msg="connecting to shim f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295" address="unix:///run/containerd/s/b7ea4ca868db594311f5048abf9d013f40060fab49349edc659efee261db769a" protocol=ttrpc version=3 May 13 23:50:56.894998 systemd[1]: Started cri-containerd-f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295.scope - libcontainer container f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295. 
May 13 23:50:56.922675 containerd[2727]: time="2025-05-13T23:50:56.922648010Z" level=info msg="StartContainer for \"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\" returns successfully" May 13 23:50:57.265343 containerd[2727]: time="2025-05-13T23:50:57.265294676Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:50:57.266947 systemd[1]: cri-containerd-f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295.scope: Deactivated successfully. May 13 23:50:57.267273 systemd[1]: cri-containerd-f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295.scope: Consumed 872ms CPU time, 178.9M memory peak, 150.3M written to disk. May 13 23:50:57.267747 containerd[2727]: time="2025-05-13T23:50:57.267719767Z" level=info msg="received exit event container_id:\"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\" id:\"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\" pid:5337 exited_at:{seconds:1747180257 nanos:267582887}" May 13 23:50:57.267879 containerd[2727]: time="2025-05-13T23:50:57.267854968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\" id:\"f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295\" pid:5337 exited_at:{seconds:1747180257 nanos:267582887}" May 13 23:50:57.282789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295-rootfs.mount: Deactivated successfully. 
May 13 23:50:57.366507 kubelet[4434]: I0513 23:50:57.366482 4434 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 23:50:57.379049 kubelet[4434]: I0513 23:50:57.378982 4434 topology_manager.go:215] "Topology Admit Handler" podUID="6f3d937f-5a05-4710-848e-a72ad66b575a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2fr6r" May 13 23:50:57.379333 kubelet[4434]: I0513 23:50:57.379315 4434 topology_manager.go:215] "Topology Admit Handler" podUID="ffdf6adc-1b17-406c-b980-57f01a9a63f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fgr9c" May 13 23:50:57.379550 kubelet[4434]: I0513 23:50:57.379535 4434 topology_manager.go:215] "Topology Admit Handler" podUID="717a5660-3f69-4a52-ad46-6c160f9b0ac2" podNamespace="calico-system" podName="calico-kube-controllers-6455d95b7f-ghsc4" May 13 23:50:57.379764 kubelet[4434]: I0513 23:50:57.379750 4434 topology_manager.go:215] "Topology Admit Handler" podUID="f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed" podNamespace="calico-apiserver" podName="calico-apiserver-6686cdc664-6f5c8" May 13 23:50:57.380126 kubelet[4434]: I0513 23:50:57.380001 4434 topology_manager.go:215] "Topology Admit Handler" podUID="8c49faed-4701-4590-8323-1a4eef2facd0" podNamespace="calico-apiserver" podName="calico-apiserver-6686cdc664-vf65w" May 13 23:50:57.384000 systemd[1]: Created slice kubepods-burstable-pod6f3d937f_5a05_4710_848e_a72ad66b575a.slice - libcontainer container kubepods-burstable-pod6f3d937f_5a05_4710_848e_a72ad66b575a.slice. May 13 23:50:57.388459 systemd[1]: Created slice kubepods-burstable-podffdf6adc_1b17_406c_b980_57f01a9a63f7.slice - libcontainer container kubepods-burstable-podffdf6adc_1b17_406c_b980_57f01a9a63f7.slice. May 13 23:50:57.391498 systemd[1]: Created slice kubepods-besteffort-pod717a5660_3f69_4a52_ad46_6c160f9b0ac2.slice - libcontainer container kubepods-besteffort-pod717a5660_3f69_4a52_ad46_6c160f9b0ac2.slice. 
May 13 23:50:57.395347 systemd[1]: Created slice kubepods-besteffort-podf78ac610_6b31_4bcf_9a1e_ee8b7e7362ed.slice - libcontainer container kubepods-besteffort-podf78ac610_6b31_4bcf_9a1e_ee8b7e7362ed.slice. May 13 23:50:57.399036 systemd[1]: Created slice kubepods-besteffort-pod8c49faed_4701_4590_8323_1a4eef2facd0.slice - libcontainer container kubepods-besteffort-pod8c49faed_4701_4590_8323_1a4eef2facd0.slice. May 13 23:50:57.479246 kubelet[4434]: I0513 23:50:57.479218 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f3d937f-5a05-4710-848e-a72ad66b575a-config-volume\") pod \"coredns-7db6d8ff4d-2fr6r\" (UID: \"6f3d937f-5a05-4710-848e-a72ad66b575a\") " pod="kube-system/coredns-7db6d8ff4d-2fr6r" May 13 23:50:57.479345 kubelet[4434]: I0513 23:50:57.479255 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/717a5660-3f69-4a52-ad46-6c160f9b0ac2-tigera-ca-bundle\") pod \"calico-kube-controllers-6455d95b7f-ghsc4\" (UID: \"717a5660-3f69-4a52-ad46-6c160f9b0ac2\") " pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" May 13 23:50:57.479345 kubelet[4434]: I0513 23:50:57.479278 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvmw\" (UniqueName: \"kubernetes.io/projected/f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed-kube-api-access-hmvmw\") pod \"calico-apiserver-6686cdc664-6f5c8\" (UID: \"f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed\") " pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" May 13 23:50:57.479345 kubelet[4434]: I0513 23:50:57.479297 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkq8p\" (UniqueName: \"kubernetes.io/projected/8c49faed-4701-4590-8323-1a4eef2facd0-kube-api-access-pkq8p\") pod 
\"calico-apiserver-6686cdc664-vf65w\" (UID: \"8c49faed-4701-4590-8323-1a4eef2facd0\") " pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" May 13 23:50:57.479345 kubelet[4434]: I0513 23:50:57.479330 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjstf\" (UniqueName: \"kubernetes.io/projected/6f3d937f-5a05-4710-848e-a72ad66b575a-kube-api-access-gjstf\") pod \"coredns-7db6d8ff4d-2fr6r\" (UID: \"6f3d937f-5a05-4710-848e-a72ad66b575a\") " pod="kube-system/coredns-7db6d8ff4d-2fr6r" May 13 23:50:57.479522 kubelet[4434]: I0513 23:50:57.479353 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffdf6adc-1b17-406c-b980-57f01a9a63f7-config-volume\") pod \"coredns-7db6d8ff4d-fgr9c\" (UID: \"ffdf6adc-1b17-406c-b980-57f01a9a63f7\") " pod="kube-system/coredns-7db6d8ff4d-fgr9c" May 13 23:50:57.479522 kubelet[4434]: I0513 23:50:57.479372 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed-calico-apiserver-certs\") pod \"calico-apiserver-6686cdc664-6f5c8\" (UID: \"f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed\") " pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" May 13 23:50:57.479522 kubelet[4434]: I0513 23:50:57.479389 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8c49faed-4701-4590-8323-1a4eef2facd0-calico-apiserver-certs\") pod \"calico-apiserver-6686cdc664-vf65w\" (UID: \"8c49faed-4701-4590-8323-1a4eef2facd0\") " pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" May 13 23:50:57.479522 kubelet[4434]: I0513 23:50:57.479408 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jrr8x\" (UniqueName: \"kubernetes.io/projected/717a5660-3f69-4a52-ad46-6c160f9b0ac2-kube-api-access-jrr8x\") pod \"calico-kube-controllers-6455d95b7f-ghsc4\" (UID: \"717a5660-3f69-4a52-ad46-6c160f9b0ac2\") " pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" May 13 23:50:57.479522 kubelet[4434]: I0513 23:50:57.479429 4434 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwwc2\" (UniqueName: \"kubernetes.io/projected/ffdf6adc-1b17-406c-b980-57f01a9a63f7-kube-api-access-zwwc2\") pod \"coredns-7db6d8ff4d-fgr9c\" (UID: \"ffdf6adc-1b17-406c-b980-57f01a9a63f7\") " pod="kube-system/coredns-7db6d8ff4d-fgr9c" May 13 23:50:57.686901 containerd[2727]: time="2025-05-13T23:50:57.686865759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2fr6r,Uid:6f3d937f-5a05-4710-848e-a72ad66b575a,Namespace:kube-system,Attempt:0,}" May 13 23:50:57.691344 containerd[2727]: time="2025-05-13T23:50:57.691321659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgr9c,Uid:ffdf6adc-1b17-406c-b980-57f01a9a63f7,Namespace:kube-system,Attempt:0,}" May 13 23:50:57.693954 containerd[2727]: time="2025-05-13T23:50:57.693920031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6455d95b7f-ghsc4,Uid:717a5660-3f69-4a52-ad46-6c160f9b0ac2,Namespace:calico-system,Attempt:0,}" May 13 23:50:57.697488 containerd[2727]: time="2025-05-13T23:50:57.697455287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-6f5c8,Uid:f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed,Namespace:calico-apiserver,Attempt:0,}" May 13 23:50:57.701979 containerd[2727]: time="2025-05-13T23:50:57.701943788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-vf65w,Uid:8c49faed-4701-4590-8323-1a4eef2facd0,Namespace:calico-apiserver,Attempt:0,}" May 13 23:50:57.733107 systemd[1]: Created slice 
kubepods-besteffort-pode357e965_48ea_458f_845a_7872eece6386.slice - libcontainer container kubepods-besteffort-pode357e965_48ea_458f_845a_7872eece6386.slice. May 13 23:50:57.734917 containerd[2727]: time="2025-05-13T23:50:57.734851298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgfch,Uid:e357e965-48ea-458f-845a-7872eece6386,Namespace:calico-system,Attempt:0,}" May 13 23:50:57.756300 containerd[2727]: time="2025-05-13T23:50:57.756257755Z" level=error msg="Failed to destroy network for sandbox \"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.756523 containerd[2727]: time="2025-05-13T23:50:57.756490476Z" level=error msg="Failed to destroy network for sandbox \"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.756607 containerd[2727]: time="2025-05-13T23:50:57.756581117Z" level=error msg="Failed to destroy network for sandbox \"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.756680 containerd[2727]: time="2025-05-13T23:50:57.756655837Z" level=error msg="Failed to destroy network for sandbox \"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.756753 
containerd[2727]: time="2025-05-13T23:50:57.756658197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-vf65w,Uid:8c49faed-4701-4590-8323-1a4eef2facd0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.756908 containerd[2727]: time="2025-05-13T23:50:57.756828558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgr9c,Uid:ffdf6adc-1b17-406c-b980-57f01a9a63f7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757004 kubelet[4434]: E0513 23:50:57.756966 4434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757050 kubelet[4434]: E0513 23:50:57.757032 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" May 13 23:50:57.757076 containerd[2727]: time="2025-05-13T23:50:57.757011639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2fr6r,Uid:6f3d937f-5a05-4710-848e-a72ad66b575a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757114 kubelet[4434]: E0513 23:50:57.757054 4434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" May 13 23:50:57.757114 kubelet[4434]: E0513 23:50:57.757099 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6686cdc664-vf65w_calico-apiserver(8c49faed-4701-4590-8323-1a4eef2facd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6686cdc664-vf65w_calico-apiserver(8c49faed-4701-4590-8323-1a4eef2facd0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a465ab7d61d9fe30c44449caffa76921164ce962ca7259298a6f475d133b09b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" 
podUID="8c49faed-4701-4590-8323-1a4eef2facd0" May 13 23:50:57.757229 kubelet[4434]: E0513 23:50:57.756979 4434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757229 kubelet[4434]: E0513 23:50:57.757136 4434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757229 kubelet[4434]: E0513 23:50:57.757167 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fgr9c" May 13 23:50:57.757229 kubelet[4434]: E0513 23:50:57.757172 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2fr6r" May 13 23:50:57.757318 kubelet[4434]: E0513 23:50:57.757185 4434 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fgr9c" May 13 23:50:57.757318 kubelet[4434]: E0513 23:50:57.757188 4434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2fr6r" May 13 23:50:57.757318 kubelet[4434]: E0513 23:50:57.757223 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fgr9c_kube-system(ffdf6adc-1b17-406c-b980-57f01a9a63f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fgr9c_kube-system(ffdf6adc-1b17-406c-b980-57f01a9a63f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c2e97424f0ee119bf96f4036d319b6d7a57a9af9060b2c9c768e83e707a1f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fgr9c" podUID="ffdf6adc-1b17-406c-b980-57f01a9a63f7" May 13 23:50:57.757398 containerd[2727]: time="2025-05-13T23:50:57.757260520Z" level=error msg="Failed to destroy network for sandbox \"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757420 kubelet[4434]: E0513 23:50:57.757225 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2fr6r_kube-system(6f3d937f-5a05-4710-848e-a72ad66b575a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2fr6r_kube-system(6f3d937f-5a05-4710-848e-a72ad66b575a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca2ec60fccab7dc9bdfe4573c2d03da0777648485949d12a289dc84a45e6d2b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2fr6r" podUID="6f3d937f-5a05-4710-848e-a72ad66b575a" May 13 23:50:57.757455 containerd[2727]: time="2025-05-13T23:50:57.757386000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6455d95b7f-ghsc4,Uid:717a5660-3f69-4a52-ad46-6c160f9b0ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757522 kubelet[4434]: E0513 23:50:57.757498 4434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757555 kubelet[4434]: E0513 
23:50:57.757521 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" May 13 23:50:57.757555 kubelet[4434]: E0513 23:50:57.757536 4434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" May 13 23:50:57.757599 kubelet[4434]: E0513 23:50:57.757562 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6455d95b7f-ghsc4_calico-system(717a5660-3f69-4a52-ad46-6c160f9b0ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6455d95b7f-ghsc4_calico-system(717a5660-3f69-4a52-ad46-6c160f9b0ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b949e87e75f0058b5a7d98b10b5960cfaf5d1cdffee8238581fe9aa78438bc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" podUID="717a5660-3f69-4a52-ad46-6c160f9b0ac2" May 13 23:50:57.757642 containerd[2727]: time="2025-05-13T23:50:57.757545281Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6686cdc664-6f5c8,Uid:f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757760 kubelet[4434]: E0513 23:50:57.757738 4434 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.757801 kubelet[4434]: E0513 23:50:57.757768 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" May 13 23:50:57.757801 kubelet[4434]: E0513 23:50:57.757784 4434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" May 13 23:50:57.757847 kubelet[4434]: E0513 
23:50:57.757814 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6686cdc664-6f5c8_calico-apiserver(f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6686cdc664-6f5c8_calico-apiserver(f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"515bef56bc3722222d8eb66399db505323273a0c5e7c8d0aabd051dfd66ae3aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" podUID="f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed" May 13 23:50:57.771837 containerd[2727]: time="2025-05-13T23:50:57.771806026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:50:57.781375 containerd[2727]: time="2025-05-13T23:50:57.781331990Z" level=error msg="Failed to destroy network for sandbox \"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.782900 containerd[2727]: time="2025-05-13T23:50:57.782846037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgfch,Uid:e357e965-48ea-458f-845a-7872eece6386,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.783910 kubelet[4434]: E0513 23:50:57.783059 4434 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:50:57.783910 kubelet[4434]: E0513 23:50:57.783119 4434 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgfch" May 13 23:50:57.783910 kubelet[4434]: E0513 23:50:57.783142 4434 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgfch" May 13 23:50:57.784334 kubelet[4434]: E0513 23:50:57.783182 4434 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgfch_calico-system(e357e965-48ea-458f-845a-7872eece6386)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgfch_calico-system(e357e965-48ea-458f-845a-7872eece6386)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e683bf5d6b37d92bcbb0fb6d722f2b8d1b7f53245d9f2f39a13860e7a04009e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgfch" podUID="e357e965-48ea-458f-845a-7872eece6386" May 13 23:50:59.919242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047305123.mount: Deactivated successfully. May 13 23:50:59.941202 containerd[2727]: time="2025-05-13T23:50:59.941135796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:50:59.941410 containerd[2727]: time="2025-05-13T23:50:59.941168236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:59.942847 containerd[2727]: time="2025-05-13T23:50:59.942817602Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:59.944335 containerd[2727]: time="2025-05-13T23:50:59.944311608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:50:59.945245 containerd[2727]: time="2025-05-13T23:50:59.945226652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.173385026s" May 13 23:50:59.945290 containerd[2727]: time="2025-05-13T23:50:59.945250772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:50:59.950743 containerd[2727]: time="2025-05-13T23:50:59.950721714Z" level=info 
msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:50:59.955881 containerd[2727]: time="2025-05-13T23:50:59.955857935Z" level=info msg="Container 4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4: CDI devices from CRI Config.CDIDevices: []" May 13 23:50:59.961080 containerd[2727]: time="2025-05-13T23:50:59.961058236Z" level=info msg="CreateContainer within sandbox \"4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\"" May 13 23:50:59.961341 containerd[2727]: time="2025-05-13T23:50:59.961325157Z" level=info msg="StartContainer for \"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\"" May 13 23:50:59.962623 containerd[2727]: time="2025-05-13T23:50:59.962603962Z" level=info msg="connecting to shim 4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4" address="unix:///run/containerd/s/b7ea4ca868db594311f5048abf9d013f40060fab49349edc659efee261db769a" protocol=ttrpc version=3 May 13 23:50:59.983003 systemd[1]: Started cri-containerd-4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4.scope - libcontainer container 4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4. May 13 23:51:00.012424 containerd[2727]: time="2025-05-13T23:51:00.012397838Z" level=info msg="StartContainer for \"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" returns successfully" May 13 23:51:00.118717 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:51:00.118785 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:51:00.789913 kubelet[4434]: I0513 23:51:00.789782 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4tmkp" podStartSLOduration=1.835126889 podStartE2EDuration="7.78976712s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="2025-05-13 23:50:53.991102823 +0000 UTC m=+20.330945449" lastFinishedPulling="2025-05-13 23:50:59.945743094 +0000 UTC m=+26.285585680" observedRunningTime="2025-05-13 23:51:00.789322678 +0000 UTC m=+27.129165304" watchObservedRunningTime="2025-05-13 23:51:00.78976712 +0000 UTC m=+27.129609706" May 13 23:51:00.827294 containerd[2727]: time="2025-05-13T23:51:00.827254781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"cd98b72f70f53430f5824ba59ba4a360e21daa45de2c315d75659c38eb840029\" pid:5865 exit_status:1 exited_at:{seconds:1747180260 nanos:827019340}" May 13 23:51:01.836882 containerd[2727]: time="2025-05-13T23:51:01.836844818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"8387a4c23bb78657ffad7e6dbfdc3429eb9102f781fea09b755247090f5548e0\" pid:6026 exit_status:1 exited_at:{seconds:1747180261 nanos:836635818}" May 13 23:51:08.728457 containerd[2727]: time="2025-05-13T23:51:08.728406755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-6f5c8,Uid:f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed,Namespace:calico-apiserver,Attempt:0,}" May 13 23:51:08.728907 containerd[2727]: time="2025-05-13T23:51:08.728475875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgr9c,Uid:ffdf6adc-1b17-406c-b980-57f01a9a63f7,Namespace:kube-system,Attempt:0,}" May 13 23:51:08.837010 systemd-networkd[2632]: calidddd5b6e619: Link UP May 13 23:51:08.837263 systemd-networkd[2632]: calidddd5b6e619: Gained carrier May 13 23:51:08.845551 
containerd[2727]: 2025-05-13 23:51:08.746 [INFO][6333] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:51:08.845551 containerd[2727]: 2025-05-13 23:51:08.764 [INFO][6333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0 coredns-7db6d8ff4d- kube-system ffdf6adc-1b17-406c-b980-57f01a9a63f7 659 0 2025-05-13 23:50:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 coredns-7db6d8ff4d-fgr9c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidddd5b6e619 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-" May 13 23:51:08.845551 containerd[2727]: 2025-05-13 23:51:08.764 [INFO][6333] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.845551 containerd[2727]: 2025-05-13 23:51:08.800 [INFO][6388] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" HandleID="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.813 [INFO][6388] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" HandleID="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400063dcd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-52b3733d51", "pod":"coredns-7db6d8ff4d-fgr9c", "timestamp":"2025-05-13 23:51:08.800498717 +0000 UTC"}, Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.813 [INFO][6388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.813 [INFO][6388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.814 [INFO][6388] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.815 [INFO][6388] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.818 [INFO][6388] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.821 [INFO][6388] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.822 [INFO][6388] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845692 containerd[2727]: 2025-05-13 23:51:08.823 [INFO][6388] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.823 [INFO][6388] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.824 [INFO][6388] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.826 [INFO][6388] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6388] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.33.1/26] block=192.168.33.0/26 handle="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6388] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.1/26] handle="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:51:08.845867 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6388] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.1/26] IPv6=[] ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" HandleID="k8s-pod-network.f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.832 [INFO][6333] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffdf6adc-1b17-406c-b980-57f01a9a63f7", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"coredns-7db6d8ff4d-fgr9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidddd5b6e619", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.832 [INFO][6333] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.1/32] ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.832 [INFO][6333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidddd5b6e619 ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.837 [INFO][6333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.837 [INFO][6333] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffdf6adc-1b17-406c-b980-57f01a9a63f7", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a", Pod:"coredns-7db6d8ff4d-fgr9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidddd5b6e619", MAC:"16:36:e7:7a:b5:a9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:08.846023 containerd[2727]: 2025-05-13 23:51:08.844 [INFO][6333] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fgr9c" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--fgr9c-eth0" May 13 23:51:08.850657 systemd-networkd[2632]: cali8ee81955e1e: Link UP May 13 23:51:08.850804 systemd-networkd[2632]: cali8ee81955e1e: Gained carrier May 13 23:51:08.856744 containerd[2727]: time="2025-05-13T23:51:08.856712923Z" level=info msg="connecting to shim f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a" address="unix:///run/containerd/s/f781849ed4d195212a1ecbe9149c4ea6736b4ecfb3e857fc6acc75577050eabb" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.746 [INFO][6331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.764 [INFO][6331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0 calico-apiserver-6686cdc664- calico-apiserver f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed 660 0 2025-05-13 23:50:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6686cdc664 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 calico-apiserver-6686cdc664-6f5c8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ee81955e1e [] []}} ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.764 [INFO][6331] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.800 [INFO][6390] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" HandleID="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.814 [INFO][6390] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" HandleID="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038a9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-52b3733d51", "pod":"calico-apiserver-6686cdc664-6f5c8", "timestamp":"2025-05-13 23:51:08.800500277 +0000 UTC"}, 
Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.814 [INFO][6390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.830 [INFO][6390] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.831 [INFO][6390] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.834 [INFO][6390] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.836 [INFO][6390] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.838 [INFO][6390] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.839 [INFO][6390] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.839 [INFO][6390] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.840 [INFO][6390] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8 May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.844 [INFO][6390] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.847 [INFO][6390] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.2/26] block=192.168.33.0/26 handle="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.847 [INFO][6390] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.2/26] handle="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.847 [INFO][6390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:51:08.858148 containerd[2727]: 2025-05-13 23:51:08.847 [INFO][6390] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.2/26] IPv6=[] ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" HandleID="k8s-pod-network.261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.849 [INFO][6331] cni-plugin/k8s.go 386: Populated endpoint ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0", GenerateName:"calico-apiserver-6686cdc664-", Namespace:"calico-apiserver", SelfLink:"", UID:"f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686cdc664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"calico-apiserver-6686cdc664-6f5c8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.33.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ee81955e1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.849 [INFO][6331] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.2/32] ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.849 [INFO][6331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ee81955e1e ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.850 [INFO][6331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.851 [INFO][6331] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0", GenerateName:"calico-apiserver-6686cdc664-", Namespace:"calico-apiserver", SelfLink:"", UID:"f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed", ResourceVersion:"660", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686cdc664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8", Pod:"calico-apiserver-6686cdc664-6f5c8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ee81955e1e", MAC:"5e:12:cf:90:83:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:08.858553 containerd[2727]: 2025-05-13 23:51:08.856 [INFO][6331] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-6f5c8" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--6f5c8-eth0" May 13 23:51:08.869800 containerd[2727]: time="2025-05-13T23:51:08.869769632Z" level=info msg="connecting to shim 
261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8" address="unix:///run/containerd/s/ad20c4b2883c948527b38584b34556e06effc4f39ceed135a639fadb2dabec6e" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:08.888062 systemd[1]: Started cri-containerd-f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a.scope - libcontainer container f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a. May 13 23:51:08.890323 systemd[1]: Started cri-containerd-261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8.scope - libcontainer container 261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8. May 13 23:51:08.913120 containerd[2727]: time="2025-05-13T23:51:08.913094369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fgr9c,Uid:ffdf6adc-1b17-406c-b980-57f01a9a63f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a\"" May 13 23:51:08.914762 containerd[2727]: time="2025-05-13T23:51:08.914741053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-6f5c8,Uid:f78ac610-6b31-4bcf-9a1e-ee8b7e7362ed,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8\"" May 13 23:51:08.915106 containerd[2727]: time="2025-05-13T23:51:08.915086494Z" level=info msg="CreateContainer within sandbox \"f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:51:08.915529 containerd[2727]: time="2025-05-13T23:51:08.915512894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:51:08.919366 containerd[2727]: time="2025-05-13T23:51:08.919344383Z" level=info msg="Container 10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:08.921773 containerd[2727]: 
time="2025-05-13T23:51:08.921747668Z" level=info msg="CreateContainer within sandbox \"f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c\"" May 13 23:51:08.922053 containerd[2727]: time="2025-05-13T23:51:08.922034309Z" level=info msg="StartContainer for \"10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c\"" May 13 23:51:08.922782 containerd[2727]: time="2025-05-13T23:51:08.922763071Z" level=info msg="connecting to shim 10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c" address="unix:///run/containerd/s/f781849ed4d195212a1ecbe9149c4ea6736b4ecfb3e857fc6acc75577050eabb" protocol=ttrpc version=3 May 13 23:51:08.946067 systemd[1]: Started cri-containerd-10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c.scope - libcontainer container 10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c. 
May 13 23:51:08.965912 containerd[2727]: time="2025-05-13T23:51:08.965876567Z" level=info msg="StartContainer for \"10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c\" returns successfully" May 13 23:51:09.630435 containerd[2727]: time="2025-05-13T23:51:09.630395249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:09.630531 containerd[2727]: time="2025-05-13T23:51:09.630428609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:51:09.631120 containerd[2727]: time="2025-05-13T23:51:09.631094971Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:09.632829 containerd[2727]: time="2025-05-13T23:51:09.632805654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:09.633553 containerd[2727]: time="2025-05-13T23:51:09.633531696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 717.990441ms" May 13 23:51:09.633629 containerd[2727]: time="2025-05-13T23:51:09.633561216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:51:09.635279 containerd[2727]: time="2025-05-13T23:51:09.635259420Z" level=info msg="CreateContainer within sandbox 
\"261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:51:09.639289 containerd[2727]: time="2025-05-13T23:51:09.639259468Z" level=info msg="Container 7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:09.642620 containerd[2727]: time="2025-05-13T23:51:09.642593435Z" level=info msg="CreateContainer within sandbox \"261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857\"" May 13 23:51:09.642966 containerd[2727]: time="2025-05-13T23:51:09.642942116Z" level=info msg="StartContainer for \"7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857\"" May 13 23:51:09.643932 containerd[2727]: time="2025-05-13T23:51:09.643909798Z" level=info msg="connecting to shim 7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857" address="unix:///run/containerd/s/ad20c4b2883c948527b38584b34556e06effc4f39ceed135a639fadb2dabec6e" protocol=ttrpc version=3 May 13 23:51:09.670007 systemd[1]: Started cri-containerd-7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857.scope - libcontainer container 7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857. 
May 13 23:51:09.697927 containerd[2727]: time="2025-05-13T23:51:09.697900111Z" level=info msg="StartContainer for \"7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857\" returns successfully" May 13 23:51:09.800879 kubelet[4434]: I0513 23:51:09.800819 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fgr9c" podStartSLOduration=22.800798208 podStartE2EDuration="22.800798208s" podCreationTimestamp="2025-05-13 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:51:09.800543647 +0000 UTC m=+36.140386273" watchObservedRunningTime="2025-05-13 23:51:09.800798208 +0000 UTC m=+36.140640834" May 13 23:51:09.807348 kubelet[4434]: I0513 23:51:09.807306 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6686cdc664-6f5c8" podStartSLOduration=16.088566778 podStartE2EDuration="16.807293101s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="2025-05-13 23:51:08.915361654 +0000 UTC m=+35.255204280" lastFinishedPulling="2025-05-13 23:51:09.634088017 +0000 UTC m=+35.973930603" observedRunningTime="2025-05-13 23:51:09.807196501 +0000 UTC m=+36.147039167" watchObservedRunningTime="2025-05-13 23:51:09.807293101 +0000 UTC m=+36.147135727" May 13 23:51:09.924307 kubelet[4434]: I0513 23:51:09.924215 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:10.572936 kernel: bpftool[6732]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:51:10.728643 containerd[2727]: time="2025-05-13T23:51:10.728393742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2fr6r,Uid:6f3d937f-5a05-4710-848e-a72ad66b575a,Namespace:kube-system,Attempt:0,}" May 13 23:51:10.728643 containerd[2727]: time="2025-05-13T23:51:10.728414782Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6455d95b7f-ghsc4,Uid:717a5660-3f69-4a52-ad46-6c160f9b0ac2,Namespace:calico-system,Attempt:0,}" May 13 23:51:10.733503 systemd-networkd[2632]: vxlan.calico: Link UP May 13 23:51:10.733507 systemd-networkd[2632]: vxlan.calico: Gained carrier May 13 23:51:10.751949 systemd-networkd[2632]: calidddd5b6e619: Gained IPv6LL May 13 23:51:10.795146 kubelet[4434]: I0513 23:51:10.795114 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:10.816973 systemd-networkd[2632]: cali8ee81955e1e: Gained IPv6LL May 13 23:51:10.825019 systemd-networkd[2632]: cali381b8ab74bb: Link UP May 13 23:51:10.825190 systemd-networkd[2632]: cali381b8ab74bb: Gained carrier May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.760 [INFO][6828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0 calico-kube-controllers-6455d95b7f- calico-system 717a5660-3f69-4a52-ad46-6c160f9b0ac2 661 0 2025-05-13 23:50:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6455d95b7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 calico-kube-controllers-6455d95b7f-ghsc4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali381b8ab74bb [] []}} ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.760 [INFO][6828] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.782 [INFO][6957] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" HandleID="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.794 [INFO][6957] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" HandleID="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-52b3733d51", "pod":"calico-kube-controllers-6455d95b7f-ghsc4", "timestamp":"2025-05-13 23:51:10.782477809 +0000 UTC"}, Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.794 [INFO][6957] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.794 [INFO][6957] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.794 [INFO][6957] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.795 [INFO][6957] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.799 [INFO][6957] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.802 [INFO][6957] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.804 [INFO][6957] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.805 [INFO][6957] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.805 [INFO][6957] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.806 [INFO][6957] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.808 [INFO][6957] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6957] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.33.3/26] block=192.168.33.0/26 handle="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6957] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.3/26] handle="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6957] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:51:10.832711 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6957] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.3/26] IPv6=[] ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" HandleID="k8s-pod-network.6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.823 [INFO][6828] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0", GenerateName:"calico-kube-controllers-6455d95b7f-", Namespace:"calico-system", SelfLink:"", UID:"717a5660-3f69-4a52-ad46-6c160f9b0ac2", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"6455d95b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"calico-kube-controllers-6455d95b7f-ghsc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali381b8ab74bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.823 [INFO][6828] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.3/32] ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.823 [INFO][6828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali381b8ab74bb ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.825 [INFO][6828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" 
WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.825 [INFO][6828] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0", GenerateName:"calico-kube-controllers-6455d95b7f-", Namespace:"calico-system", SelfLink:"", UID:"717a5660-3f69-4a52-ad46-6c160f9b0ac2", ResourceVersion:"661", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6455d95b7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba", Pod:"calico-kube-controllers-6455d95b7f-ghsc4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali381b8ab74bb", MAC:"46:8a:fc:af:40:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:10.833179 containerd[2727]: 2025-05-13 23:51:10.831 [INFO][6828] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" Namespace="calico-system" Pod="calico-kube-controllers-6455d95b7f-ghsc4" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--kube--controllers--6455d95b7f--ghsc4-eth0" May 13 23:51:10.846041 containerd[2727]: time="2025-05-13T23:51:10.845999454Z" level=info msg="connecting to shim 6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba" address="unix:///run/containerd/s/97bc9935cbc349a9d42253357cc68b9d43f4ae22ca111047a3f7c1331f369cfa" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:10.871893 systemd-networkd[2632]: caliefa45032a87: Link UP May 13 23:51:10.872063 systemd-networkd[2632]: caliefa45032a87: Gained carrier May 13 23:51:10.875012 systemd[1]: Started cri-containerd-6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba.scope - libcontainer container 6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba. 
May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.759 [INFO][6826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0 coredns-7db6d8ff4d- kube-system 6f3d937f-5a05-4710-848e-a72ad66b575a 656 0 2025-05-13 23:50:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 coredns-7db6d8ff4d-2fr6r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliefa45032a87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.760 [INFO][6826] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.783 [INFO][6968] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" HandleID="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.796 [INFO][6968] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" HandleID="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" 
Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c0000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284.0.0-n-52b3733d51", "pod":"coredns-7db6d8ff4d-2fr6r", "timestamp":"2025-05-13 23:51:10.783448171 +0000 UTC"}, Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.796 [INFO][6968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.821 [INFO][6968] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.824 [INFO][6968] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.832 [INFO][6968] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.835 [INFO][6968] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.836 [INFO][6968] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.838 [INFO][6968] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 
containerd[2727]: 2025-05-13 23:51:10.838 [INFO][6968] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.842 [INFO][6968] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469 May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.845 [INFO][6968] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.860 [INFO][6968] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.33.4/26] block=192.168.33.0/26 handle="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.860 [INFO][6968] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.4/26] handle="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.860 [INFO][6968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:51:10.878983 containerd[2727]: 2025-05-13 23:51:10.860 [INFO][6968] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.4/26] IPv6=[] ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" HandleID="k8s-pod-network.eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Workload="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.861 [INFO][6826] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6f3d937f-5a05-4710-848e-a72ad66b575a", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"coredns-7db6d8ff4d-2fr6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefa45032a87", 
MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.861 [INFO][6826] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.4/32] ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.861 [INFO][6826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefa45032a87 ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.872 [INFO][6826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.872 [INFO][6826] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6f3d937f-5a05-4710-848e-a72ad66b575a", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469", Pod:"coredns-7db6d8ff4d-2fr6r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliefa45032a87", MAC:"6e:d1:42:8d:2b:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:10.879436 containerd[2727]: 2025-05-13 23:51:10.877 [INFO][6826] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2fr6r" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-coredns--7db6d8ff4d--2fr6r-eth0" May 13 23:51:10.890075 containerd[2727]: time="2025-05-13T23:51:10.890044901Z" level=info msg="connecting to shim eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469" address="unix:///run/containerd/s/122d9de511cdb16c0105657a30b5daf6d8cac15701cf2a051f5ea0d128dcd52f" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:10.900119 containerd[2727]: time="2025-05-13T23:51:10.900087920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6455d95b7f-ghsc4,Uid:717a5660-3f69-4a52-ad46-6c160f9b0ac2,Namespace:calico-system,Attempt:0,} returns sandbox id \"6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba\"" May 13 23:51:10.901238 containerd[2727]: time="2025-05-13T23:51:10.901213523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 23:51:10.905585 systemd[1]: Started cri-containerd-eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469.scope - libcontainer container eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469. 
May 13 23:51:10.931103 containerd[2727]: time="2025-05-13T23:51:10.931072902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2fr6r,Uid:6f3d937f-5a05-4710-848e-a72ad66b575a,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469\"" May 13 23:51:10.933373 containerd[2727]: time="2025-05-13T23:51:10.933346226Z" level=info msg="CreateContainer within sandbox \"eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 23:51:10.937620 containerd[2727]: time="2025-05-13T23:51:10.937595074Z" level=info msg="Container 302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:10.940191 containerd[2727]: time="2025-05-13T23:51:10.940166799Z" level=info msg="CreateContainer within sandbox \"eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82\"" May 13 23:51:10.940529 containerd[2727]: time="2025-05-13T23:51:10.940506560Z" level=info msg="StartContainer for \"302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82\"" May 13 23:51:10.941271 containerd[2727]: time="2025-05-13T23:51:10.941247922Z" level=info msg="connecting to shim 302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82" address="unix:///run/containerd/s/122d9de511cdb16c0105657a30b5daf6d8cac15701cf2a051f5ea0d128dcd52f" protocol=ttrpc version=3 May 13 23:51:10.964078 systemd[1]: Started cri-containerd-302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82.scope - libcontainer container 302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82. 
May 13 23:51:10.986530 containerd[2727]: time="2025-05-13T23:51:10.986497211Z" level=info msg="StartContainer for \"302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82\" returns successfully" May 13 23:51:11.545654 containerd[2727]: time="2025-05-13T23:51:11.545609206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:11.545834 containerd[2727]: time="2025-05-13T23:51:11.545650086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 13 23:51:11.546325 containerd[2727]: time="2025-05-13T23:51:11.546297287Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:11.547863 containerd[2727]: time="2025-05-13T23:51:11.547836090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:11.548475 containerd[2727]: time="2025-05-13T23:51:11.548447691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 647.200888ms" May 13 23:51:11.548511 containerd[2727]: time="2025-05-13T23:51:11.548480371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 13 23:51:11.554025 containerd[2727]: time="2025-05-13T23:51:11.553988981Z" 
level=info msg="CreateContainer within sandbox \"6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 23:51:11.557503 containerd[2727]: time="2025-05-13T23:51:11.557475068Z" level=info msg="Container a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:11.562862 containerd[2727]: time="2025-05-13T23:51:11.562824437Z" level=info msg="CreateContainer within sandbox \"6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\"" May 13 23:51:11.563207 containerd[2727]: time="2025-05-13T23:51:11.563181078Z" level=info msg="StartContainer for \"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\"" May 13 23:51:11.564156 containerd[2727]: time="2025-05-13T23:51:11.564133880Z" level=info msg="connecting to shim a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1" address="unix:///run/containerd/s/97bc9935cbc349a9d42253357cc68b9d43f4ae22ca111047a3f7c1331f369cfa" protocol=ttrpc version=3 May 13 23:51:11.589060 systemd[1]: Started cri-containerd-a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1.scope - libcontainer container a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1. 
May 13 23:51:11.617369 containerd[2727]: time="2025-05-13T23:51:11.617342738Z" level=info msg="StartContainer for \"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" returns successfully" May 13 23:51:11.806267 kubelet[4434]: I0513 23:51:11.805615 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2fr6r" podStartSLOduration=24.805599446 podStartE2EDuration="24.805599446s" podCreationTimestamp="2025-05-13 23:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:51:11.805573846 +0000 UTC m=+38.145416472" watchObservedRunningTime="2025-05-13 23:51:11.805599446 +0000 UTC m=+38.145442072" May 13 23:51:11.812480 kubelet[4434]: I0513 23:51:11.812428 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6455d95b7f-ghsc4" podStartSLOduration=18.164410609 podStartE2EDuration="18.812416459s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="2025-05-13 23:51:10.901024082 +0000 UTC m=+37.240866668" lastFinishedPulling="2025-05-13 23:51:11.549029932 +0000 UTC m=+37.888872518" observedRunningTime="2025-05-13 23:51:11.812248738 +0000 UTC m=+38.152091324" watchObservedRunningTime="2025-05-13 23:51:11.812416459 +0000 UTC m=+38.152259045" May 13 23:51:12.096012 systemd-networkd[2632]: cali381b8ab74bb: Gained IPv6LL May 13 23:51:12.159953 systemd-networkd[2632]: vxlan.calico: Gained IPv6LL May 13 23:51:12.671933 systemd-networkd[2632]: caliefa45032a87: Gained IPv6LL May 13 23:51:12.730489 containerd[2727]: time="2025-05-13T23:51:12.730458951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgfch,Uid:e357e965-48ea-458f-845a-7872eece6386,Namespace:calico-system,Attempt:0,}" May 13 23:51:12.730714 containerd[2727]: time="2025-05-13T23:51:12.730460471Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6686cdc664-vf65w,Uid:8c49faed-4701-4590-8323-1a4eef2facd0,Namespace:calico-apiserver,Attempt:0,}" May 13 23:51:12.801198 kubelet[4434]: I0513 23:51:12.801163 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:12.815681 systemd-networkd[2632]: cali65518de40e3: Link UP May 13 23:51:12.815858 systemd-networkd[2632]: cali65518de40e3: Gained carrier May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.763 [INFO][7385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0 calico-apiserver-6686cdc664- calico-apiserver 8c49faed-4701-4590-8323-1a4eef2facd0 662 0 2025-05-13 23:50:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6686cdc664 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 calico-apiserver-6686cdc664-vf65w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali65518de40e3 [] []}} ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.763 [INFO][7385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.784 [INFO][7434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" HandleID="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" HandleID="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e6ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284.0.0-n-52b3733d51", "pod":"calico-apiserver-6686cdc664-vf65w", "timestamp":"2025-05-13 23:51:12.784536484 +0000 UTC"}, Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7434] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.796 [INFO][7434] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.799 [INFO][7434] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.802 [INFO][7434] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.803 [INFO][7434] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.804 [INFO][7434] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.805 [INFO][7434] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.806 [INFO][7434] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.808 [INFO][7434] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7434] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.33.5/26] block=192.168.33.0/26 handle="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7434] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.5/26] handle="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:51:12.822094 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.5/26] IPv6=[] ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" HandleID="k8s-pod-network.10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Workload="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.814 [INFO][7385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0", GenerateName:"calico-apiserver-6686cdc664-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c49faed-4701-4590-8323-1a4eef2facd0", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6686cdc664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"calico-apiserver-6686cdc664-vf65w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65518de40e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.814 [INFO][7385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.5/32] ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.814 [INFO][7385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65518de40e3 ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.815 [INFO][7385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" 
WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.816 [INFO][7385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0", GenerateName:"calico-apiserver-6686cdc664-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c49faed-4701-4590-8323-1a4eef2facd0", ResourceVersion:"662", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6686cdc664", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d", Pod:"calico-apiserver-6686cdc664-vf65w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali65518de40e3", MAC:"1e:97:b8:f1:50:6a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:12.822525 containerd[2727]: 2025-05-13 23:51:12.821 [INFO][7385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" Namespace="calico-apiserver" Pod="calico-apiserver-6686cdc664-vf65w" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-calico--apiserver--6686cdc664--vf65w-eth0" May 13 23:51:12.834701 containerd[2727]: time="2025-05-13T23:51:12.834665731Z" level=info msg="connecting to shim 10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d" address="unix:///run/containerd/s/58fe09df9c0da99eca984165269520cac6f664f431b14c8284047b5b2e3b1208" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:12.836114 systemd-networkd[2632]: cali705785fe9d1: Link UP May 13 23:51:12.836474 systemd-networkd[2632]: cali705785fe9d1: Gained carrier May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.762 [INFO][7384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0 csi-node-driver- calico-system e357e965-48ea-458f-845a-7872eece6386 599 0 2025-05-13 23:50:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284.0.0-n-52b3733d51 csi-node-driver-zgfch eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali705785fe9d1 [] []}} ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 
23:51:12.762 [INFO][7384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.784 [INFO][7432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" HandleID="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Workload="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" HandleID="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Workload="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000460840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284.0.0-n-52b3733d51", "pod":"csi-node-driver-zgfch", "timestamp":"2025-05-13 23:51:12.784754765 +0000 UTC"}, Hostname:"ci-4284.0.0-n-52b3733d51", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.794 [INFO][7432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.813 [INFO][7432] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284.0.0-n-52b3733d51' May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.814 [INFO][7432] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.817 [INFO][7432] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.820 [INFO][7432] ipam/ipam.go 489: Trying affinity for 192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.822 [INFO][7432] ipam/ipam.go 155: Attempting to load block cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.824 [INFO][7432] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.33.0/26 host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.824 [INFO][7432] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.33.0/26 handle="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.825 [INFO][7432] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175 May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.829 [INFO][7432] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.33.0/26 handle="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.833 [INFO][7432] ipam/ipam.go 1216: Successfully 
claimed IPs: [192.168.33.6/26] block=192.168.33.0/26 handle="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.833 [INFO][7432] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.33.6/26] handle="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" host="ci-4284.0.0-n-52b3733d51" May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.833 [INFO][7432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:51:12.844205 containerd[2727]: 2025-05-13 23:51:12.833 [INFO][7432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.33.6/26] IPv6=[] ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" HandleID="k8s-pod-network.bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Workload="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.834 [INFO][7384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e357e965-48ea-458f-845a-7872eece6386", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"", Pod:"csi-node-driver-zgfch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali705785fe9d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.835 [INFO][7384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.33.6/32] ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.835 [INFO][7384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali705785fe9d1 ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.836 [INFO][7384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.836 [INFO][7384] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e357e965-48ea-458f-845a-7872eece6386", ResourceVersion:"599", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 50, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284.0.0-n-52b3733d51", ContainerID:"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175", Pod:"csi-node-driver-zgfch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali705785fe9d1", MAC:"32:c5:8e:c1:73:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:51:12.844623 containerd[2727]: 2025-05-13 23:51:12.842 [INFO][7384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" Namespace="calico-system" Pod="csi-node-driver-zgfch" WorkloadEndpoint="ci--4284.0.0--n--52b3733d51-k8s-csi--node--driver--zgfch-eth0" May 13 23:51:12.855169 containerd[2727]: time="2025-05-13T23:51:12.855136487Z" level=info msg="connecting to shim bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175" address="unix:///run/containerd/s/f49e0925002a70879fa68b06aac29094f1b941124af7a946d0729178d2abd0d7" namespace=k8s.io protocol=ttrpc version=3 May 13 23:51:12.867086 systemd[1]: Started cri-containerd-10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d.scope - libcontainer container 10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d. May 13 23:51:12.873328 systemd[1]: Started cri-containerd-bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175.scope - libcontainer container bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175. May 13 23:51:12.889800 containerd[2727]: time="2025-05-13T23:51:12.889770147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgfch,Uid:e357e965-48ea-458f-845a-7872eece6386,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175\"" May 13 23:51:12.890819 containerd[2727]: time="2025-05-13T23:51:12.890795708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:51:12.891481 containerd[2727]: time="2025-05-13T23:51:12.891460950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6686cdc664-vf65w,Uid:8c49faed-4701-4590-8323-1a4eef2facd0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d\"" May 13 23:51:12.893243 containerd[2727]: time="2025-05-13T23:51:12.893225113Z" level=info msg="CreateContainer within sandbox \"10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:51:12.896631 containerd[2727]: time="2025-05-13T23:51:12.896608638Z" level=info msg="Container 57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:12.899545 containerd[2727]: time="2025-05-13T23:51:12.899523444Z" level=info msg="CreateContainer within sandbox \"10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962\"" May 13 23:51:12.899838 containerd[2727]: time="2025-05-13T23:51:12.899819644Z" level=info msg="StartContainer for \"57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962\"" May 13 23:51:12.900746 containerd[2727]: time="2025-05-13T23:51:12.900724446Z" level=info msg="connecting to shim 57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962" address="unix:///run/containerd/s/58fe09df9c0da99eca984165269520cac6f664f431b14c8284047b5b2e3b1208" protocol=ttrpc version=3 May 13 23:51:12.918062 systemd[1]: Started cri-containerd-57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962.scope - libcontainer container 57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962. 
May 13 23:51:12.946001 containerd[2727]: time="2025-05-13T23:51:12.945938484Z" level=info msg="StartContainer for \"57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962\" returns successfully" May 13 23:51:13.176222 containerd[2727]: time="2025-05-13T23:51:13.176178904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.176302 containerd[2727]: time="2025-05-13T23:51:13.176222384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:51:13.176877 containerd[2727]: time="2025-05-13T23:51:13.176854785Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.178389 containerd[2727]: time="2025-05-13T23:51:13.178365787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.179995 containerd[2727]: time="2025-05-13T23:51:13.179964110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 289.133322ms" May 13 23:51:13.180017 containerd[2727]: time="2025-05-13T23:51:13.180003030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:51:13.182729 containerd[2727]: time="2025-05-13T23:51:13.182696714Z" level=info msg="CreateContainer within sandbox 
\"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:51:13.187790 containerd[2727]: time="2025-05-13T23:51:13.187758563Z" level=info msg="Container a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:13.191757 containerd[2727]: time="2025-05-13T23:51:13.191727089Z" level=info msg="CreateContainer within sandbox \"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83\"" May 13 23:51:13.192106 containerd[2727]: time="2025-05-13T23:51:13.192078810Z" level=info msg="StartContainer for \"a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83\"" May 13 23:51:13.193423 containerd[2727]: time="2025-05-13T23:51:13.193397412Z" level=info msg="connecting to shim a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83" address="unix:///run/containerd/s/f49e0925002a70879fa68b06aac29094f1b941124af7a946d0729178d2abd0d7" protocol=ttrpc version=3 May 13 23:51:13.213062 systemd[1]: Started cri-containerd-a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83.scope - libcontainer container a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83. 
May 13 23:51:13.239970 containerd[2727]: time="2025-05-13T23:51:13.239929607Z" level=info msg="StartContainer for \"a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83\" returns successfully" May 13 23:51:13.240712 containerd[2727]: time="2025-05-13T23:51:13.240690929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:51:13.622945 containerd[2727]: time="2025-05-13T23:51:13.622855229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.622945 containerd[2727]: time="2025-05-13T23:51:13.622898069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:51:13.623571 containerd[2727]: time="2025-05-13T23:51:13.623549110Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.625184 containerd[2727]: time="2025-05-13T23:51:13.625158633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:51:13.625829 containerd[2727]: time="2025-05-13T23:51:13.625805354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 385.082505ms" May 13 23:51:13.625862 containerd[2727]: time="2025-05-13T23:51:13.625837434Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:51:13.627636 containerd[2727]: time="2025-05-13T23:51:13.627617637Z" level=info msg="CreateContainer within sandbox \"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:51:13.631918 containerd[2727]: time="2025-05-13T23:51:13.631895044Z" level=info msg="Container e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc: CDI devices from CRI Config.CDIDevices: []" May 13 23:51:13.636394 containerd[2727]: time="2025-05-13T23:51:13.636369691Z" level=info msg="CreateContainer within sandbox \"bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc\"" May 13 23:51:13.636712 containerd[2727]: time="2025-05-13T23:51:13.636692892Z" level=info msg="StartContainer for \"e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc\"" May 13 23:51:13.638012 containerd[2727]: time="2025-05-13T23:51:13.637986654Z" level=info msg="connecting to shim e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc" address="unix:///run/containerd/s/f49e0925002a70879fa68b06aac29094f1b941124af7a946d0729178d2abd0d7" protocol=ttrpc version=3 May 13 23:51:13.663063 systemd[1]: Started cri-containerd-e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc.scope - libcontainer container e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc. 
May 13 23:51:13.693908 containerd[2727]: time="2025-05-13T23:51:13.693873465Z" level=info msg="StartContainer for \"e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc\" returns successfully" May 13 23:51:13.775257 kubelet[4434]: I0513 23:51:13.775235 4434 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:51:13.775257 kubelet[4434]: I0513 23:51:13.775259 4434 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:51:13.810319 kubelet[4434]: I0513 23:51:13.810277 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6686cdc664-vf65w" podStartSLOduration=20.810262134 podStartE2EDuration="20.810262134s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 23:51:13.810050213 +0000 UTC m=+40.149892839" watchObservedRunningTime="2025-05-13 23:51:13.810262134 +0000 UTC m=+40.150104760" May 13 23:51:13.817431 kubelet[4434]: I0513 23:51:13.817395 4434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zgfch" podStartSLOduration=20.081599618 podStartE2EDuration="20.817384625s" podCreationTimestamp="2025-05-13 23:50:53 +0000 UTC" firstStartedPulling="2025-05-13 23:51:12.890614748 +0000 UTC m=+39.230457374" lastFinishedPulling="2025-05-13 23:51:13.626399755 +0000 UTC m=+39.966242381" observedRunningTime="2025-05-13 23:51:13.817340905 +0000 UTC m=+40.157183531" watchObservedRunningTime="2025-05-13 23:51:13.817384625 +0000 UTC m=+40.157227251" May 13 23:51:14.144024 systemd-networkd[2632]: cali705785fe9d1: Gained IPv6LL May 13 23:51:14.439778 kubelet[4434]: I0513 23:51:14.439686 4434 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:14.472416 containerd[2727]: time="2025-05-13T23:51:14.472383201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"d65a92d514c03dc1e086f920ba1d701a84daa30f4b0c82bdee1d56d32cdfede9\" pid:7770 exited_at:{seconds:1747180274 nanos:472140481}" May 13 23:51:14.501608 containerd[2727]: time="2025-05-13T23:51:14.501580885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"41a4b637b37942841ac55cc1bf6716441ebc02040a974717432d4d3fa87e0f3e\" pid:7792 exited_at:{seconds:1747180274 nanos:501426005}" May 13 23:51:14.784002 systemd-networkd[2632]: cali65518de40e3: Gained IPv6LL May 13 23:51:14.807845 kubelet[4434]: I0513 23:51:14.807811 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:27.660931 containerd[2727]: time="2025-05-13T23:51:27.660887310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"7b53435b0ee6238358dcdc59e12644ba92e62e9848ff2331d5a586017f6c9fa5\" pid:7835 exited_at:{seconds:1747180287 nanos:660690550}" May 13 23:51:28.605383 kubelet[4434]: I0513 23:51:28.605283 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:32.808901 systemd[1]: Started sshd@7-147.28.150.5:22-193.32.162.137:55822.service - OpenSSH per-connection server daemon (193.32.162.137:55822). May 13 23:51:33.303718 sshd[7880]: Invalid user hanna from 193.32.162.137 port 55822 May 13 23:51:33.421642 sshd[7880]: Connection closed by invalid user hanna 193.32.162.137 port 55822 [preauth] May 13 23:51:33.423730 systemd[1]: sshd@7-147.28.150.5:22-193.32.162.137:55822.service: Deactivated successfully. 
May 13 23:51:44.474509 containerd[2727]: time="2025-05-13T23:51:44.474471313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"39034ec043ccdf66c07e9828b61c2687e44e907b29e638720150d4158136e858\" pid:7911 exited_at:{seconds:1747180304 nanos:474274353}" May 13 23:51:48.614989 kubelet[4434]: I0513 23:51:48.614917 4434 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:51:57.658242 containerd[2727]: time="2025-05-13T23:51:57.658201270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"59f77f6ea2df03988b4897f31ce4f5e0c955f66a875e11ed64198a24c631b65d\" pid:7947 exited_at:{seconds:1747180317 nanos:657984989}" May 13 23:52:14.473524 containerd[2727]: time="2025-05-13T23:52:14.473485376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"ceff59f8925a9210d733dbdf0a48b51c8430642dd92a654b2c617b924d89e51a\" pid:7991 exited_at:{seconds:1747180334 nanos:473293656}" May 13 23:52:16.473372 containerd[2727]: time="2025-05-13T23:52:16.473328805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"ccf6a384ecd94e29be0311503bfd28a000028a17e061e5b47437e062bef3d381\" pid:8012 exited_at:{seconds:1747180336 nanos:473129244}" May 13 23:52:27.667046 containerd[2727]: time="2025-05-13T23:52:27.666999757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"52195a755d9e83f9897939a2edf5106ae470403cddd81ffdfad5491a930a208a\" pid:8038 exited_at:{seconds:1747180347 nanos:666774556}" May 13 23:52:44.471354 containerd[2727]: time="2025-05-13T23:52:44.471308046Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"5fa276c5b4bd49eed54b3541ad8f99efe3b8a412bee944bdd331ee44d5a85140\" pid:8097 exited_at:{seconds:1747180364 nanos:471106365}" May 13 23:52:57.666256 containerd[2727]: time="2025-05-13T23:52:57.666206087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"d49ffd198d8cc6f55b348d8c631a9682a04d4adec47d94557cf1117d2cc2adfc\" pid:8127 exited_at:{seconds:1747180377 nanos:665996087}" May 13 23:53:14.469619 containerd[2727]: time="2025-05-13T23:53:14.469536114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"95f1c05b96641ca818afc94d05eb6a05055a612001553e30d1328240be3f8f3e\" pid:8177 exited_at:{seconds:1747180394 nanos:469308514}" May 13 23:53:16.476613 containerd[2727]: time="2025-05-13T23:53:16.476573766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"0d3a8d6fab8b9f6a537d508629bf92cf036404b8bdd1fabb5b859bdb4a9f1190\" pid:8202 exited_at:{seconds:1747180396 nanos:476337046}" May 13 23:53:20.526873 systemd[1]: Started sshd@8-147.28.150.5:22-193.32.162.135:35844.service - OpenSSH per-connection server daemon (193.32.162.135:35844). May 13 23:53:21.003219 sshd[8219]: Invalid user ts3 from 193.32.162.135 port 35844 May 13 23:53:21.115143 sshd[8219]: Connection closed by invalid user ts3 193.32.162.135 port 35844 [preauth] May 13 23:53:21.117098 systemd[1]: sshd@8-147.28.150.5:22-193.32.162.135:35844.service: Deactivated successfully. 
May 13 23:53:27.665324 containerd[2727]: time="2025-05-13T23:53:27.665286989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"72a6495ad9e71caffd90941f39f600f9f5c90e1367820299f9353b62c81c8ffb\" pid:8235 exited_at:{seconds:1747180407 nanos:665070989}" May 13 23:53:44.474214 containerd[2727]: time="2025-05-13T23:53:44.474170980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"c5a58e9af9a9593dcfb204502e9293d4141e4b2fe8c4710d50c4d6ad4539182e\" pid:8266 exited_at:{seconds:1747180424 nanos:473933060}" May 13 23:53:57.661098 containerd[2727]: time="2025-05-13T23:53:57.661056803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"49fcf82fd015150194e1d5a1eec273049a0a6a57298bca921a57919bd0688200\" pid:8303 exited_at:{seconds:1747180437 nanos:660752483}" May 13 23:54:14.470711 containerd[2727]: time="2025-05-13T23:54:14.470666212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"584e39e7e362d9cf0c6b861d8fa89af9217ea2934d468977cd576a9bfa3a6b09\" pid:8351 exited_at:{seconds:1747180454 nanos:470497492}" May 13 23:54:16.472413 containerd[2727]: time="2025-05-13T23:54:16.472377766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"dc222e4d166c7f5be6d773e77b15130eada9cff5820b411b0de532b7ac00aff6\" pid:8377 exited_at:{seconds:1747180456 nanos:472183206}" May 13 23:54:27.667612 containerd[2727]: time="2025-05-13T23:54:27.667571635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" 
id:\"0351fc95ce5812dfd7251b356cbf8f6611cfbc977bc9659ccd94a7feb1121b12\" pid:8400 exited_at:{seconds:1747180467 nanos:667319914}" May 13 23:54:44.473668 containerd[2727]: time="2025-05-13T23:54:44.473624674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"47d3634343be9276d675c256df8a6393bf88a27366f6d942f893f5d065114ae6\" pid:8432 exited_at:{seconds:1747180484 nanos:473443593}" May 13 23:54:57.658597 containerd[2727]: time="2025-05-13T23:54:57.658545117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"8b85ec6a9e309ababa8e41e60782cfe67f18ccaf27e6af129feb878468e30890\" pid:8455 exited_at:{seconds:1747180497 nanos:658289076}" May 13 23:55:14.474637 containerd[2727]: time="2025-05-13T23:55:14.474584915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"28da5e98bf4992a6677121f01c18cda474622416f7060efc19a4eb13f5a0a084\" pid:8503 exited_at:{seconds:1747180514 nanos:474399554}" May 13 23:55:16.473767 containerd[2727]: time="2025-05-13T23:55:16.473736170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"1dce4e6a90a1ef09c8a7e63ea060884672551571a9b8725c1e01df861c48f20b\" pid:8529 exited_at:{seconds:1747180516 nanos:473562570}" May 13 23:55:24.472847 systemd[1]: Started sshd@9-147.28.150.5:22-195.178.110.26:53660.service - OpenSSH per-connection server daemon (195.178.110.26:53660). May 13 23:55:24.919470 sshd[8549]: Connection closed by authenticating user root 195.178.110.26 port 53660 [preauth] May 13 23:55:24.921306 systemd[1]: sshd@9-147.28.150.5:22-195.178.110.26:53660.service: Deactivated successfully. 
May 13 23:55:27.660689 containerd[2727]: time="2025-05-13T23:55:27.660647388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"4ceac7c2b606e10587ed9e1a593e3ed4a8c1672811886048365fdff2ea4a14af\" pid:8566 exited_at:{seconds:1747180527 nanos:660438228}" May 13 23:55:29.694181 containerd[2727]: time="2025-05-13T23:55:29.694089219Z" level=warning msg="container event discarded" container=08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813 type=CONTAINER_CREATED_EVENT May 13 23:55:29.705360 containerd[2727]: time="2025-05-13T23:55:29.705322558Z" level=warning msg="container event discarded" container=08f7bf3a8b36a55ed65b65eeb588ccb95c41d31dfe01ac06147198a6a7e24813 type=CONTAINER_STARTED_EVENT May 13 23:55:29.705360 containerd[2727]: time="2025-05-13T23:55:29.705351478Z" level=warning msg="container event discarded" container=3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608 type=CONTAINER_CREATED_EVENT May 13 23:55:29.705360 containerd[2727]: time="2025-05-13T23:55:29.705360158Z" level=warning msg="container event discarded" container=3d1572ef75def43a7f2b5fee3a325cdf0ed7429fd63db3d819ebd1539bfca608 type=CONTAINER_STARTED_EVENT May 13 23:55:29.705505 containerd[2727]: time="2025-05-13T23:55:29.705367798Z" level=warning msg="container event discarded" container=b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b type=CONTAINER_CREATED_EVENT May 13 23:55:29.705505 containerd[2727]: time="2025-05-13T23:55:29.705374638Z" level=warning msg="container event discarded" container=b0f7b079c5ddbdebe4f6d6844dc2973614ba4d0e119dabb160af790c4612d03b type=CONTAINER_STARTED_EVENT May 13 23:55:29.716554 containerd[2727]: time="2025-05-13T23:55:29.716532096Z" level=warning msg="container event discarded" container=7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c type=CONTAINER_CREATED_EVENT May 13 23:55:29.716554 containerd[2727]: 
time="2025-05-13T23:55:29.716546136Z" level=warning msg="container event discarded" container=85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61 type=CONTAINER_CREATED_EVENT May 13 23:55:29.716554 containerd[2727]: time="2025-05-13T23:55:29.716554016Z" level=warning msg="container event discarded" container=88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59 type=CONTAINER_CREATED_EVENT May 13 23:55:29.772597 containerd[2727]: time="2025-05-13T23:55:29.772574468Z" level=warning msg="container event discarded" container=85e46046574e402242e84def5048bf7e61f7350a6e1c3b525cfd51fe15bb4c61 type=CONTAINER_STARTED_EVENT May 13 23:55:29.772597 containerd[2727]: time="2025-05-13T23:55:29.772595188Z" level=warning msg="container event discarded" container=88bd3caa792de1314021d7fb091aba6b8837c7f30810f67844ac9d8dbfad5e59 type=CONTAINER_STARTED_EVENT May 13 23:55:29.772687 containerd[2727]: time="2025-05-13T23:55:29.772605868Z" level=warning msg="container event discarded" container=7b55c023c0d1c0232e071447d0fed2f68b2b2c0c4f823616b7b0bd11889e7d8c type=CONTAINER_STARTED_EVENT May 13 23:55:35.821035 update_engine[2721]: I20250513 23:55:35.820979 2721 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 23:55:35.821931 update_engine[2721]: I20250513 23:55:35.821502 2721 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 23:55:35.821931 update_engine[2721]: I20250513 23:55:35.821714 2721 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 23:55:35.822091 update_engine[2721]: I20250513 23:55:35.822057 2721 omaha_request_params.cc:62] Current group set to alpha May 13 23:55:35.822159 update_engine[2721]: I20250513 23:55:35.822139 2721 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 23:55:35.822159 update_engine[2721]: I20250513 23:55:35.822149 2721 update_attempter.cc:643] Scheduling an action processor start. 
May 13 23:55:35.822207 update_engine[2721]: I20250513 23:55:35.822162 2721 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 23:55:35.822207 update_engine[2721]: I20250513 23:55:35.822189 2721 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 23:55:35.822250 update_engine[2721]: I20250513 23:55:35.822234 2721 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 23:55:35.822250 update_engine[2721]: I20250513 23:55:35.822242 2721 omaha_request_action.cc:272] Request: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: May 13 23:55:35.822250 update_engine[2721]: I20250513 23:55:35.822247 2721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 23:55:35.822444 locksmithd[2749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 13 23:55:35.823265 update_engine[2721]: I20250513 23:55:35.823194 2721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 23:55:35.823519 update_engine[2721]: I20250513 23:55:35.823478 2721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 23:55:35.824192 update_engine[2721]: E20250513 23:55:35.824112 2721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 23:55:35.824192 update_engine[2721]: I20250513 23:55:35.824172 2721 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 13 23:55:44.474538 containerd[2727]: time="2025-05-13T23:55:44.474486361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"461b6e06cac5d0a3f5640a259f49b350594b60e0c719781834827f8bf9a32d11\" pid:8600 exited_at:{seconds:1747180544 nanos:474278800}" May 13 23:55:45.729987 update_engine[2721]: I20250513 23:55:45.729936 2721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 23:55:45.730361 update_engine[2721]: I20250513 23:55:45.730130 2721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 23:55:45.730361 update_engine[2721]: I20250513 23:55:45.730331 2721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 23:55:45.730861 update_engine[2721]: E20250513 23:55:45.730819 2721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 23:55:45.730861 update_engine[2721]: I20250513 23:55:45.730861 2721 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 13 23:55:48.087862 containerd[2727]: time="2025-05-13T23:55:48.087775387Z" level=warning msg="container event discarded" container=a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01 type=CONTAINER_CREATED_EVENT May 13 23:55:48.087862 containerd[2727]: time="2025-05-13T23:55:48.087841668Z" level=warning msg="container event discarded" container=a8f8a6b73ebaaeda104d2336b8a357f032892613f2ad23f9b68269b644d7bb01 type=CONTAINER_STARTED_EVENT May 13 23:55:48.327553 containerd[2727]: time="2025-05-13T23:55:48.327523458Z" level=warning msg="container event discarded" container=e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9 type=CONTAINER_CREATED_EVENT May 13 23:55:48.327680 containerd[2727]: time="2025-05-13T23:55:48.327651618Z" level=warning msg="container event discarded" container=e2fc8235faf13232aeaab0ea9cd45391f8d5586d224e4ae94335cb929f65ffa9 type=CONTAINER_STARTED_EVENT May 13 23:55:48.338931 containerd[2727]: time="2025-05-13T23:55:48.338825154Z" level=warning msg="container event discarded" container=9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69 type=CONTAINER_CREATED_EVENT May 13 23:55:48.395033 containerd[2727]: time="2025-05-13T23:55:48.395004876Z" level=warning msg="container event discarded" container=9681bd32731c0a446116412a8f63b5c458e3c0006285a5b51eee664f3611ec69 type=CONTAINER_STARTED_EVENT May 13 23:55:49.873802 containerd[2727]: time="2025-05-13T23:55:49.873758311Z" level=warning msg="container event discarded" container=7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13 type=CONTAINER_CREATED_EVENT May 13 23:55:49.925978 containerd[2727]: time="2025-05-13T23:55:49.925910146Z" level=warning 
msg="container event discarded" container=7129567b6d4ee73cd8d0334627db244b65c00242556c1cd4b9577261020aff13 type=CONTAINER_STARTED_EVENT May 13 23:55:53.999697 containerd[2727]: time="2025-05-13T23:55:53.999653830Z" level=warning msg="container event discarded" container=23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3 type=CONTAINER_CREATED_EVENT May 13 23:55:53.999697 containerd[2727]: time="2025-05-13T23:55:53.999682191Z" level=warning msg="container event discarded" container=23b329964eb78bc079cc8230d4e305cc38b529b3dd0e755eba8b6e5206adf3a3 type=CONTAINER_STARTED_EVENT May 13 23:55:53.999697 containerd[2727]: time="2025-05-13T23:55:53.999691231Z" level=warning msg="container event discarded" container=4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a type=CONTAINER_CREATED_EVENT May 13 23:55:53.999697 containerd[2727]: time="2025-05-13T23:55:53.999699551Z" level=warning msg="container event discarded" container=4b73f2f1b676eab3f300146ec078811b68e53bfef9af2dd1b75b20b1763f4b7a type=CONTAINER_STARTED_EVENT May 13 23:55:54.747192 containerd[2727]: time="2025-05-13T23:55:54.747155089Z" level=warning msg="container event discarded" container=464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37 type=CONTAINER_CREATED_EVENT May 13 23:55:54.808379 containerd[2727]: time="2025-05-13T23:55:54.808350335Z" level=warning msg="container event discarded" container=464dc1dfe90f9b98258b789d8a6dbacf4325770639c8de263c629152fa2b9c37 type=CONTAINER_STARTED_EVENT May 13 23:55:55.005868 containerd[2727]: time="2025-05-13T23:55:55.005772095Z" level=warning msg="container event discarded" container=68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4 type=CONTAINER_CREATED_EVENT May 13 23:55:55.059997 containerd[2727]: time="2025-05-13T23:55:55.059972531Z" level=warning msg="container event discarded" container=68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4 type=CONTAINER_STARTED_EVENT May 13 23:55:55.209313 
containerd[2727]: time="2025-05-13T23:55:55.209291261Z" level=warning msg="container event discarded" container=68660c8b05215e98661e5d2dc763329cc1315a6e85aaedaa5d7b0ce5d71fb4e4 type=CONTAINER_STOPPED_EVENT May 13 23:55:55.731013 update_engine[2721]: I20250513 23:55:55.730916 2721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 23:55:55.731426 update_engine[2721]: I20250513 23:55:55.731148 2721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 23:55:55.731426 update_engine[2721]: I20250513 23:55:55.731351 2721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 23:55:55.731730 update_engine[2721]: E20250513 23:55:55.731711 2721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 23:55:55.731760 update_engine[2721]: I20250513 23:55:55.731748 2721 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 13 23:55:56.882016 containerd[2727]: time="2025-05-13T23:55:56.881978731Z" level=warning msg="container event discarded" container=f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295 type=CONTAINER_CREATED_EVENT May 13 23:55:56.932218 containerd[2727]: time="2025-05-13T23:55:56.932176401Z" level=warning msg="container event discarded" container=f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295 type=CONTAINER_STARTED_EVENT May 13 23:55:57.462444 containerd[2727]: time="2025-05-13T23:55:57.462413941Z" level=warning msg="container event discarded" container=f5d4ab2789db62bea4e815fb2b80a1c568956f9d60ecef9352544e72ca90b295 type=CONTAINER_STOPPED_EVENT May 13 23:55:57.662714 containerd[2727]: time="2025-05-13T23:55:57.662679540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"29f20d313c273c8bd90066737e865ccb70e99b9fc1c826d67f1cb9b901e138a7\" pid:8644 exited_at:{seconds:1747180557 nanos:662463500}" May 13 23:55:59.971017 containerd[2727]: 
time="2025-05-13T23:55:59.970923978Z" level=warning msg="container event discarded" container=4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4 type=CONTAINER_CREATED_EVENT May 13 23:56:00.022140 containerd[2727]: time="2025-05-13T23:56:00.022100528Z" level=warning msg="container event discarded" container=4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4 type=CONTAINER_STARTED_EVENT May 13 23:56:05.727329 update_engine[2721]: I20250513 23:56:05.727259 2721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 23:56:05.727745 update_engine[2721]: I20250513 23:56:05.727555 2721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 23:56:05.727778 update_engine[2721]: I20250513 23:56:05.727754 2721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 13 23:56:05.728138 update_engine[2721]: E20250513 23:56:05.728120 2721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 23:56:05.728171 update_engine[2721]: I20250513 23:56:05.728152 2721 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 23:56:05.728171 update_engine[2721]: I20250513 23:56:05.728158 2721 omaha_request_action.cc:617] Omaha request response: May 13 23:56:05.728239 update_engine[2721]: E20250513 23:56:05.728227 2721 omaha_request_action.cc:636] Omaha request network transfer failed. May 13 23:56:05.728260 update_engine[2721]: I20250513 23:56:05.728243 2721 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 13 23:56:05.728260 update_engine[2721]: I20250513 23:56:05.728250 2721 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 23:56:05.728260 update_engine[2721]: I20250513 23:56:05.728253 2721 update_attempter.cc:306] Processing Done. 
May 13 23:56:05.728315 update_engine[2721]: E20250513 23:56:05.728267 2721 update_attempter.cc:619] Update failed. May 13 23:56:05.728315 update_engine[2721]: I20250513 23:56:05.728272 2721 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 13 23:56:05.728315 update_engine[2721]: I20250513 23:56:05.728276 2721 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 13 23:56:05.728315 update_engine[2721]: I20250513 23:56:05.728281 2721 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 13 23:56:05.728395 update_engine[2721]: I20250513 23:56:05.728341 2721 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 23:56:05.728395 update_engine[2721]: I20250513 23:56:05.728361 2721 omaha_request_action.cc:271] Posting an Omaha request to disabled May 13 23:56:05.728395 update_engine[2721]: I20250513 23:56:05.728366 2721 omaha_request_action.cc:272] Request: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: May 13 23:56:05.728395 update_engine[2721]: I20250513 23:56:05.728371 2721 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 23:56:05.728562 update_engine[2721]: I20250513 23:56:05.728482 2721 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 23:56:05.728583 locksmithd[2749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 13 23:56:05.728750 update_engine[2721]: I20250513 23:56:05.728631 2721 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 13 23:56:05.729129 update_engine[2721]: E20250513 23:56:05.729113 2721 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 23:56:05.729156 update_engine[2721]: I20250513 23:56:05.729146 2721 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 23:56:05.729156 update_engine[2721]: I20250513 23:56:05.729152 2721 omaha_request_action.cc:617] Omaha request response: May 13 23:56:05.729194 update_engine[2721]: I20250513 23:56:05.729157 2721 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 23:56:05.729194 update_engine[2721]: I20250513 23:56:05.729161 2721 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 23:56:05.729194 update_engine[2721]: I20250513 23:56:05.729166 2721 update_attempter.cc:306] Processing Done. May 13 23:56:05.729194 update_engine[2721]: I20250513 23:56:05.729171 2721 update_attempter.cc:310] Error event sent. 
May 13 23:56:05.729194 update_engine[2721]: I20250513 23:56:05.729177 2721 update_check_scheduler.cc:74] Next update check in 47m42s May 13 23:56:05.729374 locksmithd[2749]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 13 23:56:08.924079 containerd[2727]: time="2025-05-13T23:56:08.924010973Z" level=warning msg="container event discarded" container=f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a type=CONTAINER_CREATED_EVENT May 13 23:56:08.924079 containerd[2727]: time="2025-05-13T23:56:08.924065134Z" level=warning msg="container event discarded" container=f738b63aa58a40aef138fe2356ed1bf422c2d72734ddf855a55f730ba1b8879a type=CONTAINER_STARTED_EVENT May 13 23:56:08.924079 containerd[2727]: time="2025-05-13T23:56:08.924073254Z" level=warning msg="container event discarded" container=261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8 type=CONTAINER_CREATED_EVENT May 13 23:56:08.924079 containerd[2727]: time="2025-05-13T23:56:08.924080854Z" level=warning msg="container event discarded" container=261f6427e66e38c2b8f7e5ae55c7e3b8a3b0dafa2b6374adac69a46eaa83e7f8 type=CONTAINER_STARTED_EVENT May 13 23:56:08.924079 containerd[2727]: time="2025-05-13T23:56:08.924087614Z" level=warning msg="container event discarded" container=10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c type=CONTAINER_CREATED_EVENT May 13 23:56:08.975278 containerd[2727]: time="2025-05-13T23:56:08.975245041Z" level=warning msg="container event discarded" container=10f361b1d4ed0e231b734bd01cc2106a035d01957073f0a536875b78aae2368c type=CONTAINER_STARTED_EVENT May 13 23:56:09.652708 containerd[2727]: time="2025-05-13T23:56:09.652674174Z" level=warning msg="container event discarded" container=7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857 type=CONTAINER_CREATED_EVENT May 13 23:56:09.708103 containerd[2727]: time="2025-05-13T23:56:09.708050207Z" level=warning msg="container event discarded" 
container=7732df9389a4ada612da3ac467eccc79c7033e6e5435f693998aea160c8f3857 type=CONTAINER_STARTED_EVENT May 13 23:56:10.911181 containerd[2727]: time="2025-05-13T23:56:10.911117348Z" level=warning msg="container event discarded" container=6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba type=CONTAINER_CREATED_EVENT May 13 23:56:10.911181 containerd[2727]: time="2025-05-13T23:56:10.911157148Z" level=warning msg="container event discarded" container=6dc4757f5b6988d2213f0d4a83c443505b20b65acdeabb28d65aa440937bd3ba type=CONTAINER_STARTED_EVENT May 13 23:56:10.941385 containerd[2727]: time="2025-05-13T23:56:10.941324508Z" level=warning msg="container event discarded" container=eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469 type=CONTAINER_CREATED_EVENT May 13 23:56:10.941385 containerd[2727]: time="2025-05-13T23:56:10.941354668Z" level=warning msg="container event discarded" container=eb7661daaae14fcca85b54223536a7ecaaff3f8cc0209e1fa269e849d2b98469 type=CONTAINER_STARTED_EVENT May 13 23:56:10.941385 containerd[2727]: time="2025-05-13T23:56:10.941362868Z" level=warning msg="container event discarded" container=302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82 type=CONTAINER_CREATED_EVENT May 13 23:56:10.996570 containerd[2727]: time="2025-05-13T23:56:10.996532500Z" level=warning msg="container event discarded" container=302476b6acd1ff953767f98282607933bbe832ab0a23c1bb234ef16701c08e82 type=CONTAINER_STARTED_EVENT May 13 23:56:11.572731 containerd[2727]: time="2025-05-13T23:56:11.572689893Z" level=warning msg="container event discarded" container=a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1 type=CONTAINER_CREATED_EVENT May 13 23:56:11.626942 containerd[2727]: time="2025-05-13T23:56:11.626877164Z" level=warning msg="container event discarded" container=a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1 type=CONTAINER_STARTED_EVENT May 13 23:56:12.900257 containerd[2727]: 
time="2025-05-13T23:56:12.900180943Z" level=warning msg="container event discarded" container=bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175 type=CONTAINER_CREATED_EVENT May 13 23:56:12.900257 containerd[2727]: time="2025-05-13T23:56:12.900209983Z" level=warning msg="container event discarded" container=bb31eae0fc81be6582597cdd40960040446b43ca38194beeee0661a2a79c7175 type=CONTAINER_STARTED_EVENT May 13 23:56:12.900257 containerd[2727]: time="2025-05-13T23:56:12.900218023Z" level=warning msg="container event discarded" container=10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d type=CONTAINER_CREATED_EVENT May 13 23:56:12.900257 containerd[2727]: time="2025-05-13T23:56:12.900224783Z" level=warning msg="container event discarded" container=10cc98918afc258711930b9201a03df2b670bb0d671f4caaeb64e88acd46465d type=CONTAINER_STARTED_EVENT May 13 23:56:12.900257 containerd[2727]: time="2025-05-13T23:56:12.900237783Z" level=warning msg="container event discarded" container=57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962 type=CONTAINER_CREATED_EVENT May 13 23:56:12.955433 containerd[2727]: time="2025-05-13T23:56:12.955397135Z" level=warning msg="container event discarded" container=57b5820b5bcfcf6e0dd8bb794d7422734a69f28d35e154be55b669601511e962 type=CONTAINER_STARTED_EVENT May 13 23:56:13.201951 containerd[2727]: time="2025-05-13T23:56:13.201849014Z" level=warning msg="container event discarded" container=a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83 type=CONTAINER_CREATED_EVENT May 13 23:56:13.249092 containerd[2727]: time="2025-05-13T23:56:13.249041196Z" level=warning msg="container event discarded" container=a7c646e3f4f38deb4fc7294eeb9783ea7c19f4dc9a9b6a3714513ecf91d8da83 type=CONTAINER_STARTED_EVENT May 13 23:56:13.646539 containerd[2727]: time="2025-05-13T23:56:13.646506231Z" level=warning msg="container event discarded" container=e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc 
type=CONTAINER_CREATED_EVENT May 13 23:56:13.702715 containerd[2727]: time="2025-05-13T23:56:13.702683183Z" level=warning msg="container event discarded" container=e65ccaf52adf53873eb6ee467152d165cbf124c21e38346b2bc4d64522b14dbc type=CONTAINER_STARTED_EVENT May 13 23:56:14.473412 containerd[2727]: time="2025-05-13T23:56:14.473363300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"49555be30656eb1b3cbabe20500dc95dc07c571cb825b056db5f3154e6988d4f\" pid:8677 exited_at:{seconds:1747180574 nanos:473165939}" May 13 23:56:16.477447 containerd[2727]: time="2025-05-13T23:56:16.477416116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"bfe7b45582637d1696fa1696cb059461b5011e9b54a0f4a8795289638bd90017\" pid:8700 exited_at:{seconds:1747180576 nanos:477257675}" May 13 23:56:27.660140 containerd[2727]: time="2025-05-13T23:56:27.660096367Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"4e84b939410e3cb0ba701f1b58516335c1e7e290ba1e4b13cb867e43eb152445\" pid:8723 exited_at:{seconds:1747180587 nanos:659829527}" May 13 23:56:44.484490 containerd[2727]: time="2025-05-13T23:56:44.484449814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"7f4f2bcc8ad0ef58ae5b76bdfe66b4751ce39a4760db1b94112c74f1ea959829\" pid:8766 exited_at:{seconds:1747180604 nanos:484247174}" May 13 23:56:57.660843 containerd[2727]: time="2025-05-13T23:56:57.660795434Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"f1672b4816cde01d47de46a712a6d06432a243eac82fc76bb136e808be93d962\" pid:8792 exited_at:{seconds:1747180617 nanos:660549714}" May 13 23:57:14.469406 
containerd[2727]: time="2025-05-13T23:57:14.469365736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"674b5ad4f1f3c20c90b2b3a650d0b0e39ea6dd5e73f869796b8afbb4504b1e86\" pid:8822 exited_at:{seconds:1747180634 nanos:469165456}" May 13 23:57:16.473497 containerd[2727]: time="2025-05-13T23:57:16.473458945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"d8f03e350682897ff962c9eddee55209deaad68a9a90d225f21278c2657d4525\" pid:8843 exited_at:{seconds:1747180636 nanos:473285304}" May 13 23:57:27.663055 containerd[2727]: time="2025-05-13T23:57:27.663010231Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"1ede5d5598e7852fc5ff2009a09d5e5d11a450375df63502fcfd00f94e9ed5af\" pid:8896 exited_at:{seconds:1747180647 nanos:662772671}" May 13 23:57:44.469345 containerd[2727]: time="2025-05-13T23:57:44.469299939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"12993aaae10b677a62986cc472374bb60a09fe0a4b1c85617cb957080cbb743b\" pid:8928 exited_at:{seconds:1747180664 nanos:469124939}" May 13 23:57:47.910915 systemd[1]: Started sshd@10-147.28.150.5:22-193.32.162.137:37208.service - OpenSSH per-connection server daemon (193.32.162.137:37208). May 13 23:57:48.353263 sshd[8940]: Invalid user haoran from 193.32.162.137 port 37208 May 13 23:57:48.458288 sshd[8940]: Connection closed by invalid user haoran 193.32.162.137 port 37208 [preauth] May 13 23:57:48.460209 systemd[1]: sshd@10-147.28.150.5:22-193.32.162.137:37208.service: Deactivated successfully. 
May 13 23:57:57.664412 containerd[2727]: time="2025-05-13T23:57:57.664359646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"6026dac5485ad92a4c04026739a47360b00c42ffb8092e36552ce452160bb46f\" pid:8958 exited_at:{seconds:1747180677 nanos:664117366}" May 13 23:58:14.473475 containerd[2727]: time="2025-05-13T23:58:14.473432878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"4a803d929fae06a5883e42f603cd388a3f40a92878975fb973c6cf1c0485ab92\" pid:8988 exited_at:{seconds:1747180694 nanos:473264357}" May 13 23:58:16.477401 containerd[2727]: time="2025-05-13T23:58:16.477352056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"caa9ed556ec5e889853839226ea90c094e47702cd392a67d88f5854351fdb54b\" pid:9011 exited_at:{seconds:1747180696 nanos:477187575}" May 13 23:58:27.659865 containerd[2727]: time="2025-05-13T23:58:27.659824421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"969ee360ae7ccb2c7fe57853425aaee607ef6ab610efa1eaa30e27fec3327895\" pid:9036 exited_at:{seconds:1747180707 nanos:659571820}" May 13 23:58:44.476771 containerd[2727]: time="2025-05-13T23:58:44.476719739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"9ed1bc4f759346f9b85ff2da691ddbfd4c4338cd7ea775104975adf6f1b2db28\" pid:9068 exited_at:{seconds:1747180724 nanos:476525218}" May 13 23:58:57.666666 containerd[2727]: time="2025-05-13T23:58:57.666621796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" 
id:\"cf6fd4c34809545ddd65dc18fe9ffa91b3a890fb8f4b2768d58a3ad3f550e4c0\" pid:9108 exited_at:{seconds:1747180737 nanos:666391355}" May 13 23:59:14.471683 containerd[2727]: time="2025-05-13T23:59:14.471603438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"88dd2a1fd92843fccae63ef8d5634a74f401e76e68c6f0f9b9166f674d176827\" pid:9148 exited_at:{seconds:1747180754 nanos:471367917}" May 13 23:59:16.474314 containerd[2727]: time="2025-05-13T23:59:16.474279627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"906161c84f94baafdb89d0aa7d4a1cf22cab912fc545fa48c8a0e52f6442a989\" pid:9171 exited_at:{seconds:1747180756 nanos:474089666}" May 13 23:59:27.663780 containerd[2727]: time="2025-05-13T23:59:27.663721863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"f16a35e05f063177a170012217070440bafbaab0b83aed1fe15c1bc03b14a31c\" pid:9200 exited_at:{seconds:1747180767 nanos:663447022}" May 13 23:59:44.470557 containerd[2727]: time="2025-05-13T23:59:44.470507097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"0e6d63f0ebab838020f6fec0bae15f80561a046a1f9f620f59e2f0229b8f4b9b\" pid:9242 exited_at:{seconds:1747180784 nanos:470251296}" May 13 23:59:45.406803 systemd[1]: Started sshd@11-147.28.150.5:22-139.178.68.195:59588.service - OpenSSH per-connection server daemon (139.178.68.195:59588). 
May 13 23:59:45.839517 sshd[9257]: Accepted publickey for core from 139.178.68.195 port 59588 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:59:45.840549 sshd-session[9257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:45.843961 systemd-logind[2710]: New session 10 of user core.
May 13 23:59:45.856990 systemd[1]: Started session-10.scope - Session 10 of User core.
May 13 23:59:46.206758 sshd[9259]: Connection closed by 139.178.68.195 port 59588
May 13 23:59:46.207144 sshd-session[9257]: pam_unix(sshd:session): session closed for user core
May 13 23:59:46.210114 systemd[1]: sshd@11-147.28.150.5:22-139.178.68.195:59588.service: Deactivated successfully.
May 13 23:59:46.212579 systemd[1]: session-10.scope: Deactivated successfully.
May 13 23:59:46.213203 systemd-logind[2710]: Session 10 logged out. Waiting for processes to exit.
May 13 23:59:46.213784 systemd-logind[2710]: Removed session 10.
May 13 23:59:47.416969 systemd[1]: Started sshd@12-147.28.150.5:22-193.32.162.135:38732.service - OpenSSH per-connection server daemon (193.32.162.135:38732).
May 13 23:59:47.911711 sshd[9299]: Invalid user node from 193.32.162.135 port 38732
May 13 23:59:48.029430 sshd[9299]: Connection closed by invalid user node 193.32.162.135 port 38732 [preauth]
May 13 23:59:48.031413 systemd[1]: sshd@12-147.28.150.5:22-193.32.162.135:38732.service: Deactivated successfully.
May 13 23:59:51.279853 systemd[1]: Started sshd@13-147.28.150.5:22-139.178.68.195:59598.service - OpenSSH per-connection server daemon (139.178.68.195:59598).
May 13 23:59:51.690451 sshd[9307]: Accepted publickey for core from 139.178.68.195 port 59598 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:59:51.691489 sshd-session[9307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:51.694842 systemd-logind[2710]: New session 11 of user core.
May 13 23:59:51.708988 systemd[1]: Started session-11.scope - Session 11 of User core.
May 13 23:59:52.042484 sshd[9309]: Connection closed by 139.178.68.195 port 59598
May 13 23:59:52.042836 sshd-session[9307]: pam_unix(sshd:session): session closed for user core
May 13 23:59:52.045744 systemd[1]: sshd@13-147.28.150.5:22-139.178.68.195:59598.service: Deactivated successfully.
May 13 23:59:52.047429 systemd[1]: session-11.scope: Deactivated successfully.
May 13 23:59:52.047994 systemd-logind[2710]: Session 11 logged out. Waiting for processes to exit.
May 13 23:59:52.048529 systemd-logind[2710]: Removed session 11.
May 13 23:59:52.113831 systemd[1]: Started sshd@14-147.28.150.5:22-139.178.68.195:59602.service - OpenSSH per-connection server daemon (139.178.68.195:59602).
May 13 23:59:52.523238 sshd[9349]: Accepted publickey for core from 139.178.68.195 port 59602 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:59:52.524360 sshd-session[9349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:52.527705 systemd-logind[2710]: New session 12 of user core.
May 13 23:59:52.541046 systemd[1]: Started session-12.scope - Session 12 of User core.
May 13 23:59:52.900358 sshd[9355]: Connection closed by 139.178.68.195 port 59602
May 13 23:59:52.900729 sshd-session[9349]: pam_unix(sshd:session): session closed for user core
May 13 23:59:52.903669 systemd[1]: sshd@14-147.28.150.5:22-139.178.68.195:59602.service: Deactivated successfully.
May 13 23:59:52.905371 systemd[1]: session-12.scope: Deactivated successfully.
May 13 23:59:52.905917 systemd-logind[2710]: Session 12 logged out. Waiting for processes to exit.
May 13 23:59:52.906465 systemd-logind[2710]: Removed session 12.
May 13 23:59:52.972772 systemd[1]: Started sshd@15-147.28.150.5:22-139.178.68.195:59616.service - OpenSSH per-connection server daemon (139.178.68.195:59616).
May 13 23:59:53.389371 sshd[9389]: Accepted publickey for core from 139.178.68.195 port 59616 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:59:53.390540 sshd-session[9389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:53.394011 systemd-logind[2710]: New session 13 of user core.
May 13 23:59:53.403994 systemd[1]: Started session-13.scope - Session 13 of User core.
May 13 23:59:53.740736 sshd[9391]: Connection closed by 139.178.68.195 port 59616
May 13 23:59:53.741072 sshd-session[9389]: pam_unix(sshd:session): session closed for user core
May 13 23:59:53.743976 systemd[1]: sshd@15-147.28.150.5:22-139.178.68.195:59616.service: Deactivated successfully.
May 13 23:59:53.745686 systemd[1]: session-13.scope: Deactivated successfully.
May 13 23:59:53.746247 systemd-logind[2710]: Session 13 logged out. Waiting for processes to exit.
May 13 23:59:53.746805 systemd-logind[2710]: Removed session 13.
May 13 23:59:57.662823 containerd[2727]: time="2025-05-13T23:59:57.662783586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4761cab5b651740e146617757c323bb62b8f152b764be94ff9fa14511038fca4\" id:\"bc0244f64b870a2992341eae52e83972cadc8e0b77e5094041b95750dcd8a632\" pid:9438 exited_at:{seconds:1747180797 nanos:662547066}"
May 13 23:59:58.814919 systemd[1]: Started sshd@16-147.28.150.5:22-139.178.68.195:42240.service - OpenSSH per-connection server daemon (139.178.68.195:42240).
May 13 23:59:59.224029 sshd[9457]: Accepted publickey for core from 139.178.68.195 port 42240 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 13 23:59:59.225060 sshd-session[9457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 23:59:59.228143 systemd-logind[2710]: New session 14 of user core.
May 13 23:59:59.239983 systemd[1]: Started session-14.scope - Session 14 of User core.
May 13 23:59:59.575176 sshd[9459]: Connection closed by 139.178.68.195 port 42240
May 13 23:59:59.575522 sshd-session[9457]: pam_unix(sshd:session): session closed for user core
May 13 23:59:59.578413 systemd[1]: sshd@16-147.28.150.5:22-139.178.68.195:42240.service: Deactivated successfully.
May 13 23:59:59.580115 systemd[1]: session-14.scope: Deactivated successfully.
May 13 23:59:59.580655 systemd-logind[2710]: Session 14 logged out. Waiting for processes to exit.
May 13 23:59:59.581194 systemd-logind[2710]: Removed session 14.
May 13 23:59:59.654762 systemd[1]: Started sshd@17-147.28.150.5:22-139.178.68.195:42256.service - OpenSSH per-connection server daemon (139.178.68.195:42256).
May 14 00:00:00.004566 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 14 00:00:00.020996 systemd[1]: logrotate.service: Deactivated successfully.
May 14 00:00:00.077355 sshd[9494]: Accepted publickey for core from 139.178.68.195 port 42256 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:00.078419 sshd-session[9494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:00.081553 systemd-logind[2710]: New session 15 of user core.
May 14 00:00:00.091001 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:00:00.563270 sshd[9498]: Connection closed by 139.178.68.195 port 42256
May 14 00:00:00.563605 sshd-session[9494]: pam_unix(sshd:session): session closed for user core
May 14 00:00:00.566529 systemd[1]: sshd@17-147.28.150.5:22-139.178.68.195:42256.service: Deactivated successfully.
May 14 00:00:00.568241 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:00:00.568796 systemd-logind[2710]: Session 15 logged out. Waiting for processes to exit.
May 14 00:00:00.569387 systemd-logind[2710]: Removed session 15.
May 14 00:00:00.639789 systemd[1]: Started sshd@18-147.28.150.5:22-139.178.68.195:42270.service - OpenSSH per-connection server daemon (139.178.68.195:42270).
May 14 00:00:01.059520 sshd[9530]: Accepted publickey for core from 139.178.68.195 port 42270 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:01.060565 sshd-session[9530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:01.063784 systemd-logind[2710]: New session 16 of user core.
May 14 00:00:01.074000 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:00:02.433647 sshd[9532]: Connection closed by 139.178.68.195 port 42270
May 14 00:00:02.434037 sshd-session[9530]: pam_unix(sshd:session): session closed for user core
May 14 00:00:02.436953 systemd[1]: sshd@18-147.28.150.5:22-139.178.68.195:42270.service: Deactivated successfully.
May 14 00:00:02.438616 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:00:02.438843 systemd[1]: session-16.scope: Consumed 2.596s CPU time, 113.9M memory peak.
May 14 00:00:02.439198 systemd-logind[2710]: Session 16 logged out. Waiting for processes to exit.
May 14 00:00:02.439752 systemd-logind[2710]: Removed session 16.
May 14 00:00:02.510804 systemd[1]: Started sshd@19-147.28.150.5:22-139.178.68.195:42274.service - OpenSSH per-connection server daemon (139.178.68.195:42274).
May 14 00:00:02.929612 sshd[9628]: Accepted publickey for core from 139.178.68.195 port 42274 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:02.930755 sshd-session[9628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:02.934057 systemd-logind[2710]: New session 17 of user core.
May 14 00:00:02.945987 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:00:03.367753 sshd[9632]: Connection closed by 139.178.68.195 port 42274
May 14 00:00:03.368049 sshd-session[9628]: pam_unix(sshd:session): session closed for user core
May 14 00:00:03.370863 systemd[1]: sshd@19-147.28.150.5:22-139.178.68.195:42274.service: Deactivated successfully.
May 14 00:00:03.372564 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:00:03.373148 systemd-logind[2710]: Session 17 logged out. Waiting for processes to exit.
May 14 00:00:03.373699 systemd-logind[2710]: Removed session 17.
May 14 00:00:03.441835 systemd[1]: Started sshd@20-147.28.150.5:22-139.178.68.195:42280.service - OpenSSH per-connection server daemon (139.178.68.195:42280).
May 14 00:00:03.867309 sshd[9681]: Accepted publickey for core from 139.178.68.195 port 42280 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:03.868325 sshd-session[9681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:03.871412 systemd-logind[2710]: New session 18 of user core.
May 14 00:00:03.884988 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:00:04.220933 sshd[9683]: Connection closed by 139.178.68.195 port 42280
May 14 00:00:04.221312 sshd-session[9681]: pam_unix(sshd:session): session closed for user core
May 14 00:00:04.224102 systemd[1]: sshd@20-147.28.150.5:22-139.178.68.195:42280.service: Deactivated successfully.
May 14 00:00:04.226411 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:00:04.226992 systemd-logind[2710]: Session 18 logged out. Waiting for processes to exit.
May 14 00:00:04.227617 systemd-logind[2710]: Removed session 18.
May 14 00:00:09.293910 systemd[1]: Started sshd@21-147.28.150.5:22-139.178.68.195:55776.service - OpenSSH per-connection server daemon (139.178.68.195:55776).
May 14 00:00:09.710274 sshd[9726]: Accepted publickey for core from 139.178.68.195 port 55776 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:09.711287 sshd-session[9726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:09.714265 systemd-logind[2710]: New session 19 of user core.
May 14 00:00:09.735041 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:00:10.059671 sshd[9728]: Connection closed by 139.178.68.195 port 55776
May 14 00:00:10.060011 sshd-session[9726]: pam_unix(sshd:session): session closed for user core
May 14 00:00:10.062951 systemd[1]: sshd@21-147.28.150.5:22-139.178.68.195:55776.service: Deactivated successfully.
May 14 00:00:10.064651 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:00:10.065216 systemd-logind[2710]: Session 19 logged out. Waiting for processes to exit.
May 14 00:00:10.065784 systemd-logind[2710]: Removed session 19.
May 14 00:00:14.471602 containerd[2727]: time="2025-05-14T00:00:14.471561994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"dbbaa06f909e6b01f928af940144caa3944e9aa72a1dc79ae71ab68a5de34888\" pid:9771 exited_at:{seconds:1747180814 nanos:471366194}"
May 14 00:00:15.135975 systemd[1]: Started sshd@22-147.28.150.5:22-139.178.68.195:57146.service - OpenSSH per-connection server daemon (139.178.68.195:57146).
May 14 00:00:15.567518 sshd[9783]: Accepted publickey for core from 139.178.68.195 port 57146 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:15.568554 sshd-session[9783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:15.571880 systemd-logind[2710]: New session 20 of user core.
May 14 00:00:15.588988 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:00:15.927103 sshd[9785]: Connection closed by 139.178.68.195 port 57146
May 14 00:00:15.927491 sshd-session[9783]: pam_unix(sshd:session): session closed for user core
May 14 00:00:15.930472 systemd[1]: sshd@22-147.28.150.5:22-139.178.68.195:57146.service: Deactivated successfully.
May 14 00:00:15.932190 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:00:15.932748 systemd-logind[2710]: Session 20 logged out. Waiting for processes to exit.
May 14 00:00:15.933329 systemd-logind[2710]: Removed session 20.
May 14 00:00:16.474296 containerd[2727]: time="2025-05-14T00:00:16.474264952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5923f02a854248678ca719c23f0987f2015019af10c260e91e43a522f6403f1\" id:\"45f4e983fcd90ed8d55e9500b3c052c6dd35906f221ea281615cde2ea7a1c8a9\" pid:9832 exited_at:{seconds:1747180816 nanos:474078591}"
May 14 00:00:21.001898 systemd[1]: Started sshd@23-147.28.150.5:22-139.178.68.195:57152.service - OpenSSH per-connection server daemon (139.178.68.195:57152).
May 14 00:00:21.409247 sshd[9845]: Accepted publickey for core from 139.178.68.195 port 57152 ssh2: RSA SHA256:IJnQfAq6CQWRPc6rpbn0zU2zgPslx6s04ioGHnmMYW4
May 14 00:00:21.410275 sshd-session[9845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:00:21.413285 systemd-logind[2710]: New session 21 of user core.
May 14 00:00:21.428988 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:00:21.757425 sshd[9848]: Connection closed by 139.178.68.195 port 57152
May 14 00:00:21.757793 sshd-session[9845]: pam_unix(sshd:session): session closed for user core
May 14 00:00:21.760746 systemd[1]: sshd@23-147.28.150.5:22-139.178.68.195:57152.service: Deactivated successfully.
May 14 00:00:21.763031 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:00:21.763580 systemd-logind[2710]: Session 21 logged out. Waiting for processes to exit.
May 14 00:00:21.764145 systemd-logind[2710]: Removed session 21.