Jul 16 00:45:32.346782 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Jul 16 00:45:32.346805 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 16 00:45:32.346813 kernel: KASLR enabled
Jul 16 00:45:32.346819 kernel: efi: EFI v2.7 by American Megatrends
Jul 16 00:45:32.346825 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea47c818 RNG=0xebf10018 MEMRESERVE=0xe465af98
Jul 16 00:45:32.346830 kernel: random: crng init done
Jul 16 00:45:32.346837 kernel: secureboot: Secure boot disabled
Jul 16 00:45:32.346842 kernel: esrt: Reserving ESRT space from 0x00000000ea47c818 to 0x00000000ea47c878.
Jul 16 00:45:32.346850 kernel: ACPI: Early table checksum verification disabled
Jul 16 00:45:32.346855 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Jul 16 00:45:32.346861 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Jul 16 00:45:32.346867 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:45:32.346872 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Jul 16 00:45:32.346878 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:45:32.346887 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:45:32.346893 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Jul 16 00:45:32.346899 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:45:32.346905 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Jul 16 00:45:32.346911 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:45:32.346917 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Jul 16 00:45:32.346923 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:45:32.346929 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:45:32.346935 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:45:32.346941 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:45:32.346948 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Jul 16 00:45:32.346954 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Jul 16 00:45:32.346960 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:45:32.346966 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Jul 16 00:45:32.346972 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Jul 16 00:45:32.346978 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 16 00:45:32.346984 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Jul 16 00:45:32.346990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Jul 16 00:45:32.346997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Jul 16 00:45:32.347003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Jul 16 00:45:32.347008 kernel: NUMA: Initialized distance table, cnt=1
Jul 16 00:45:32.347016 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff]
Jul 16 00:45:32.347022 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff]
Jul 16 00:45:32.347029 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff]
Jul 16 00:45:32.347035 kernel: NODE_DATA(0) allocated [mem 0x83fdffd8a00-0x83fdffdffff]
Jul 16 00:45:32.347041 kernel: Zone ranges:
Jul 16 00:45:32.347049 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
Jul 16 00:45:32.347057 kernel: DMA32 empty
Jul 16 00:45:32.347063 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
Jul 16 00:45:32.347070 kernel: Device empty
Jul 16 00:45:32.347076 kernel: Movable zone start for each node
Jul 16 00:45:32.347082 kernel: Early memory node ranges
Jul 16 00:45:32.347089 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
Jul 16 00:45:32.347095 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
Jul 16 00:45:32.347101 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
Jul 16 00:45:32.347108 kernel: node 0: [mem 0x0000000094000000-0x00000000eba32fff]
Jul 16 00:45:32.347114 kernel: node 0: [mem 0x00000000eba33000-0x00000000ebeb4fff]
Jul 16 00:45:32.347122 kernel: node 0: [mem 0x00000000ebeb5000-0x00000000ebeb9fff]
Jul 16 00:45:32.347128 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
Jul 16 00:45:32.347134 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Jul 16 00:45:32.347141 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
Jul 16 00:45:32.347147 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Jul 16 00:45:32.347153 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Jul 16 00:45:32.347160 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
Jul 16 00:45:32.347166 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
Jul 16 00:45:32.347172 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
Jul 16 00:45:32.347179 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Jul 16 00:45:32.347185 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Jul 16 00:45:32.347191 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Jul 16 00:45:32.347199 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Jul 16 00:45:32.347205 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
Jul 16 00:45:32.347212 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
Jul 16 00:45:32.347218 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Jul 16 00:45:32.347224 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Jul 16 00:45:32.347231 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Jul 16 00:45:32.347237 kernel: cma: Reserved 16 MiB at 0x00000000fec00000 on node -1
Jul 16 00:45:32.347243 kernel: psci: probing for conduit method from ACPI.
Jul 16 00:45:32.347250 kernel: psci: PSCIv1.1 detected in firmware.
Jul 16 00:45:32.347256 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 16 00:45:32.347266 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 16 00:45:32.347274 kernel: psci: SMC Calling Convention v1.2
Jul 16 00:45:32.347281 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 16 00:45:32.347287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
Jul 16 00:45:32.347293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
Jul 16 00:45:32.347300 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
Jul 16 00:45:32.347306 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
Jul 16 00:45:32.347313 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
Jul 16 00:45:32.347319 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
Jul 16 00:45:32.347325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
Jul 16 00:45:32.347332 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
Jul 16 00:45:32.347338 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
Jul 16 00:45:32.347344 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
Jul 16 00:45:32.347352 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
Jul 16 00:45:32.347358 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
Jul 16 00:45:32.347365 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
Jul 16 00:45:32.347371 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
Jul 16 00:45:32.347377 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
Jul 16 00:45:32.347384 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
Jul 16 00:45:32.347390 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
Jul 16 00:45:32.347397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
Jul 16 00:45:32.347403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
Jul 16 00:45:32.347409 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
Jul 16 00:45:32.347416 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
Jul 16 00:45:32.347422 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
Jul 16 00:45:32.347430 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
Jul 16 00:45:32.347436 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
Jul 16 00:45:32.347442 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
Jul 16 00:45:32.347449 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
Jul 16 00:45:32.347455 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
Jul 16 00:45:32.347462 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
Jul 16 00:45:32.347468 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
Jul 16 00:45:32.347474 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
Jul 16 00:45:32.347481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
Jul 16 00:45:32.347487 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
Jul 16 00:45:32.347494 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
Jul 16 00:45:32.347501 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
Jul 16 00:45:32.347508 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
Jul 16 00:45:32.347514 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
Jul 16 00:45:32.347520 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
Jul 16 00:45:32.347527 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
Jul 16 00:45:32.347533 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
Jul 16 00:45:32.347540 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
Jul 16 00:45:32.347546 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
Jul 16 00:45:32.347552 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
Jul 16 00:45:32.347565 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
Jul 16 00:45:32.347574 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
Jul 16 00:45:32.347580 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
Jul 16 00:45:32.347587 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
Jul 16 00:45:32.347594 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
Jul 16 00:45:32.347601 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
Jul 16 00:45:32.347608 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
Jul 16 00:45:32.347616 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
Jul 16 00:45:32.347622 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
Jul 16 00:45:32.347629 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
Jul 16 00:45:32.347636 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
Jul 16 00:45:32.347643 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
Jul 16 00:45:32.347650 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
Jul 16 00:45:32.347656 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
Jul 16 00:45:32.347663 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
Jul 16 00:45:32.347670 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
Jul 16 00:45:32.347677 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
Jul 16 00:45:32.347683 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
Jul 16 00:45:32.347690 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
Jul 16 00:45:32.347698 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
Jul 16 00:45:32.347705 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
Jul 16 00:45:32.347712 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
Jul 16 00:45:32.347718 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
Jul 16 00:45:32.347725 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
Jul 16 00:45:32.347732 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
Jul 16 00:45:32.347738 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
Jul 16 00:45:32.347745 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
Jul 16 00:45:32.347752 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
Jul 16 00:45:32.347759 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
Jul 16 00:45:32.347765 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
Jul 16 00:45:32.347772 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
Jul 16 00:45:32.347780 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
Jul 16 00:45:32.347787 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
Jul 16 00:45:32.347794 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
Jul 16 00:45:32.347801 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
Jul 16 00:45:32.347807 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
Jul 16 00:45:32.347814 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
Jul 16 00:45:32.347821 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 16 00:45:32.347828 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 16 00:45:32.347835 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
Jul 16 00:45:32.347842 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
Jul 16 00:45:32.347849 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
Jul 16 00:45:32.347856 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Jul 16 00:45:32.347863 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Jul 16 00:45:32.347870 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Jul 16 00:45:32.347877 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Jul 16 00:45:32.347883 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Jul 16 00:45:32.347890 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Jul 16 00:45:32.347897 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Jul 16 00:45:32.347903 kernel: Detected PIPT I-cache on CPU0
Jul 16 00:45:32.347910 kernel: CPU features: detected: GIC system register CPU interface
Jul 16 00:45:32.347917 kernel: CPU features: detected: Virtualization Host Extensions
Jul 16 00:45:32.347924 kernel: CPU features: detected: Spectre-v4
Jul 16 00:45:32.347932 kernel: CPU features: detected: Spectre-BHB
Jul 16 00:45:32.347938 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 16 00:45:32.347946 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 16 00:45:32.347952 kernel: CPU features: detected: ARM erratum 1418040
Jul 16 00:45:32.347959 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 16 00:45:32.347966 kernel: alternatives: applying boot alternatives
Jul 16 00:45:32.347974 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 16 00:45:32.347981 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 16 00:45:32.347988 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 16 00:45:32.347995 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Jul 16 00:45:32.348002 kernel: printk: log_buf_len min size: 262144 bytes
Jul 16 00:45:32.348010 kernel: printk: log_buf_len: 1048576 bytes
Jul 16 00:45:32.348017 kernel: printk: early log buf free: 249376(95%)
Jul 16 00:45:32.348024 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Jul 16 00:45:32.348031 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Jul 16 00:45:32.348037 kernel: Fallback order for Node 0: 0
Jul 16 00:45:32.348044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584
Jul 16 00:45:32.348051 kernel: Policy zone: Normal
Jul 16 00:45:32.348058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 16 00:45:32.348065 kernel: software IO TLB: area num 128.
Jul 16 00:45:32.348072 kernel: software IO TLB: mapped [mem 0x00000000fac00000-0x00000000fec00000] (64MB)
Jul 16 00:45:32.348078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Jul 16 00:45:32.348087 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 16 00:45:32.348094 kernel: rcu: RCU event tracing is enabled.
Jul 16 00:45:32.348101 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Jul 16 00:45:32.348108 kernel: Trampoline variant of Tasks RCU enabled.
Jul 16 00:45:32.348115 kernel: Tracing variant of Tasks RCU enabled.
Jul 16 00:45:32.348122 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 16 00:45:32.348129 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Jul 16 00:45:32.348135 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 16 00:45:32.348142 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 16 00:45:32.348149 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 16 00:45:32.348156 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Jul 16 00:45:32.348163 kernel: GICv3: 672 SPIs implemented
Jul 16 00:45:32.348171 kernel: GICv3: 0 Extended SPIs implemented
Jul 16 00:45:32.348178 kernel: Root IRQ handler: gic_handle_irq
Jul 16 00:45:32.348185 kernel: GICv3: GICv3 features: 16 PPIs
Jul 16 00:45:32.348192 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1
Jul 16 00:45:32.348198 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Jul 16 00:45:32.348205 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Jul 16 00:45:32.348212 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Jul 16 00:45:32.348219 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Jul 16 00:45:32.348225 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Jul 16 00:45:32.348232 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Jul 16 00:45:32.348239 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Jul 16 00:45:32.348245 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Jul 16 00:45:32.348253 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Jul 16 00:45:32.348260 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Jul 16 00:45:32.348269 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348277 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348284 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Jul 16 00:45:32.348291 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348298 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348304 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Jul 16 00:45:32.348311 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348318 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348325 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Jul 16 00:45:32.348333 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348340 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348347 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Jul 16 00:45:32.348354 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348361 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348368 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Jul 16 00:45:32.348375 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348382 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348388 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Jul 16 00:45:32.348395 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348402 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348410 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Jul 16 00:45:32.348417 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000890000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:45:32.348424 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800008a0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:45:32.348431 kernel: GICv3: using LPI property table @0x00000800008b0000
Jul 16 00:45:32.348438 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800008c0000
Jul 16 00:45:32.348445 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 16 00:45:32.348452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348458 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Jul 16 00:45:32.348465 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Jul 16 00:45:32.348472 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 16 00:45:32.348479 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 16 00:45:32.348488 kernel: Console: colour dummy device 80x25
Jul 16 00:45:32.348495 kernel: printk: legacy console [tty0] enabled
Jul 16 00:45:32.348502 kernel: ACPI: Core revision 20240827
Jul 16 00:45:32.348509 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 16 00:45:32.348516 kernel: pid_max: default: 81920 minimum: 640
Jul 16 00:45:32.348523 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 16 00:45:32.348530 kernel: landlock: Up and running.
Jul 16 00:45:32.348537 kernel: SELinux: Initializing.
Jul 16 00:45:32.348544 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 16 00:45:32.348551 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 16 00:45:32.348559 kernel: rcu: Hierarchical SRCU implementation.
Jul 16 00:45:32.348566 kernel: rcu: Max phase no-delay instances is 400.
Jul 16 00:45:32.348573 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Jul 16 00:45:32.348580 kernel: Remapping and enabling EFI services.
Jul 16 00:45:32.348587 kernel: smp: Bringing up secondary CPUs ...
Jul 16 00:45:32.348594 kernel: Detected PIPT I-cache on CPU1
Jul 16 00:45:32.348601 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
Jul 16 00:45:32.348608 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008d0000
Jul 16 00:45:32.348615 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348623 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
Jul 16 00:45:32.348630 kernel: Detected PIPT I-cache on CPU2
Jul 16 00:45:32.348637 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
Jul 16 00:45:32.348644 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008e0000
Jul 16 00:45:32.348651 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348657 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
Jul 16 00:45:32.348664 kernel: Detected PIPT I-cache on CPU3
Jul 16 00:45:32.348671 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
Jul 16 00:45:32.348678 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008f0000
Jul 16 00:45:32.348686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348693 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
Jul 16 00:45:32.348700 kernel: Detected PIPT I-cache on CPU4
Jul 16 00:45:32.348707 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
Jul 16 00:45:32.348714 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000900000
Jul 16 00:45:32.348721 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348728 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
Jul 16 00:45:32.348734 kernel: Detected PIPT I-cache on CPU5
Jul 16 00:45:32.348741 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
Jul 16 00:45:32.348748 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000910000
Jul 16 00:45:32.348757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348764 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
Jul 16 00:45:32.348770 kernel: Detected PIPT I-cache on CPU6
Jul 16 00:45:32.348777 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
Jul 16 00:45:32.348784 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000920000
Jul 16 00:45:32.348791 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348798 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
Jul 16 00:45:32.348805 kernel: Detected PIPT I-cache on CPU7
Jul 16 00:45:32.348812 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
Jul 16 00:45:32.348820 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000930000
Jul 16 00:45:32.348827 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348834 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
Jul 16 00:45:32.348841 kernel: Detected PIPT I-cache on CPU8
Jul 16 00:45:32.348848 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
Jul 16 00:45:32.348855 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000940000
Jul 16 00:45:32.348861 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348868 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
Jul 16 00:45:32.348875 kernel: Detected PIPT I-cache on CPU9
Jul 16 00:45:32.348882 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
Jul 16 00:45:32.348890 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000950000
Jul 16 00:45:32.348897 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348904 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
Jul 16 00:45:32.348911 kernel: Detected PIPT I-cache on CPU10
Jul 16 00:45:32.348918 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
Jul 16 00:45:32.348925 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000960000
Jul 16 00:45:32.348932 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348938 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
Jul 16 00:45:32.348946 kernel: Detected PIPT I-cache on CPU11
Jul 16 00:45:32.348953 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
Jul 16 00:45:32.348961 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000970000
Jul 16 00:45:32.348968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.348975 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
Jul 16 00:45:32.348982 kernel: Detected PIPT I-cache on CPU12
Jul 16 00:45:32.348988 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
Jul 16 00:45:32.348996 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000980000
Jul 16 00:45:32.349003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349009 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
Jul 16 00:45:32.349016 kernel: Detected PIPT I-cache on CPU13
Jul 16 00:45:32.349025 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
Jul 16 00:45:32.349032 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000990000
Jul 16 00:45:32.349039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349046 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
Jul 16 00:45:32.349053 kernel: Detected PIPT I-cache on CPU14
Jul 16 00:45:32.349060 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
Jul 16 00:45:32.349066 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800009a0000
Jul 16 00:45:32.349074 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349080 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
Jul 16 00:45:32.349088 kernel: Detected PIPT I-cache on CPU15
Jul 16 00:45:32.349096 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
Jul 16 00:45:32.349103 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800009b0000
Jul 16 00:45:32.349110 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349117 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
Jul 16 00:45:32.349124 kernel: Detected PIPT I-cache on CPU16
Jul 16 00:45:32.349131 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
Jul 16 00:45:32.349138 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800009c0000
Jul 16 00:45:32.349145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349152 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
Jul 16 00:45:32.349160 kernel: Detected PIPT I-cache on CPU17
Jul 16 00:45:32.349167 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
Jul 16 00:45:32.349174 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009d0000
Jul 16 00:45:32.349181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349187 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
Jul 16 00:45:32.349194 kernel: Detected PIPT I-cache on CPU18
Jul 16 00:45:32.349201 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
Jul 16 00:45:32.349217 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009e0000
Jul 16 00:45:32.349225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349234 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
Jul 16 00:45:32.349241 kernel: Detected PIPT I-cache on CPU19
Jul 16 00:45:32.349248 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
Jul 16 00:45:32.349256 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009f0000
Jul 16 00:45:32.349265 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349273 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
Jul 16 00:45:32.349280 kernel: Detected PIPT I-cache on CPU20
Jul 16 00:45:32.349287 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
Jul 16 00:45:32.349294 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000a00000
Jul 16 00:45:32.349303 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349310 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
Jul 16 00:45:32.349317 kernel: Detected PIPT I-cache on CPU21
Jul 16 00:45:32.349325 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
Jul 16 00:45:32.349332 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000a10000
Jul 16 00:45:32.349339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349348 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
Jul 16 00:45:32.349356 kernel: Detected PIPT I-cache on CPU22
Jul 16 00:45:32.349363 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
Jul 16 00:45:32.349371 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000a20000
Jul 16 00:45:32.349378 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349385 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
Jul 16 00:45:32.349392 kernel: Detected PIPT I-cache on CPU23
Jul 16 00:45:32.349399 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
Jul 16 00:45:32.349408 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a30000
Jul 16 00:45:32.349415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349424 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
Jul 16 00:45:32.349431 kernel: Detected PIPT I-cache on CPU24
Jul 16 00:45:32.349439 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
Jul 16 00:45:32.349446 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a40000
Jul 16 00:45:32.349453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349460 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
Jul 16 00:45:32.349468 kernel: Detected PIPT I-cache on CPU25
Jul 16 00:45:32.349475 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
Jul 16 00:45:32.349482 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a50000
Jul 16 00:45:32.349491 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349498 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
Jul 16 00:45:32.349505 kernel: Detected PIPT I-cache on CPU26
Jul 16 00:45:32.349513 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
Jul 16 00:45:32.349520 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a60000
Jul 16 00:45:32.349527 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349535 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
Jul 16 00:45:32.349542 kernel: Detected PIPT I-cache on CPU27
Jul 16 00:45:32.349549 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
Jul 16 00:45:32.349556 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a70000
Jul 16 00:45:32.349565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349572 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
Jul 16 00:45:32.349580 kernel: Detected PIPT I-cache on CPU28
Jul 16 00:45:32.349587 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
Jul 16 00:45:32.349594 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a80000
Jul 16 00:45:32.349601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349609 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
Jul 16 00:45:32.349616 kernel: Detected PIPT I-cache on CPU29
Jul 16 00:45:32.349623 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
Jul 16 00:45:32.349632 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a90000
Jul 16 00:45:32.349639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349646 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
Jul 16 00:45:32.349654 kernel: Detected PIPT I-cache on CPU30
Jul 16 00:45:32.349661 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
Jul 16 00:45:32.349668 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000aa0000
Jul 16 00:45:32.349676 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349683 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
Jul 16 00:45:32.349690 kernel: Detected PIPT I-cache on CPU31
Jul 16 00:45:32.349698 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
Jul 16 00:45:32.349706 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000ab0000
Jul 16 00:45:32.349713 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:45:32.349721 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
Jul 16 00:45:32.349728 kernel: Detected PIPT I-cache on CPU32
Jul 16 00:45:32.349735 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
Jul 16 00:45:32.349742 kernel: GICv3: CPU32: using allocated LPI
pending table @0x0000080000ac0000 Jul 16 00:45:32.349750 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349757 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jul 16 00:45:32.349764 kernel: Detected PIPT I-cache on CPU33 Jul 16 00:45:32.349773 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jul 16 00:45:32.349780 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000ad0000 Jul 16 00:45:32.349787 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349794 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jul 16 00:45:32.349801 kernel: Detected PIPT I-cache on CPU34 Jul 16 00:45:32.349809 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jul 16 00:45:32.349816 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ae0000 Jul 16 00:45:32.349823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349831 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jul 16 00:45:32.349838 kernel: Detected PIPT I-cache on CPU35 Jul 16 00:45:32.349847 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jul 16 00:45:32.349854 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000af0000 Jul 16 00:45:32.349861 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349868 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jul 16 00:45:32.349876 kernel: Detected PIPT I-cache on CPU36 Jul 16 00:45:32.349883 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jul 16 00:45:32.349890 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000b00000 Jul 16 00:45:32.349898 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349905 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jul 
16 00:45:32.349913 kernel: Detected PIPT I-cache on CPU37 Jul 16 00:45:32.349921 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jul 16 00:45:32.349928 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000b10000 Jul 16 00:45:32.349935 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349942 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jul 16 00:45:32.349950 kernel: Detected PIPT I-cache on CPU38 Jul 16 00:45:32.349957 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jul 16 00:45:32.349964 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000b20000 Jul 16 00:45:32.349971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.349979 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jul 16 00:45:32.349987 kernel: Detected PIPT I-cache on CPU39 Jul 16 00:45:32.349995 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jul 16 00:45:32.350002 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b30000 Jul 16 00:45:32.350010 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350017 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jul 16 00:45:32.350024 kernel: Detected PIPT I-cache on CPU40 Jul 16 00:45:32.350032 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jul 16 00:45:32.350040 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b40000 Jul 16 00:45:32.350048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350055 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jul 16 00:45:32.350062 kernel: Detected PIPT I-cache on CPU41 Jul 16 00:45:32.350069 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jul 16 00:45:32.350077 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000b50000 Jul 16 00:45:32.350084 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350091 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Jul 16 00:45:32.350099 kernel: Detected PIPT I-cache on CPU42 Jul 16 00:45:32.350106 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jul 16 00:45:32.350115 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b60000 Jul 16 00:45:32.350122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350130 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jul 16 00:45:32.350137 kernel: Detected PIPT I-cache on CPU43 Jul 16 00:45:32.350144 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jul 16 00:45:32.350152 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b70000 Jul 16 00:45:32.350159 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350166 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jul 16 00:45:32.350174 kernel: Detected PIPT I-cache on CPU44 Jul 16 00:45:32.350182 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jul 16 00:45:32.350189 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b80000 Jul 16 00:45:32.350197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350204 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jul 16 00:45:32.350211 kernel: Detected PIPT I-cache on CPU45 Jul 16 00:45:32.350218 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jul 16 00:45:32.350226 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b90000 Jul 16 00:45:32.350233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350240 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
Jul 16 00:45:32.350248 kernel: Detected PIPT I-cache on CPU46 Jul 16 00:45:32.350256 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jul 16 00:45:32.350265 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ba0000 Jul 16 00:45:32.350273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350280 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jul 16 00:45:32.350287 kernel: Detected PIPT I-cache on CPU47 Jul 16 00:45:32.350294 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jul 16 00:45:32.350302 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000bb0000 Jul 16 00:45:32.350309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350316 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jul 16 00:45:32.350325 kernel: Detected PIPT I-cache on CPU48 Jul 16 00:45:32.350333 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jul 16 00:45:32.350341 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000bc0000 Jul 16 00:45:32.350350 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350357 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Jul 16 00:45:32.350364 kernel: Detected PIPT I-cache on CPU49 Jul 16 00:45:32.350372 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jul 16 00:45:32.350379 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000bd0000 Jul 16 00:45:32.350386 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350393 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jul 16 00:45:32.350402 kernel: Detected PIPT I-cache on CPU50 Jul 16 00:45:32.350409 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jul 16 00:45:32.350416 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000be0000 Jul 16 00:45:32.350424 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350431 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jul 16 00:45:32.350438 kernel: Detected PIPT I-cache on CPU51 Jul 16 00:45:32.350445 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jul 16 00:45:32.350453 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bf0000 Jul 16 00:45:32.350460 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350469 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jul 16 00:45:32.350476 kernel: Detected PIPT I-cache on CPU52 Jul 16 00:45:32.350483 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jul 16 00:45:32.350490 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000c00000 Jul 16 00:45:32.350498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350505 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jul 16 00:45:32.350512 kernel: Detected PIPT I-cache on CPU53 Jul 16 00:45:32.350519 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jul 16 00:45:32.350526 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000c10000 Jul 16 00:45:32.350535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350542 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jul 16 00:45:32.350549 kernel: Detected PIPT I-cache on CPU54 Jul 16 00:45:32.350557 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jul 16 00:45:32.350564 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000c20000 Jul 16 00:45:32.350571 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350579 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] Jul 16 00:45:32.350586 kernel: Detected PIPT I-cache on CPU55 Jul 16 00:45:32.350593 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jul 16 00:45:32.350601 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000c30000 Jul 16 00:45:32.350609 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350616 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jul 16 00:45:32.350624 kernel: Detected PIPT I-cache on CPU56 Jul 16 00:45:32.350631 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jul 16 00:45:32.350638 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c40000 Jul 16 00:45:32.350645 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350653 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Jul 16 00:45:32.350660 kernel: Detected PIPT I-cache on CPU57 Jul 16 00:45:32.350667 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jul 16 00:45:32.350676 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c50000 Jul 16 00:45:32.350683 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350691 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jul 16 00:45:32.350698 kernel: Detected PIPT I-cache on CPU58 Jul 16 00:45:32.350705 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jul 16 00:45:32.350712 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c60000 Jul 16 00:45:32.350720 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350727 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jul 16 00:45:32.350734 kernel: Detected PIPT I-cache on CPU59 Jul 16 00:45:32.350742 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jul 16 00:45:32.350750 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000c70000 Jul 16 00:45:32.350758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350765 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jul 16 00:45:32.350772 kernel: Detected PIPT I-cache on CPU60 Jul 16 00:45:32.350780 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jul 16 00:45:32.350787 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c80000 Jul 16 00:45:32.350794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350801 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jul 16 00:45:32.350809 kernel: Detected PIPT I-cache on CPU61 Jul 16 00:45:32.350817 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jul 16 00:45:32.350825 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c90000 Jul 16 00:45:32.350832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350840 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jul 16 00:45:32.350847 kernel: Detected PIPT I-cache on CPU62 Jul 16 00:45:32.350854 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jul 16 00:45:32.350862 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000ca0000 Jul 16 00:45:32.350869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350876 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jul 16 00:45:32.350883 kernel: Detected PIPT I-cache on CPU63 Jul 16 00:45:32.350892 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jul 16 00:45:32.350899 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000cb0000 Jul 16 00:45:32.350907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350914 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] Jul 16 00:45:32.350921 kernel: Detected PIPT I-cache on CPU64 Jul 16 00:45:32.350929 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jul 16 00:45:32.350936 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000cc0000 Jul 16 00:45:32.350943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350951 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Jul 16 00:45:32.350959 kernel: Detected PIPT I-cache on CPU65 Jul 16 00:45:32.350966 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jul 16 00:45:32.350974 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000cd0000 Jul 16 00:45:32.350981 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.350988 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jul 16 00:45:32.350996 kernel: Detected PIPT I-cache on CPU66 Jul 16 00:45:32.351003 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jul 16 00:45:32.351010 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ce0000 Jul 16 00:45:32.351017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351025 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jul 16 00:45:32.351033 kernel: Detected PIPT I-cache on CPU67 Jul 16 00:45:32.351041 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jul 16 00:45:32.351048 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cf0000 Jul 16 00:45:32.351055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351063 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jul 16 00:45:32.351070 kernel: Detected PIPT I-cache on CPU68 Jul 16 00:45:32.351077 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jul 16 00:45:32.351085 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000d00000 Jul 16 00:45:32.351092 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351100 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jul 16 00:45:32.351108 kernel: Detected PIPT I-cache on CPU69 Jul 16 00:45:32.351115 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jul 16 00:45:32.351122 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000d10000 Jul 16 00:45:32.351130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351137 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jul 16 00:45:32.351144 kernel: Detected PIPT I-cache on CPU70 Jul 16 00:45:32.351152 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jul 16 00:45:32.351159 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000d20000 Jul 16 00:45:32.351167 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351175 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jul 16 00:45:32.351182 kernel: Detected PIPT I-cache on CPU71 Jul 16 00:45:32.351189 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jul 16 00:45:32.351197 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d30000 Jul 16 00:45:32.351204 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351211 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jul 16 00:45:32.351218 kernel: Detected PIPT I-cache on CPU72 Jul 16 00:45:32.351226 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jul 16 00:45:32.351233 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d40000 Jul 16 00:45:32.351242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351250 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] Jul 16 00:45:32.351257 kernel: Detected PIPT I-cache on CPU73 Jul 16 00:45:32.351267 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jul 16 00:45:32.351275 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d50000 Jul 16 00:45:32.351282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351289 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jul 16 00:45:32.351296 kernel: Detected PIPT I-cache on CPU74 Jul 16 00:45:32.351304 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jul 16 00:45:32.351313 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d60000 Jul 16 00:45:32.351321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351328 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jul 16 00:45:32.351335 kernel: Detected PIPT I-cache on CPU75 Jul 16 00:45:32.351342 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jul 16 00:45:32.351350 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d70000 Jul 16 00:45:32.351357 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351364 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jul 16 00:45:32.351372 kernel: Detected PIPT I-cache on CPU76 Jul 16 00:45:32.351379 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jul 16 00:45:32.351388 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d80000 Jul 16 00:45:32.351395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351402 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jul 16 00:45:32.351410 kernel: Detected PIPT I-cache on CPU77 Jul 16 00:45:32.351417 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jul 16 00:45:32.351424 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000d90000 Jul 16 00:45:32.351432 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351439 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jul 16 00:45:32.351446 kernel: Detected PIPT I-cache on CPU78 Jul 16 00:45:32.351455 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Jul 16 00:45:32.351462 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000da0000 Jul 16 00:45:32.351470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351477 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jul 16 00:45:32.351484 kernel: Detected PIPT I-cache on CPU79 Jul 16 00:45:32.351491 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jul 16 00:45:32.351499 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000db0000 Jul 16 00:45:32.351506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 16 00:45:32.351513 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jul 16 00:45:32.351520 kernel: smp: Brought up 1 node, 80 CPUs Jul 16 00:45:32.351529 kernel: SMP: Total of 80 processors activated. 
Jul 16 00:45:32.351536 kernel: CPU: All CPU(s) started at EL2 Jul 16 00:45:32.351543 kernel: CPU features: detected: 32-bit EL0 Support Jul 16 00:45:32.351551 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 16 00:45:32.351558 kernel: CPU features: detected: Common not Private translations Jul 16 00:45:32.351566 kernel: CPU features: detected: CRC32 instructions Jul 16 00:45:32.351573 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 16 00:45:32.351581 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 16 00:45:32.351588 kernel: CPU features: detected: LSE atomic instructions Jul 16 00:45:32.351597 kernel: CPU features: detected: Privileged Access Never Jul 16 00:45:32.351604 kernel: CPU features: detected: RAS Extension Support Jul 16 00:45:32.351611 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 16 00:45:32.351619 kernel: alternatives: applying system-wide alternatives Jul 16 00:45:32.351626 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79 Jul 16 00:45:32.351634 kernel: Memory: 262843548K/268174336K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 5254856K reserved, 16384K cma-reserved) Jul 16 00:45:32.351641 kernel: devtmpfs: initialized Jul 16 00:45:32.351649 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 16 00:45:32.351656 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 16 00:45:32.351665 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 16 00:45:32.351672 kernel: 0 pages in range for non-PLT usage Jul 16 00:45:32.351680 kernel: 508432 pages in range for PLT usage Jul 16 00:45:32.351687 kernel: pinctrl core: initialized pinctrl subsystem Jul 16 00:45:32.351695 kernel: SMBIOS 3.4.0 present. 
Jul 16 00:45:32.351702 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Jul 16 00:45:32.351709 kernel: DMI: Memory slots populated: 8/16 Jul 16 00:45:32.351717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 16 00:45:32.351724 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jul 16 00:45:32.351733 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 16 00:45:32.351740 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 16 00:45:32.351748 kernel: audit: initializing netlink subsys (disabled) Jul 16 00:45:32.351755 kernel: audit: type=2000 audit(0.066:1): state=initialized audit_enabled=0 res=1 Jul 16 00:45:32.351762 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 16 00:45:32.351769 kernel: cpuidle: using governor menu Jul 16 00:45:32.351777 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 16 00:45:32.351784 kernel: ASID allocator initialised with 32768 entries Jul 16 00:45:32.351791 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 16 00:45:32.351800 kernel: Serial: AMBA PL011 UART driver Jul 16 00:45:32.351808 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 16 00:45:32.351815 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 16 00:45:32.351823 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 16 00:45:32.351830 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 16 00:45:32.351837 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 16 00:45:32.351845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 16 00:45:32.351852 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 16 00:45:32.351859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 16 00:45:32.351868 kernel: ACPI: Added _OSI(Module Device) 
Jul 16 00:45:32.351875 kernel: ACPI: Added _OSI(Processor Device)
Jul 16 00:45:32.351882 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 16 00:45:32.351890 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
Jul 16 00:45:32.351897 kernel: ACPI: Interpreter enabled
Jul 16 00:45:32.351904 kernel: ACPI: Using GIC for interrupt routing
Jul 16 00:45:32.351912 kernel: ACPI: MCFG table detected, 8 entries
Jul 16 00:45:32.351919 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351926 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351935 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351942 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351950 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351957 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351964 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351972 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
Jul 16 00:45:32.351979 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
Jul 16 00:45:32.351986 kernel: printk: legacy console [ttyAMA0] enabled
Jul 16 00:45:32.351994 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
Jul 16 00:45:32.352003 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
Jul 16 00:45:32.352128 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:45:32.352192 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:45:32.352251 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:45:32.352313 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:45:32.352370 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
Jul 16 00:45:32.352428 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
Jul 16 00:45:32.352438 kernel: PCI host bridge to bus 000d:00
Jul 16 00:45:32.352503 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
Jul 16 00:45:32.352556 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
Jul 16 00:45:32.352608 kernel: pci_bus 000d:00: root bus resource [bus 00-ff]
Jul 16 00:45:32.352684 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint
Jul 16 00:45:32.352754 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.352817 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.352877 kernel: pci 000d:00:01.0: enabling Extended Tags
Jul 16 00:45:32.352935 kernel: pci 000d:00:01.0: supports D1 D2
Jul 16 00:45:32.352994 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.353061 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.353120 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
Jul 16 00:45:32.353180 kernel: pci 000d:00:02.0: supports D1 D2
Jul 16 00:45:32.353239 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.353320 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.353384 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
Jul 16 00:45:32.353444 kernel: pci 000d:00:03.0: supports D1 D2
Jul 16 00:45:32.353503 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.353570 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.353631 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
Jul 16 00:45:32.353690 kernel: pci 000d:00:04.0: supports D1 D2
Jul 16 00:45:32.353748 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.353757 kernel: acpiphp: Slot [1] registered
Jul 16 00:45:32.353764 kernel: acpiphp: Slot [2] registered
Jul 16 00:45:32.353772 kernel: acpiphp: Slot [3] registered
Jul 16 00:45:32.353779 kernel: acpiphp: Slot [4] registered
Jul 16 00:45:32.353831 kernel: pci_bus 000d:00: on NUMA node 0
Jul 16 00:45:32.353891 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 16 00:45:32.353952 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.354010 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.354069 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 16 00:45:32.354128 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.354187 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.354245 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 16 00:45:32.354309 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jul 16 00:45:32.354369 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Jul 16 00:45:32.354428 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 16 00:45:32.354487 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Jul 16 00:45:32.354545 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 16 00:45:32.354604 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned
Jul 16 00:45:32.354663 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]: assigned
Jul 16 00:45:32.354723 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]: assigned
Jul 16 00:45:32.354782 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned
Jul 16 00:45:32.354839 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned
Jul 16 00:45:32.354897 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned
Jul 16 00:45:32.354956 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned
Jul 16 00:45:32.355014 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned
Jul 16 00:45:32.355072 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355130 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355190 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355248 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355309 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355368 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355426 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355484 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355542 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355602 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355660 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355718 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355776 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355834 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.355893 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.355951 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.356009 kernel: pci 000d:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.356070 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]
Jul 16 00:45:32.356128 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]
Jul 16 00:45:32.356186 kernel: pci 000d:00:02.0: PCI bridge to [bus 02]
Jul 16 00:45:32.356244 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]
Jul 16 00:45:32.356305 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]
Jul 16 00:45:32.356365 kernel: pci 000d:00:03.0: PCI bridge to [bus 03]
Jul 16 00:45:32.356423 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]
Jul 16 00:45:32.356483 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]
Jul 16 00:45:32.356541 kernel: pci 000d:00:04.0: PCI bridge to [bus 04]
Jul 16 00:45:32.356599 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]
Jul 16 00:45:32.356657 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]
Jul 16 00:45:32.356709 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window]
Jul 16 00:45:32.356761 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window]
Jul 16 00:45:32.356829 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff]
Jul 16 00:45:32.356883 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref]
Jul 16 00:45:32.356945 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff]
Jul 16 00:45:32.357000 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref]
Jul 16 00:45:32.357070 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff]
Jul 16 00:45:32.357124 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref]
Jul 16 00:45:32.357185 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff]
Jul 16 00:45:32.357241 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref]
Jul 16 00:45:32.357251 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff])
Jul 16 00:45:32.357319 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:45:32.357376 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:45:32.357432 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:45:32.357488 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:45:32.357546 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00
Jul 16 00:45:32.357602 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff]
Jul 16 00:45:32.357611 kernel: PCI host bridge to bus 0000:00
Jul 16 00:45:32.357672 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window]
Jul 16 00:45:32.357725 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window]
Jul 16 00:45:32.357777 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 16 00:45:32.357844 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint
Jul 16 00:45:32.357913 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.357973 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.358031 kernel: pci 0000:00:01.0: enabling Extended Tags
Jul 16 00:45:32.358090 kernel: pci 0000:00:01.0: supports D1 D2
Jul 16 00:45:32.358148 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.358214 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.358276 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
Jul 16 00:45:32.358340 kernel: pci 0000:00:02.0: supports D1 D2
Jul 16 00:45:32.358401 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.358466 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.358526 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
Jul 16 00:45:32.358585 kernel: pci 0000:00:03.0: supports D1 D2
Jul 16 00:45:32.358644 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.358711 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.358773 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
Jul 16 00:45:32.358831 kernel: pci 0000:00:04.0: supports D1 D2
Jul 16 00:45:32.358890 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.358900 kernel: acpiphp: Slot [1-1] registered
Jul 16 00:45:32.358907 kernel: acpiphp: Slot [2-1] registered
Jul 16 00:45:32.358915 kernel: acpiphp: Slot [3-1] registered
Jul 16 00:45:32.358923 kernel: acpiphp: Slot [4-1] registered
Jul 16 00:45:32.358974 kernel: pci_bus 0000:00: on NUMA node 0
Jul 16 00:45:32.359035 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 16 00:45:32.359095 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.359153 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.359212 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 16 00:45:32.359317 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.359388 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.359469 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 16 00:45:32.359538 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jul 16 00:45:32.359597 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000
Jul 16 00:45:32.359658 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 16 00:45:32.359717 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Jul 16 00:45:32.359776 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 16 00:45:32.359838 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned
Jul 16 00:45:32.359902 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned
Jul 16 00:45:32.359964 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned
Jul 16 00:45:32.360026 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned
Jul 16 00:45:32.360085 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned
Jul 16 00:45:32.360144 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned
Jul 16 00:45:32.360202 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned
Jul 16 00:45:32.360260 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned
Jul 16 00:45:32.360322 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360383 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.360445 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360504 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.360563 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360621 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.360680 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360739 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.360797 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360857 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.360916 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.360973 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.361032 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.361090 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.361150 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.361210 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.361271 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.361330 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]
Jul 16 00:45:32.361389 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
Jul 16 00:45:32.361447 kernel: pci 0000:00:02.0: PCI bridge to [bus 02]
Jul 16 00:45:32.361506 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]
Jul 16 00:45:32.361567 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
Jul 16 00:45:32.361625 kernel: pci 0000:00:03.0: PCI bridge to [bus 03]
Jul 16 00:45:32.361684 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]
Jul 16 00:45:32.361742 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
Jul 16 00:45:32.361801 kernel: pci 0000:00:04.0: PCI bridge to [bus 04]
Jul 16 00:45:32.361860 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]
Jul 16 00:45:32.361919 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
Jul 16 00:45:32.361974 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window]
Jul 16 00:45:32.362026 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window]
Jul 16 00:45:32.362090 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff]
Jul 16 00:45:32.362145 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref]
Jul 16 00:45:32.362208 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff]
Jul 16 00:45:32.362266 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref]
Jul 16 00:45:32.362339 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff]
Jul 16 00:45:32.362395 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref]
Jul 16 00:45:32.362457 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff]
Jul 16 00:45:32.362512 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref]
Jul 16 00:45:32.362522 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff])
Jul 16 00:45:32.362585 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:45:32.362642 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:45:32.362701 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:45:32.362757 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:45:32.362812 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00
Jul 16 00:45:32.362868 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff]
Jul 16 00:45:32.362878 kernel: PCI host bridge to bus 0005:00
Jul 16 00:45:32.362938 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window]
Jul 16 00:45:32.362992 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window]
Jul 16 00:45:32.363044 kernel: pci_bus 0005:00: root bus resource [bus 00-ff]
Jul 16 00:45:32.363109 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint
Jul 16 00:45:32.363175 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.363235 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.363297 kernel: pci 0005:00:01.0: supports D1 D2
Jul 16 00:45:32.363356 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.363423 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.363484 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
Jul 16 00:45:32.363543 kernel: pci 0005:00:03.0: supports D1 D2
Jul 16 00:45:32.363602 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.363667 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.363726 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
Jul 16 00:45:32.363786 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff]
Jul 16 00:45:32.363846 kernel: pci 0005:00:05.0: supports D1 D2
Jul 16 00:45:32.363905 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.363971 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.364030 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
Jul 16 00:45:32.364088 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff]
Jul 16 00:45:32.364147 kernel: pci 0005:00:07.0: supports D1 D2
Jul 16 00:45:32.364205 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.364214 kernel: acpiphp: Slot [1-2] registered
Jul 16 00:45:32.364223 kernel: acpiphp: Slot [2-2] registered
Jul 16 00:45:32.364293 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint
Jul 16 00:45:32.364358 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit]
Jul 16 00:45:32.364418 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref]
Jul 16 00:45:32.364485 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint
Jul 16 00:45:32.364546 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit]
Jul 16 00:45:32.364606 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref]
Jul 16 00:45:32.364660 kernel: pci_bus 0005:00: on NUMA node 0
Jul 16 00:45:32.364719 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 16 00:45:32.364777 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.364836 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.364895 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 16 00:45:32.364954 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.365012 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.365075 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 16 00:45:32.365135 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jul 16 00:45:32.365194 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 16 00:45:32.365253 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 16 00:45:32.365316 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
Jul 16 00:45:32.365375 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000
Jul 16 00:45:32.365434 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned
Jul 16 00:45:32.365496 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned
Jul 16 00:45:32.365555 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned
Jul 16 00:45:32.365614 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned
Jul 16 00:45:32.365673 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned
Jul 16 00:45:32.365733 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned
Jul 16 00:45:32.365791 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned
Jul 16 00:45:32.365849 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned
Jul 16 00:45:32.365910 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.365968 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366027 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366085 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366144 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366202 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366261 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366322 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366383 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366441 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366500 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366558 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366617 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366675 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366733 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.366793 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.366851 kernel: pci 0005:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.366910 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]
Jul 16 00:45:32.366968 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
Jul 16 00:45:32.367027 kernel: pci 0005:00:03.0: PCI bridge to [bus 02]
Jul 16 00:45:32.367086 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]
Jul 16 00:45:32.367144 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
Jul 16 00:45:32.367207 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned
Jul 16 00:45:32.367269 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned
Jul 16 00:45:32.367329 kernel: pci 0005:00:05.0: PCI bridge to [bus 03]
Jul 16 00:45:32.367387 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]
Jul 16 00:45:32.367445 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
Jul 16 00:45:32.367506 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned
Jul 16 00:45:32.367567 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned
Jul 16 00:45:32.367627 kernel: pci 0005:00:07.0: PCI bridge to [bus 04]
Jul 16 00:45:32.367687 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]
Jul 16 00:45:32.367746 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
Jul 16 00:45:32.367799 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window]
Jul 16 00:45:32.367851 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window]
Jul 16 00:45:32.367913 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff]
Jul 16 00:45:32.367969 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref]
Jul 16 00:45:32.368037 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff]
Jul 16 00:45:32.368094 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref]
Jul 16 00:45:32.368155 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff]
Jul 16 00:45:32.368209 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref]
Jul 16 00:45:32.368273 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff]
Jul 16 00:45:32.368329 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref]
Jul 16 00:45:32.368340 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff])
Jul 16 00:45:32.368405 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:45:32.368462 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:45:32.368518 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:45:32.368573 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:45:32.368630 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00
Jul 16 00:45:32.368687 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff]
Jul 16 00:45:32.368698 kernel: PCI host bridge to bus 0003:00
Jul 16 00:45:32.368757 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window]
Jul 16 00:45:32.368810 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window]
Jul 16 00:45:32.368861 kernel: pci_bus 0003:00: root bus resource [bus 00-ff]
Jul 16 00:45:32.368927 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint
Jul 16 00:45:32.368993 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.369052 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.369112 kernel: pci 0003:00:01.0: supports D1 D2
Jul 16 00:45:32.369171 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.369236 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.369298 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
Jul 16 00:45:32.369357 kernel: pci 0003:00:03.0: supports D1 D2
Jul 16 00:45:32.369416 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.369482 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.369543 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
Jul 16 00:45:32.369602 kernel: pci 0003:00:05.0: bridge window [io 0x0000-0x0fff]
Jul 16 00:45:32.369661 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff]
Jul 16 00:45:32.369719 kernel: pci 0003:00:05.0: bridge window [mem 0x240000000000-0x2400000fffff 64bit pref]
Jul 16 00:45:32.369778 kernel: pci 0003:00:05.0: supports D1 D2
Jul 16 00:45:32.369838 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot
Jul 16 00:45:32.369848 kernel: acpiphp: Slot [1-3] registered
Jul 16 00:45:32.369855 kernel: acpiphp: Slot [2-3] registered
Jul 16 00:45:32.369923 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint
Jul 16 00:45:32.369984 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff]
Jul 16 00:45:32.370044 kernel: pci 0003:03:00.0: BAR 2 [io 0x0020-0x003f]
Jul 16 00:45:32.370104 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff]
Jul 16 00:45:32.370163 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold
Jul 16 00:45:32.370223 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x240000063fff 64bit pref]
Jul 16 00:45:32.370286 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x24000007ffff 64bit pref]: contains BAR 0 for 8 VFs
Jul 16 00:45:32.370348 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x240000043fff 64bit pref]
Jul 16 00:45:32.370408 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x24000005ffff 64bit pref]: contains BAR 3 for 8 VFs
Jul 16 00:45:32.370468 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
Jul 16 00:45:32.370537 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint
Jul 16 00:45:32.370598 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff]
Jul 16 00:45:32.370658 kernel: pci 0003:03:00.1: BAR 2 [io 0x0000-0x001f]
Jul 16 00:45:32.370718 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff]
Jul 16 00:45:32.370779 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold
Jul 16 00:45:32.370845 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x240000023fff 64bit pref]
Jul 16 00:45:32.370918 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x24000003ffff 64bit pref]: contains BAR 0 for 8 VFs
Jul 16 00:45:32.370984 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x240000003fff 64bit pref]
Jul 16 00:45:32.371044 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x24000001ffff 64bit pref]: contains BAR 3 for 8 VFs
Jul 16 00:45:32.371099 kernel: pci_bus 0003:00: on NUMA node 0
Jul 16 00:45:32.371162 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 16 00:45:32.371221 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.371287 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000
Jul 16 00:45:32.371347 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 16 00:45:32.371406 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.371465 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000
Jul 16 00:45:32.371525 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000
Jul 16 00:45:32.371584 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000
Jul 16 00:45:32.371642 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jul 16 00:45:32.371703 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned
Jul 16 00:45:32.371761 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned
Jul 16 00:45:32.371819 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned
Jul 16 00:45:32.371878 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned
Jul 16 00:45:32.371937 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]: assigned
Jul 16 00:45:32.371996 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372054 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372115 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372173 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372233 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372299 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372358 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372418 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372477 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372537 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372596 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space
Jul 16 00:45:32.372655 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign
Jul 16 00:45:32.372714 kernel: pci 0003:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.372773 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]
Jul 16 00:45:32.372831 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]
Jul 16 00:45:32.372889 kernel: pci 0003:00:03.0: PCI bridge to [bus 02]
Jul 16 00:45:32.372950 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]
Jul 16 00:45:32.373008 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]
Jul 16 00:45:32.373071 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned
Jul 16 00:45:32.373132 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned
Jul 16 00:45:32.373192 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned
Jul 16 00:45:32.373252 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned
Jul 16 00:45:32.373316 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned
Jul 16 00:45:32.373380 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned
Jul 16 00:45:32.373441 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned
Jul 16 00:45:32.373501 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned
Jul 16 00:45:32.373561 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space
Jul 16 00:45:32.373621 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign
Jul 16 00:45:32.373681 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space
Jul 16 00:45:32.373741 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign
Jul 16 00:45:32.373803 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space
Jul 16 00:45:32.373862 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign
Jul 16 00:45:32.373922 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space
Jul 16 00:45:32.373982 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign
Jul 16 00:45:32.374041 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04]
Jul 16 00:45:32.374099 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]
Jul 16 00:45:32.374158 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]
Jul 16 00:45:32.374211 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc
Jul 16 00:45:32.374267 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window]
Jul 16 00:45:32.374320 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window]
Jul 16 00:45:32.374384 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 16 00:45:32.374440 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref]
Jul 16 00:45:32.374510 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 16 00:45:32.374565 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref]
Jul 16 00:45:32.374627 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 16 00:45:32.374683 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref]
Jul 16 00:45:32.374693 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff])
Jul 16 00:45:32.374757 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:45:32.374814 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:45:32.374871 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:45:32.374927 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:45:32.374983 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00
Jul 16 00:45:32.375041 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff]
Jul 16 00:45:32.375051 kernel: PCI host bridge to bus 000c:00
Jul 16 00:45:32.375109 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window]
Jul 16 00:45:32.375162 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window]
Jul 16 00:45:32.375214 kernel: pci_bus 000c:00: root bus resource [bus 00-ff]
Jul 16 00:45:32.375283 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint
Jul 16 00:45:32.375352 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port
Jul 16 00:45:32.375412 kernel: pci 000c:00:01.0: PCI bridge to [bus 01]
Jul 16 00:45:32.375471 kernel: pci 000c:00:01.0: enabling Extended Tags
Jul 16 00:45:32.375530 kernel: pci 000c:00:01.0: supports D1 D2
Jul 16 00:45:32.375590 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot
Jul 16
00:45:32.375656 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.375716 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 16 00:45:32.375776 kernel: pci 000c:00:02.0: supports D1 D2 Jul 16 00:45:32.375835 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.375900 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.375959 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.376018 kernel: pci 000c:00:03.0: supports D1 D2 Jul 16 00:45:32.376076 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.376141 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.376200 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 16 00:45:32.376261 kernel: pci 000c:00:04.0: supports D1 D2 Jul 16 00:45:32.376323 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.376333 kernel: acpiphp: Slot [1-4] registered Jul 16 00:45:32.376340 kernel: acpiphp: Slot [2-4] registered Jul 16 00:45:32.376348 kernel: acpiphp: Slot [3-2] registered Jul 16 00:45:32.376356 kernel: acpiphp: Slot [4-2] registered Jul 16 00:45:32.376406 kernel: pci_bus 000c:00: on NUMA node 0 Jul 16 00:45:32.376466 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:45:32.376532 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:45:32.376591 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:45:32.376651 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:45:32.376711 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:45:32.376770 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 
02] add_size 200000 add_align 100000 Jul 16 00:45:32.376830 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:45:32.376889 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.376950 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.377009 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:45:32.377069 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.377128 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.377187 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jul 16 00:45:32.377246 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jul 16 00:45:32.377307 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jul 16 00:45:32.377368 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jul 16 00:45:32.377427 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jul 16 00:45:32.377486 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jul 16 00:45:32.377545 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]: assigned Jul 16 00:45:32.377603 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jul 16 00:45:32.377662 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.377721 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.377780 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; 
no space Jul 16 00:45:32.377839 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.377897 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.377956 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378014 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.378072 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378132 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.378190 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378248 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.378312 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378371 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.378430 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378488 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.378547 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.378605 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 16 00:45:32.378664 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 16 00:45:32.378722 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 16 00:45:32.378783 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 16 00:45:32.378843 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 16 00:45:32.378902 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 16 00:45:32.378961 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.379022 kernel: pci 000c:00:03.0: bridge window 
[mem 0x40400000-0x405fffff] Jul 16 00:45:32.379080 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 16 00:45:32.379139 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 16 00:45:32.379198 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 16 00:45:32.379257 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 16 00:45:32.379313 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 16 00:45:32.379366 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 16 00:45:32.379431 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 16 00:45:32.379487 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 16 00:45:32.379549 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 16 00:45:32.379604 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 16 00:45:32.379672 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 16 00:45:32.379727 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 16 00:45:32.379792 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 16 00:45:32.379847 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 16 00:45:32.379857 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 16 00:45:32.379921 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:45:32.379978 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:45:32.380036 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:45:32.380095 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:45:32.380151 kernel: acpi PNP0A08:05: ECAM area [mem 
0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 16 00:45:32.380207 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 16 00:45:32.380219 kernel: PCI host bridge to bus 0002:00 Jul 16 00:45:32.380282 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 16 00:45:32.380336 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 16 00:45:32.380387 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 16 00:45:32.380455 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:45:32.380521 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.380581 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 16 00:45:32.380641 kernel: pci 0002:00:01.0: supports D1 D2 Jul 16 00:45:32.380700 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.380766 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.380826 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 16 00:45:32.380887 kernel: pci 0002:00:03.0: supports D1 D2 Jul 16 00:45:32.380945 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.381011 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.381070 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 16 00:45:32.381128 kernel: pci 0002:00:05.0: supports D1 D2 Jul 16 00:45:32.381187 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.381255 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.381319 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 16 00:45:32.381379 kernel: pci 0002:00:07.0: supports D1 D2 Jul 16 00:45:32.381438 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.381448 kernel: acpiphp: Slot [1-5] registered Jul 16 00:45:32.381455 kernel: acpiphp: Slot [2-5] 
registered Jul 16 00:45:32.381463 kernel: acpiphp: Slot [3-3] registered Jul 16 00:45:32.381471 kernel: acpiphp: Slot [4-3] registered Jul 16 00:45:32.381521 kernel: pci_bus 0002:00: on NUMA node 0 Jul 16 00:45:32.381581 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:45:32.381642 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:45:32.381701 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:45:32.381760 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:45:32.381818 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:45:32.381877 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:45:32.381936 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:45:32.381995 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.382055 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.382114 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:45:32.382172 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.382231 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.382293 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jul 16 00:45:32.382352 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jul 16 
00:45:32.382412 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jul 16 00:45:32.382471 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jul 16 00:45:32.382530 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jul 16 00:45:32.382588 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jul 16 00:45:32.382647 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jul 16 00:45:32.382706 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jul 16 00:45:32.382765 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.382823 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.382884 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.382943 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383002 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383060 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383119 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383179 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383238 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383300 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383361 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383419 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383478 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383537 
kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383596 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.383656 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.383715 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 16 00:45:32.383774 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 16 00:45:32.383833 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 16 00:45:32.383894 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 16 00:45:32.383953 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 16 00:45:32.384011 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 16 00:45:32.384070 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 16 00:45:32.384129 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 16 00:45:32.384188 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 16 00:45:32.384248 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 16 00:45:32.384318 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 16 00:45:32.384379 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 16 00:45:32.384432 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 16 00:45:32.384484 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 16 00:45:32.384546 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 16 00:45:32.384603 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 16 00:45:32.384665 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 16 00:45:32.384719 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 16 00:45:32.384780 kernel: pci_bus 0002:03: 
resource 1 [mem 0x00c00000-0x00dfffff] Jul 16 00:45:32.384835 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 16 00:45:32.384906 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 16 00:45:32.384961 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 16 00:45:32.384973 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 16 00:45:32.385036 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:45:32.385093 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:45:32.385149 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:45:32.385205 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:45:32.385261 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 16 00:45:32.385345 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 16 00:45:32.385356 kernel: PCI host bridge to bus 0001:00 Jul 16 00:45:32.385419 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 16 00:45:32.385471 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 16 00:45:32.385522 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 16 00:45:32.385588 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:45:32.385657 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.385717 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 16 00:45:32.385777 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 16 00:45:32.385837 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 16 00:45:32.385895 kernel: pci 0001:00:01.0: enabling 
Extended Tags Jul 16 00:45:32.385954 kernel: pci 0001:00:01.0: supports D1 D2 Jul 16 00:45:32.386013 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.386082 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.386142 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 16 00:45:32.386200 kernel: pci 0001:00:02.0: supports D1 D2 Jul 16 00:45:32.386259 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.386330 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.386390 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.386448 kernel: pci 0001:00:03.0: supports D1 D2 Jul 16 00:45:32.386509 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.386575 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.386634 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 16 00:45:32.386693 kernel: pci 0001:00:04.0: supports D1 D2 Jul 16 00:45:32.386751 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.386761 kernel: acpiphp: Slot [1-6] registered Jul 16 00:45:32.386827 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:45:32.386888 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 16 00:45:32.386950 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jul 16 00:45:32.387010 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 16 00:45:32.387071 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:45:32.387138 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:45:32.387201 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 16 00:45:32.387261 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff 
pref] Jul 16 00:45:32.387325 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 16 00:45:32.387336 kernel: acpiphp: Slot [2-6] registered Jul 16 00:45:32.387344 kernel: acpiphp: Slot [3-4] registered Jul 16 00:45:32.387352 kernel: acpiphp: Slot [4-4] registered Jul 16 00:45:32.387402 kernel: pci_bus 0001:00: on NUMA node 0 Jul 16 00:45:32.387461 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:45:32.387521 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:45:32.387580 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:45:32.387638 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:45:32.387700 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:45:32.387759 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.387818 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.387877 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:45:32.387936 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.387996 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.388055 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]: assigned Jul 16 00:45:32.388115 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jul 16 00:45:32.388174 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jul 16 00:45:32.388233 kernel: pci 0001:00:02.0: bridge window 
[mem 0x380004000000-0x3800041fffff 64bit pref]: assigned Jul 16 00:45:32.388294 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jul 16 00:45:32.388355 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]: assigned Jul 16 00:45:32.388415 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jul 16 00:45:32.388473 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]: assigned Jul 16 00:45:32.388534 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.388594 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.388653 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.388712 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.388770 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.388830 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.388890 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.388949 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.389009 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.389068 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.389127 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.389186 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.389245 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.389307 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.389366 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't 
assign; no space Jul 16 00:45:32.389426 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.389489 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jul 16 00:45:32.389552 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jul 16 00:45:32.389613 kernel: pci 0001:01:00.0: ROM [mem 0x60000000-0x600fffff pref]: assigned Jul 16 00:45:32.389674 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jul 16 00:45:32.389733 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 16 00:45:32.389792 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 16 00:45:32.389851 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 16 00:45:32.389910 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 16 00:45:32.389970 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 16 00:45:32.390029 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 16 00:45:32.390088 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.390147 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 16 00:45:32.390207 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 16 00:45:32.390269 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 16 00:45:32.390331 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 16 00:45:32.390390 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 16 00:45:32.390444 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 16 00:45:32.390496 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 16 00:45:32.390560 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 16 00:45:32.390614 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 
64bit pref] Jul 16 00:45:32.390684 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 16 00:45:32.390741 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 16 00:45:32.390803 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 16 00:45:32.390860 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 16 00:45:32.390922 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 16 00:45:32.390976 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 16 00:45:32.390986 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 16 00:45:32.391052 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:45:32.391109 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:45:32.391165 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:45:32.391221 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:45:32.391282 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 16 00:45:32.391339 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 16 00:45:32.391349 kernel: PCI host bridge to bus 0004:00 Jul 16 00:45:32.391409 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 16 00:45:32.391462 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 16 00:45:32.391514 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 16 00:45:32.391580 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:45:32.391646 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.391705 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 16 
00:45:32.391764 kernel: pci 0004:00:01.0: bridge window [io 0x0000-0x0fff] Jul 16 00:45:32.391825 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jul 16 00:45:32.391883 kernel: pci 0004:00:01.0: supports D1 D2 Jul 16 00:45:32.391942 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.392007 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.392067 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.392125 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jul 16 00:45:32.392184 kernel: pci 0004:00:03.0: supports D1 D2 Jul 16 00:45:32.392244 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.392312 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:45:32.392372 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 16 00:45:32.392431 kernel: pci 0004:00:05.0: supports D1 D2 Jul 16 00:45:32.392491 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:45:32.392558 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 16 00:45:32.392619 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 16 00:45:32.392681 kernel: pci 0004:01:00.0: bridge window [io 0x0000-0x0fff] Jul 16 00:45:32.392741 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jul 16 00:45:32.392801 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 16 00:45:32.392860 kernel: pci 0004:01:00.0: supports D1 D2 Jul 16 00:45:32.392920 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:45:32.392984 kernel: pci_bus 0004:02: extended config space not accessible Jul 16 00:45:32.393054 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 16 00:45:32.393120 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jul 16 00:45:32.393182 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jul 16 00:45:32.393243 
kernel: pci 0004:02:00.0: BAR 2 [io 0x0000-0x007f] Jul 16 00:45:32.393308 kernel: pci 0004:02:00.0: supports D1 D2 Jul 16 00:45:32.393371 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:45:32.393446 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jul 16 00:45:32.393508 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jul 16 00:45:32.393570 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:45:32.393623 kernel: pci_bus 0004:00: on NUMA node 0 Jul 16 00:45:32.393682 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 16 00:45:32.393741 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:45:32.393801 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:45:32.393859 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 16 00:45:32.393918 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:45:32.393979 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.394039 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:45:32.394098 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 16 00:45:32.394157 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: assigned Jul 16 00:45:32.394216 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jul 16 00:45:32.394278 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jul 16 00:45:32.394337 kernel: pci 0004:00:05.0: bridge window [mem 
0x23200000-0x233fffff]: assigned Jul 16 00:45:32.394396 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jul 16 00:45:32.394458 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.394518 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.394577 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.394636 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.394695 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.394754 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.394812 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.394871 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.394931 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.394990 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.395049 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.395107 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.395168 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 16 00:45:32.395229 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:45:32.395296 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:45:32.395362 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jul 16 00:45:32.395425 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff]: assigned Jul 16 00:45:32.395487 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jul 16 00:45:32.395550 kernel: pci 0004:02:00.0: 
BAR 2 [io size 0x0080]: failed to assign Jul 16 00:45:32.395610 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 16 00:45:32.395670 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 16 00:45:32.395731 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 16 00:45:32.395789 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 16 00:45:32.395850 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 16 00:45:32.395910 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jul 16 00:45:32.395971 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 16 00:45:32.396031 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 16 00:45:32.396089 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 16 00:45:32.396149 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 16 00:45:32.396208 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 16 00:45:32.396272 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 16 00:45:32.396327 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 16 00:45:32.396379 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window] Jul 16 00:45:32.396431 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 16 00:45:32.396495 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 16 00:45:32.396550 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 16 00:45:32.396608 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 16 00:45:32.396672 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 16 00:45:32.396727 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 16 00:45:32.396788 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] 
Jul 16 00:45:32.396843 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 16 00:45:32.396853 kernel: ACPI: CPU18 has been hot-added Jul 16 00:45:32.396861 kernel: ACPI: CPU58 has been hot-added Jul 16 00:45:32.396868 kernel: ACPI: CPU38 has been hot-added Jul 16 00:45:32.396877 kernel: ACPI: CPU78 has been hot-added Jul 16 00:45:32.396887 kernel: ACPI: CPU16 has been hot-added Jul 16 00:45:32.396895 kernel: ACPI: CPU56 has been hot-added Jul 16 00:45:32.396902 kernel: ACPI: CPU36 has been hot-added Jul 16 00:45:32.396911 kernel: ACPI: CPU76 has been hot-added Jul 16 00:45:32.396919 kernel: ACPI: CPU17 has been hot-added Jul 16 00:45:32.396926 kernel: ACPI: CPU57 has been hot-added Jul 16 00:45:32.396934 kernel: ACPI: CPU37 has been hot-added Jul 16 00:45:32.396941 kernel: ACPI: CPU77 has been hot-added Jul 16 00:45:32.396949 kernel: ACPI: CPU19 has been hot-added Jul 16 00:45:32.396958 kernel: ACPI: CPU59 has been hot-added Jul 16 00:45:32.396965 kernel: ACPI: CPU39 has been hot-added Jul 16 00:45:32.396973 kernel: ACPI: CPU79 has been hot-added Jul 16 00:45:32.396981 kernel: ACPI: CPU12 has been hot-added Jul 16 00:45:32.396988 kernel: ACPI: CPU52 has been hot-added Jul 16 00:45:32.396996 kernel: ACPI: CPU32 has been hot-added Jul 16 00:45:32.397003 kernel: ACPI: CPU72 has been hot-added Jul 16 00:45:32.397011 kernel: ACPI: CPU8 has been hot-added Jul 16 00:45:32.397019 kernel: ACPI: CPU48 has been hot-added Jul 16 00:45:32.397027 kernel: ACPI: CPU28 has been hot-added Jul 16 00:45:32.397035 kernel: ACPI: CPU68 has been hot-added Jul 16 00:45:32.397043 kernel: ACPI: CPU10 has been hot-added Jul 16 00:45:32.397050 kernel: ACPI: CPU50 has been hot-added Jul 16 00:45:32.397058 kernel: ACPI: CPU30 has been hot-added Jul 16 00:45:32.397066 kernel: ACPI: CPU70 has been hot-added Jul 16 00:45:32.397074 kernel: ACPI: CPU14 has been hot-added Jul 16 00:45:32.397082 kernel: ACPI: CPU54 has been hot-added Jul 16 00:45:32.397089 kernel: 
ACPI: CPU34 has been hot-added Jul 16 00:45:32.397097 kernel: ACPI: CPU74 has been hot-added Jul 16 00:45:32.397106 kernel: ACPI: CPU4 has been hot-added Jul 16 00:45:32.397113 kernel: ACPI: CPU44 has been hot-added Jul 16 00:45:32.397121 kernel: ACPI: CPU24 has been hot-added Jul 16 00:45:32.397128 kernel: ACPI: CPU64 has been hot-added Jul 16 00:45:32.397136 kernel: ACPI: CPU0 has been hot-added Jul 16 00:45:32.397144 kernel: ACPI: CPU40 has been hot-added Jul 16 00:45:32.397151 kernel: ACPI: CPU20 has been hot-added Jul 16 00:45:32.397159 kernel: ACPI: CPU60 has been hot-added Jul 16 00:45:32.397166 kernel: ACPI: CPU2 has been hot-added Jul 16 00:45:32.397176 kernel: ACPI: CPU42 has been hot-added Jul 16 00:45:32.397184 kernel: ACPI: CPU22 has been hot-added Jul 16 00:45:32.397192 kernel: ACPI: CPU62 has been hot-added Jul 16 00:45:32.397199 kernel: ACPI: CPU6 has been hot-added Jul 16 00:45:32.397207 kernel: ACPI: CPU46 has been hot-added Jul 16 00:45:32.397215 kernel: ACPI: CPU26 has been hot-added Jul 16 00:45:32.397223 kernel: ACPI: CPU66 has been hot-added Jul 16 00:45:32.397231 kernel: ACPI: CPU5 has been hot-added Jul 16 00:45:32.397238 kernel: ACPI: CPU45 has been hot-added Jul 16 00:45:32.397247 kernel: ACPI: CPU25 has been hot-added Jul 16 00:45:32.397255 kernel: ACPI: CPU65 has been hot-added Jul 16 00:45:32.397265 kernel: ACPI: CPU1 has been hot-added Jul 16 00:45:32.397273 kernel: ACPI: CPU41 has been hot-added Jul 16 00:45:32.397281 kernel: ACPI: CPU21 has been hot-added Jul 16 00:45:32.397288 kernel: ACPI: CPU61 has been hot-added Jul 16 00:45:32.397296 kernel: ACPI: CPU3 has been hot-added Jul 16 00:45:32.397303 kernel: ACPI: CPU43 has been hot-added Jul 16 00:45:32.397311 kernel: ACPI: CPU23 has been hot-added Jul 16 00:45:32.397319 kernel: ACPI: CPU63 has been hot-added Jul 16 00:45:32.397328 kernel: ACPI: CPU7 has been hot-added Jul 16 00:45:32.397335 kernel: ACPI: CPU47 has been hot-added Jul 16 00:45:32.397343 kernel: ACPI: CPU27 has been 
hot-added Jul 16 00:45:32.397351 kernel: ACPI: CPU67 has been hot-added Jul 16 00:45:32.397358 kernel: ACPI: CPU13 has been hot-added Jul 16 00:45:32.397366 kernel: ACPI: CPU53 has been hot-added Jul 16 00:45:32.397373 kernel: ACPI: CPU33 has been hot-added Jul 16 00:45:32.397381 kernel: ACPI: CPU73 has been hot-added Jul 16 00:45:32.397389 kernel: ACPI: CPU9 has been hot-added Jul 16 00:45:32.397398 kernel: ACPI: CPU49 has been hot-added Jul 16 00:45:32.397405 kernel: ACPI: CPU29 has been hot-added Jul 16 00:45:32.397413 kernel: ACPI: CPU69 has been hot-added Jul 16 00:45:32.397421 kernel: ACPI: CPU11 has been hot-added Jul 16 00:45:32.397429 kernel: ACPI: CPU51 has been hot-added Jul 16 00:45:32.397436 kernel: ACPI: CPU31 has been hot-added Jul 16 00:45:32.397444 kernel: ACPI: CPU71 has been hot-added Jul 16 00:45:32.397451 kernel: ACPI: CPU15 has been hot-added Jul 16 00:45:32.397459 kernel: ACPI: CPU55 has been hot-added Jul 16 00:45:32.397466 kernel: ACPI: CPU35 has been hot-added Jul 16 00:45:32.397475 kernel: ACPI: CPU75 has been hot-added Jul 16 00:45:32.397483 kernel: iommu: Default domain type: Translated Jul 16 00:45:32.397491 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 16 00:45:32.397498 kernel: efivars: Registered efivars operations Jul 16 00:45:32.397563 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 16 00:45:32.397626 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 16 00:45:32.397689 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 16 00:45:32.397699 kernel: vgaarb: loaded Jul 16 00:45:32.397706 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 16 00:45:32.397716 kernel: VFS: Disk quotas dquot_6.6.0 Jul 16 00:45:32.397724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 16 00:45:32.397732 kernel: pnp: PnP ACPI init Jul 16 00:45:32.397797 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not 
be reserved Jul 16 00:45:32.397856 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 16 00:45:32.397910 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 16 00:45:32.397964 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 16 00:45:32.398019 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 16 00:45:32.398074 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 16 00:45:32.398128 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 16 00:45:32.398181 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 16 00:45:32.398191 kernel: pnp: PnP ACPI: found 1 devices Jul 16 00:45:32.398199 kernel: NET: Registered PF_INET protocol family Jul 16 00:45:32.398207 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:45:32.398216 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 16 00:45:32.398226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 16 00:45:32.398234 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 16 00:45:32.398242 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:45:32.398249 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 16 00:45:32.398257 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:45:32.398267 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:45:32.398275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 16 00:45:32.398338 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 16 00:45:32.398349 kernel: kvm [1]: nv: 554 coarse grained trap handlers Jul 16 
00:45:32.398358 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 16 00:45:32.398366 kernel: kvm [1]: GICv3: no GICV resource entry Jul 16 00:45:32.398373 kernel: kvm [1]: disabling GICv2 emulation Jul 16 00:45:32.398381 kernel: kvm [1]: GIC system register CPU interface enabled Jul 16 00:45:32.398389 kernel: kvm [1]: vgic interrupt IRQ9 Jul 16 00:45:32.398396 kernel: kvm [1]: VHE mode initialized successfully Jul 16 00:45:32.398404 kernel: Initialise system trusted keyrings Jul 16 00:45:32.398412 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 16 00:45:32.398419 kernel: Key type asymmetric registered Jul 16 00:45:32.398428 kernel: Asymmetric key parser 'x509' registered Jul 16 00:45:32.398436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 16 00:45:32.398444 kernel: io scheduler mq-deadline registered Jul 16 00:45:32.398451 kernel: io scheduler kyber registered Jul 16 00:45:32.398459 kernel: io scheduler bfq registered Jul 16 00:45:32.398466 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 16 00:45:32.398474 kernel: ACPI: button: Power Button [PWRB] Jul 16 00:45:32.398482 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 16 00:45:32.398490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 00:45:32.398559 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 16 00:45:32.398615 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.398670 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.398725 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.398779 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 32768 entries for evtq Jul 16 00:45:32.398833 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for priq Jul 16 00:45:32.398897 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 16 00:45:32.398953 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.399007 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.399017 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399025 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:45:32.399077 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.399087 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399095 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:45:32.399149 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 32768 entries for evtq Jul 16 00:45:32.399158 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399166 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:45:32.399218 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for priq Jul 16 00:45:32.399283 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 16 00:45:32.399340 kernel: 
arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.399396 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.399407 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399415 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399467 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.399477 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399484 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399536 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 32768 entries for evtq Jul 16 00:45:32.399546 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399554 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399606 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for priq Jul 16 00:45:32.399618 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 128 pages, ret: -12 Jul 16 00:45:32.399626 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399685 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 16 00:45:32.399741 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.399796 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.399805 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399813 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399865 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.399875 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399884 kernel: cma: 
number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.399936 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 32768 entries for evtq Jul 16 00:45:32.399946 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:45:32.399953 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400005 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for priq Jul 16 00:45:32.400015 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400075 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 16 00:45:32.400130 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.400186 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.400196 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400248 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.400258 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400322 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 32768 entries for evtq Jul 16 00:45:32.400332 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400385 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for priq Jul 16 00:45:32.400395 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400455 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 16 00:45:32.400512 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.400566 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.400576 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400629 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for cmdq Jul 16 
00:45:32.400639 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400691 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 32768 entries for evtq Jul 16 00:45:32.400701 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400754 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for priq Jul 16 00:45:32.400765 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.400834 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 16 00:45:32.400890 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.400945 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.400954 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401007 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.401017 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401074 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 32768 entries for evtq Jul 16 00:45:32.401083 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401136 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for priq Jul 16 00:45:32.401146 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401204 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 16 00:45:32.401259 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:45:32.401323 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:45:32.401335 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401390 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for cmdq Jul 16 00:45:32.401401 kernel: cma: number of available pages: => 0 free of 4096 
total pages Jul 16 00:45:32.401455 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 32768 entries for evtq Jul 16 00:45:32.401465 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401518 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for priq Jul 16 00:45:32.401528 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401536 kernel: thunder_xcv, ver 1.0 Jul 16 00:45:32.401543 kernel: thunder_bgx, ver 1.0 Jul 16 00:45:32.401551 kernel: nicpf, ver 1.0 Jul 16 00:45:32.401560 kernel: nicvf, ver 1.0 Jul 16 00:45:32.401622 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 16 00:45:32.401677 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-16T00:45:30 UTC (1752626730) Jul 16 00:45:32.401687 kernel: efifb: probing for efifb Jul 16 00:45:32.401695 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 16 00:45:32.401702 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 16 00:45:32.401712 kernel: efifb: scrolling: redraw Jul 16 00:45:32.401719 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 16 00:45:32.401728 kernel: Console: switching to colour frame buffer device 100x37 Jul 16 00:45:32.401736 kernel: fb0: EFI VGA frame buffer device Jul 16 00:45:32.401744 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 16 00:45:32.401752 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 16 00:45:32.401759 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 16 00:45:32.401767 kernel: watchdog: NMI not fully supported Jul 16 00:45:32.401775 kernel: NET: Registered PF_INET6 protocol family Jul 16 00:45:32.401782 kernel: watchdog: Hard watchdog permanently disabled Jul 16 00:45:32.401790 kernel: Segment Routing with IPv6 Jul 16 00:45:32.401799 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 00:45:32.401806 kernel: NET: Registered PF_PACKET protocol family Jul 16 00:45:32.401814 kernel: 
Key type dns_resolver registered Jul 16 00:45:32.401822 kernel: registered taskstats version 1 Jul 16 00:45:32.401829 kernel: Loading compiled-in X.509 certificates Jul 16 00:45:32.401837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd' Jul 16 00:45:32.401845 kernel: Demotion targets for Node 0: null Jul 16 00:45:32.401852 kernel: Key type .fscrypt registered Jul 16 00:45:32.401860 kernel: Key type fscrypt-provisioning registered Jul 16 00:45:32.401869 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 16 00:45:32.401876 kernel: ima: Allocated hash algorithm: sha1 Jul 16 00:45:32.401884 kernel: ima: No architecture policies found Jul 16 00:45:32.401892 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 16 00:45:32.401899 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.401961 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 16 00:45:32.402022 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 16 00:45:32.402083 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 16 00:45:32.402143 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 16 00:45:32.402206 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 16 00:45:32.402268 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 16 00:45:32.402333 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 16 00:45:32.402392 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 16 00:45:32.402402 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.402461 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 16 00:45:32.402520 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 16 00:45:32.402581 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 16 00:45:32.402640 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 16 00:45:32.402703 kernel: pcieport 0000:00:03.0: Adding to 
iommu group 6 Jul 16 00:45:32.402763 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 16 00:45:32.402823 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jul 16 00:45:32.402882 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 16 00:45:32.402892 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.402951 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 16 00:45:32.403011 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 16 00:45:32.403071 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 16 00:45:32.403130 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 16 00:45:32.403193 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 16 00:45:32.403253 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 16 00:45:32.403317 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 16 00:45:32.403377 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 16 00:45:32.403387 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.403446 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 16 00:45:32.403505 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 16 00:45:32.403566 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 16 00:45:32.403627 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 16 00:45:32.403687 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 16 00:45:32.403747 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 16 00:45:32.403757 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.403815 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 16 00:45:32.403874 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 16 00:45:32.403935 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 16 00:45:32.403994 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 16 00:45:32.404054 kernel: pcieport 
000c:00:03.0: Adding to iommu group 17 Jul 16 00:45:32.404114 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 16 00:45:32.404174 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Jul 16 00:45:32.404233 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 16 00:45:32.404243 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.404305 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 16 00:45:32.404365 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 16 00:45:32.404427 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 16 00:45:32.404486 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 16 00:45:32.404549 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Jul 16 00:45:32.404608 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 16 00:45:32.404669 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 16 00:45:32.404728 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 16 00:45:32.404738 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.404795 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 16 00:45:32.404854 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 16 00:45:32.404917 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 16 00:45:32.404976 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 16 00:45:32.405037 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 16 00:45:32.405095 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 16 00:45:32.405156 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 16 00:45:32.405214 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 16 00:45:32.405224 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.405285 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 16 00:45:32.405345 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 16 00:45:32.405405 
kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 16 00:45:32.405464 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 16 00:45:32.405525 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jul 16 00:45:32.405584 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 16 00:45:32.405594 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:32.405653 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 16 00:45:32.405663 kernel: clk: Disabling unused clocks Jul 16 00:45:32.405670 kernel: PM: genpd: Disabling unused power domains Jul 16 00:45:32.405678 kernel: Warning: unable to open an initial console. Jul 16 00:45:32.405685 kernel: Freeing unused kernel memory: 39488K Jul 16 00:45:32.405693 kernel: Run /init as init process Jul 16 00:45:32.405702 kernel: with arguments: Jul 16 00:45:32.405710 kernel: /init Jul 16 00:45:32.405718 kernel: with environment: Jul 16 00:45:32.405725 kernel: HOME=/ Jul 16 00:45:32.405732 kernel: TERM=linux Jul 16 00:45:32.405740 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 16 00:45:32.405748 systemd[1]: Successfully made /usr/ read-only. Jul 16 00:45:32.405759 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:45:32.405769 systemd[1]: Detected architecture arm64. Jul 16 00:45:32.405777 systemd[1]: Running in initrd. Jul 16 00:45:32.405785 systemd[1]: No hostname configured, using default hostname. Jul 16 00:45:32.405793 systemd[1]: Hostname set to . Jul 16 00:45:32.405801 systemd[1]: Initializing machine ID from random generator. Jul 16 00:45:32.405809 systemd[1]: Queued start job for default target initrd.target. 
Jul 16 00:45:32.405817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:45:32.405825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:45:32.405835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 16 00:45:32.405844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:45:32.405852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 16 00:45:32.405860 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 16 00:45:32.405869 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 16 00:45:32.405877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 16 00:45:32.405887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:45:32.405896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:45:32.405904 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:45:32.405912 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:45:32.405920 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:45:32.405928 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:45:32.405936 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:45:32.405944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:45:32.405952 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 16 00:45:32.405962 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Jul 16 00:45:32.405970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:45:32.405978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:45:32.405986 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:45:32.405994 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:45:32.406002 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 16 00:45:32.406011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:45:32.406019 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 16 00:45:32.406027 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 16 00:45:32.406037 systemd[1]: Starting systemd-fsck-usr.service... Jul 16 00:45:32.406045 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:45:32.406053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:45:32.406061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:45:32.406087 systemd-journald[909]: Collecting audit messages is disabled. Jul 16 00:45:32.406107 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 16 00:45:32.406116 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 16 00:45:32.406123 kernel: Bridge firewalling registered Jul 16 00:45:32.406132 systemd-journald[909]: Journal started Jul 16 00:45:32.406152 systemd-journald[909]: Runtime Journal (/run/log/journal/9a2f076266274392b66cd8ebeb04387b) is 8M, max 4G, 3.9G free. 
Jul 16 00:45:32.341808 systemd-modules-load[911]: Inserted module 'overlay' Jul 16 00:45:32.430595 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:45:32.397985 systemd-modules-load[911]: Inserted module 'br_netfilter' Jul 16 00:45:32.436315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:45:32.447628 systemd[1]: Finished systemd-fsck-usr.service. Jul 16 00:45:32.458876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:45:32.469779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:45:32.484576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 16 00:45:32.492772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:45:32.522883 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:45:32.529527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:45:32.542045 systemd-tmpfiles[941]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 16 00:45:32.549131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:45:32.564394 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:45:32.580997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:45:32.592458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:45:32.612435 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 16 00:45:32.642687 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:45:32.651899 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 16 00:45:32.677683 dracut-cmdline[963]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578 Jul 16 00:45:32.666394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:45:32.682087 systemd-resolved[965]: Positive Trust Anchors: Jul 16 00:45:32.682096 systemd-resolved[965]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:45:32.682128 systemd-resolved[965]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:45:32.697319 systemd-resolved[965]: Defaulting to hostname 'linux'. Jul 16 00:45:32.698691 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:45:32.723972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:45:32.842271 kernel: SCSI subsystem initialized Jul 16 00:45:32.858272 kernel: Loading iSCSI transport class v2.0-870. 
Jul 16 00:45:32.877272 kernel: iscsi: registered transport (tcp) Jul 16 00:45:32.905307 kernel: iscsi: registered transport (qla4xxx) Jul 16 00:45:32.905328 kernel: QLogic iSCSI HBA Driver Jul 16 00:45:32.924064 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:45:32.955330 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:45:32.971915 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:45:33.022588 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 16 00:45:33.034386 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 16 00:45:33.121276 kernel: raid6: neonx8 gen() 15842 MB/s Jul 16 00:45:33.147268 kernel: raid6: neonx4 gen() 15883 MB/s Jul 16 00:45:33.173273 kernel: raid6: neonx2 gen() 13284 MB/s Jul 16 00:45:33.199272 kernel: raid6: neonx1 gen() 10467 MB/s Jul 16 00:45:33.224272 kernel: raid6: int64x8 gen() 6931 MB/s Jul 16 00:45:33.249273 kernel: raid6: int64x4 gen() 7390 MB/s Jul 16 00:45:33.274272 kernel: raid6: int64x2 gen() 6130 MB/s Jul 16 00:45:33.302864 kernel: raid6: int64x1 gen() 5075 MB/s Jul 16 00:45:33.302888 kernel: raid6: using algorithm neonx4 gen() 15883 MB/s Jul 16 00:45:33.338199 kernel: raid6: .... xor() 12421 MB/s, rmw enabled Jul 16 00:45:33.338219 kernel: raid6: using neon recovery algorithm Jul 16 00:45:33.363125 kernel: xor: measuring software checksum speed Jul 16 00:45:33.363148 kernel: 8regs : 21613 MB/sec Jul 16 00:45:33.371769 kernel: 32regs : 21687 MB/sec Jul 16 00:45:33.380420 kernel: arm64_neon : 28205 MB/sec Jul 16 00:45:33.388701 kernel: xor: using function: arm64_neon (28205 MB/sec) Jul 16 00:45:33.455272 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 16 00:45:33.460721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 16 00:45:33.468304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:45:33.506638 systemd-udevd[1185]: Using default interface naming scheme 'v255'. Jul 16 00:45:33.510580 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:45:33.516955 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 16 00:45:33.557333 dracut-pre-trigger[1198]: rd.md=0: removing MD RAID activation Jul 16 00:45:33.581294 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:45:33.587525 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:45:33.886230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:45:34.049432 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 16 00:45:34.049453 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 16 00:45:34.049468 kernel: ACPI: bus type USB registered Jul 16 00:45:34.049478 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:34.049488 kernel: usbcore: registered new interface driver usbfs Jul 16 00:45:34.049497 kernel: PTP clock support registered Jul 16 00:45:34.049506 kernel: nvme 0005:03:00.0: Adding to iommu group 31 Jul 16 00:45:34.049653 kernel: usbcore: registered new interface driver hub Jul 16 00:45:34.049663 kernel: usbcore: registered new device driver usb Jul 16 00:45:34.049672 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:34.049681 kernel: nvme 0005:04:00.0: Adding to iommu group 32 Jul 16 00:45:34.049771 kernel: nvme nvme1: pci function 0005:04:00.0 Jul 16 00:45:34.049859 kernel: nvme nvme0: pci function 0005:03:00.0 Jul 16 00:45:34.049936 kernel: nvme nvme1: D3 entry latency set to 8 seconds Jul 16 00:45:34.049998 kernel: nvme nvme0: D3 entry latency set to 8 seconds Jul 16 00:45:34.038780 systemd[1]: 
Starting dracut-initqueue.service - dracut initqueue hook... Jul 16 00:45:34.140332 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jul 16 00:45:34.140552 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jul 16 00:45:34.140628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 16 00:45:34.140639 kernel: GPT:9289727 != 1875385007 Jul 16 00:45:34.140649 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 16 00:45:34.140661 kernel: GPT:9289727 != 1875385007 Jul 16 00:45:34.140669 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 16 00:45:34.140678 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:45:34.056741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:45:34.056791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:45:34.329997 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 16 00:45:34.330026 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 16 00:45:34.330045 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:34.330064 kernel: igb 0003:03:00.0: Adding to iommu group 33 Jul 16 00:45:34.330277 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:34.330290 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 34 Jul 16 00:45:34.330381 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 16 00:45:34.330455 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jul 16 00:45:34.330528 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jul 16 00:45:34.330604 kernel: igb 0003:03:00.0: added PHC on eth0 Jul 16 00:45:34.330679 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:45:34.330690 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:45:34.330771 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:50 Jul 16 
00:45:34.330842 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Jul 16 00:45:34.330922 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jul 16 00:45:34.146040 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:45:34.156289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:45:34.335405 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:45:34.358552 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jul 16 00:45:34.398827 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 16 00:45:34.398957 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jul 16 00:45:34.376705 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jul 16 00:45:34.404494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:45:34.428864 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 16 00:45:34.446314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 16 00:45:34.459673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 16 00:45:34.470723 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jul 16 00:45:34.574904 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010 Jul 16 00:45:34.575078 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 16 00:45:34.575159 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jul 16 00:45:34.575233 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jul 16 00:45:34.575325 kernel: hub 1-0:1.0: USB hub found Jul 16 00:45:34.575420 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:45:34.575430 kernel: hub 1-0:1.0: 4 ports detected Jul 16 00:45:34.575576 disk-uuid[1333]: Primary Header is updated. Jul 16 00:45:34.575576 disk-uuid[1333]: Secondary Entries is updated. Jul 16 00:45:34.575576 disk-uuid[1333]: Secondary Header is updated. Jul 16 00:45:34.661572 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe Jul 16 00:45:34.661701 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Jul 16 00:45:34.661777 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:45:34.661851 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 16 00:45:34.700239 kernel: igb 0003:03:00.1: added PHC on eth1 Jul 16 00:45:34.700430 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:45:34.712135 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:51 Jul 16 00:45:34.732622 kernel: hub 2-0:1.0: USB hub found Jul 16 00:45:34.732802 kernel: hub 2-0:1.0: 4 ports detected Jul 16 00:45:34.741367 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jul 16 00:45:34.751346 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 
8 rx queue(s), 8 tx queue(s) Jul 16 00:45:34.770273 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jul 16 00:45:34.770356 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jul 16 00:45:34.939271 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 16 00:45:34.955494 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jul 16 00:45:34.990275 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jul 16 00:45:35.131273 kernel: hub 1-3:1.0: USB hub found Jul 16 00:45:35.140266 kernel: hub 1-3:1.0: 4 ports detected Jul 16 00:45:35.241276 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jul 16 00:45:35.265284 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:45:35.277276 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jul 16 00:45:35.277438 kernel: hub 2-3:1.0: USB hub found Jul 16 00:45:35.298878 kernel: hub 2-3:1.0: 4 ports detected Jul 16 00:45:35.298959 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe Jul 16 00:45:35.308761 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Jul 16 00:45:35.322659 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:45:35.523277 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:45:35.523316 disk-uuid[1335]: The operation has completed successfully. 
Jul 16 00:45:35.665273 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 16 00:45:35.682013 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Jul 16 00:45:36.015298 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:45:36.031276 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jul 16 00:45:36.042273 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jul 16 00:45:36.067157 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 16 00:45:36.072753 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 16 00:45:36.073772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 16 00:45:36.082681 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:45:36.089639 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:45:36.098411 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:45:36.113060 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 16 00:45:36.122526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 16 00:45:36.142227 sh[1538]: Success Jul 16 00:45:36.150039 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:45:36.202146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 16 00:45:36.202164 kernel: device-mapper: uevent: version 1.0.3 Jul 16 00:45:36.202174 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 16 00:45:36.207268 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 16 00:45:36.238380 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jul 16 00:45:36.246959 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 16 00:45:36.265941 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 16 00:45:36.357576 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 16 00:45:36.357595 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (1560) Jul 16 00:45:36.357605 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 16 00:45:36.357619 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:45:36.357630 kernel: BTRFS info (device dm-0): using free-space-tree Jul 16 00:45:36.363773 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 16 00:45:36.375191 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:45:36.387222 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 16 00:45:36.388739 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 16 00:45:36.405766 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 16 00:45:36.520984 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:6) scanned by mount (1585) Jul 16 00:45:36.521000 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:45:36.521009 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:45:36.521019 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:45:36.521031 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:45:36.522240 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 16 00:45:36.529887 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 16 00:45:36.548408 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:45:36.557207 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:45:36.604711 systemd-networkd[1739]: lo: Link UP Jul 16 00:45:36.604717 systemd-networkd[1739]: lo: Gained carrier Jul 16 00:45:36.608372 systemd-networkd[1739]: Enumeration completed Jul 16 00:45:36.608462 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:45:36.609596 systemd-networkd[1739]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:45:36.616402 systemd[1]: Reached target network.target - Network. Jul 16 00:45:36.662279 ignition[1731]: Ignition 2.21.0 Jul 16 00:45:36.662290 ignition[1731]: Stage: fetch-offline Jul 16 00:45:36.662655 systemd-networkd[1739]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:45:36.662333 ignition[1731]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:36.669896 unknown[1731]: fetched base config from "system" Jul 16 00:45:36.662341 ignition[1731]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:36.669903 unknown[1731]: fetched user config from "system" Jul 16 00:45:36.662532 ignition[1731]: parsed url from cmdline: "" Jul 16 00:45:36.673757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:45:36.662535 ignition[1731]: no config URL provided Jul 16 00:45:36.689930 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 16 00:45:36.662539 ignition[1731]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 00:45:36.690753 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 16 00:45:36.662607 ignition[1731]: parsing config with SHA512: 0d82206046ae0ae7253d953426f3bd6f8d9b9f80610a4e935dc114c81cbfb9eeb133b3a7dc4ff92d7f872388aafbb2606b31c60c6cdd5cf23f8cade20d88aaa4 Jul 16 00:45:36.715952 systemd-networkd[1739]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:45:36.670464 ignition[1731]: fetch-offline: fetch-offline passed Jul 16 00:45:36.670468 ignition[1731]: POST message to Packet Timeline Jul 16 00:45:36.670495 ignition[1731]: POST Status error: resource requires networking Jul 16 00:45:36.670553 ignition[1731]: Ignition finished successfully Jul 16 00:45:36.748948 ignition[1775]: Ignition 2.21.0 Jul 16 00:45:36.748954 ignition[1775]: Stage: kargs Jul 16 00:45:36.749200 ignition[1775]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:36.749210 ignition[1775]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:36.754446 ignition[1775]: kargs: kargs passed Jul 16 00:45:36.754452 ignition[1775]: POST message to Packet Timeline Jul 16 00:45:36.754680 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:45:36.757774 ignition[1775]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35270->[::1]:53: read: connection refused Jul 16 00:45:36.958444 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #2 Jul 16 00:45:36.959010 ignition[1775]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40004->[::1]:53: read: connection refused Jul 16 00:45:37.309273 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 16 00:45:37.312121 systemd-networkd[1739]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 16 00:45:37.359727 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #3 Jul 16 00:45:37.360096 ignition[1775]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:43427->[::1]:53: read: connection refused Jul 16 00:45:37.925275 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 16 00:45:37.928123 systemd-networkd[1739]: eno1: Link UP Jul 16 00:45:37.928248 systemd-networkd[1739]: eno2: Link UP Jul 16 00:45:37.928374 systemd-networkd[1739]: enP1p1s0f0np0: Link UP Jul 16 00:45:37.928503 systemd-networkd[1739]: enP1p1s0f0np0: Gained carrier Jul 16 00:45:37.949480 systemd-networkd[1739]: enP1p1s0f1np1: Link UP Jul 16 00:45:37.950774 systemd-networkd[1739]: enP1p1s0f1np1: Gained carrier Jul 16 00:45:37.995293 systemd-networkd[1739]: enP1p1s0f0np0: DHCPv4 address 147.28.162.205/31, gateway 147.28.162.204 acquired from 147.28.144.140 Jul 16 00:45:38.160534 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #4 Jul 16 00:45:38.161028 ignition[1775]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60166->[::1]:53: read: connection refused Jul 16 00:45:38.978336 systemd-networkd[1739]: enP1p1s0f0np0: Gained IPv6LL Jul 16 00:45:39.234327 systemd-networkd[1739]: enP1p1s0f1np1: Gained IPv6LL Jul 16 00:45:39.761703 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #5 Jul 16 00:45:39.762109 ignition[1775]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40176->[::1]:53: read: connection refused Jul 16 00:45:42.964591 ignition[1775]: GET https://metadata.packet.net/metadata: attempt #6 Jul 16 00:45:43.712626 ignition[1775]: GET result: OK Jul 16 00:45:44.240511 ignition[1775]: Ignition finished successfully Jul 16 00:45:44.244312 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 16 00:45:44.247274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 16 00:45:44.288343 ignition[1807]: Ignition 2.21.0 Jul 16 00:45:44.288351 ignition[1807]: Stage: disks Jul 16 00:45:44.288487 ignition[1807]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:44.288495 ignition[1807]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:44.289328 ignition[1807]: disks: disks passed Jul 16 00:45:44.289332 ignition[1807]: POST message to Packet Timeline Jul 16 00:45:44.289350 ignition[1807]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:45:45.677346 ignition[1807]: GET result: OK Jul 16 00:45:45.996373 ignition[1807]: Ignition finished successfully Jul 16 00:45:45.999231 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 16 00:45:46.006174 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 16 00:45:46.013900 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 16 00:45:46.022117 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:45:46.030804 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:45:46.039848 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:45:46.050393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 16 00:45:46.092474 systemd-fsck[1837]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 16 00:45:46.095891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 16 00:45:46.104471 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 16 00:45:46.195271 kernel: EXT4-fs (nvme0n1p9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 16 00:45:46.195731 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 16 00:45:46.206099 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jul 16 00:45:46.217309 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:45:46.242869 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 16 00:45:46.252267 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (1848) Jul 16 00:45:46.252289 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:45:46.252300 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:45:46.252310 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:45:46.319776 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 16 00:45:46.345519 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 16 00:45:46.351525 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 16 00:45:46.351595 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:45:46.365524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 16 00:45:46.380167 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 16 00:45:46.408841 coreos-metadata[1866]: Jul 16 00:45:46.395 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:45:46.420076 coreos-metadata[1865]: Jul 16 00:45:46.395 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:45:46.394209 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 16 00:45:46.447500 initrd-setup-root[1888]: cut: /sysroot/etc/passwd: No such file or directory Jul 16 00:45:46.453966 initrd-setup-root[1896]: cut: /sysroot/etc/group: No such file or directory Jul 16 00:45:46.460425 initrd-setup-root[1903]: cut: /sysroot/etc/shadow: No such file or directory Jul 16 00:45:46.466953 initrd-setup-root[1911]: cut: /sysroot/etc/gshadow: No such file or directory Jul 16 00:45:46.536653 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 16 00:45:46.548914 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 16 00:45:46.572763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 16 00:45:46.582268 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:45:46.605936 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 16 00:45:46.615891 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 16 00:45:46.630348 ignition[1982]: INFO : Ignition 2.21.0 Jul 16 00:45:46.630348 ignition[1982]: INFO : Stage: mount Jul 16 00:45:46.641578 ignition[1982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:46.641578 ignition[1982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:46.641578 ignition[1982]: INFO : mount: mount passed Jul 16 00:45:46.641578 ignition[1982]: INFO : POST message to Packet Timeline Jul 16 00:45:46.641578 ignition[1982]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:45:47.212201 coreos-metadata[1866]: Jul 16 00:45:47.212 INFO Fetch successful Jul 16 00:45:47.257816 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 16 00:45:47.257904 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
Jul 16 00:45:47.579140 coreos-metadata[1865]: Jul 16 00:45:47.579 INFO Fetch successful Jul 16 00:45:47.623309 coreos-metadata[1865]: Jul 16 00:45:47.623 INFO wrote hostname ci-4372.0.1-n-8893f80933 to /sysroot/etc/hostname Jul 16 00:45:47.626444 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:45:47.725621 ignition[1982]: INFO : GET result: OK Jul 16 00:45:48.360936 ignition[1982]: INFO : Ignition finished successfully Jul 16 00:45:48.363614 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 16 00:45:48.371788 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 16 00:45:48.409405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:45:48.451163 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (2006) Jul 16 00:45:48.451201 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:45:48.465641 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:45:48.478785 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:45:48.487687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 16 00:45:48.531230 ignition[2023]: INFO : Ignition 2.21.0 Jul 16 00:45:48.531230 ignition[2023]: INFO : Stage: files Jul 16 00:45:48.541278 ignition[2023]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:48.541278 ignition[2023]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:48.541278 ignition[2023]: DEBUG : files: compiled without relabeling support, skipping Jul 16 00:45:48.541278 ignition[2023]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 16 00:45:48.541278 ignition[2023]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 16 00:45:48.541278 ignition[2023]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 16 00:45:48.541278 ignition[2023]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 16 00:45:48.541278 ignition[2023]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 16 00:45:48.541278 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 16 00:45:48.541278 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 16 00:45:48.538205 unknown[2023]: wrote ssh authorized keys file for user: core Jul 16 00:45:48.637658 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 16 00:45:48.708412 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 16 00:45:48.719294 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 16 00:45:48.991685 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 16 00:45:49.420997 ignition[2023]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 16 00:45:49.420997 ignition[2023]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:45:49.445881 ignition[2023]: INFO : files: files passed Jul 16 00:45:49.445881 ignition[2023]: INFO : POST message to Packet Timeline Jul 16 00:45:49.445881 ignition[2023]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:45:50.469715 ignition[2023]: INFO : GET result: OK Jul 16 00:45:50.865012 ignition[2023]: INFO : Ignition finished successfully Jul 16 00:45:50.867826 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 16 00:45:50.873009 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 16 00:45:50.892782 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 16 00:45:50.911505 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 16 00:45:50.911664 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 16 00:45:50.929633 initrd-setup-root-after-ignition[2064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:45:50.929633 initrd-setup-root-after-ignition[2064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:45:50.924126 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:45:50.981675 initrd-setup-root-after-ignition[2068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:45:50.937043 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 16 00:45:50.953819 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 16 00:45:50.995605 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 00:45:50.995675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 16 00:45:51.017792 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 16 00:45:51.027559 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 16 00:45:51.044196 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 16 00:45:51.045317 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 16 00:45:51.080983 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:45:51.093716 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 16 00:45:51.118421 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:45:51.130146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:45:51.136010 systemd[1]: Stopped target timers.target - Timer Units. Jul 16 00:45:51.147517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 00:45:51.147619 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:45:51.159078 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 16 00:45:51.170237 systemd[1]: Stopped target basic.target - Basic System. Jul 16 00:45:51.181573 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 16 00:45:51.192885 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:45:51.204081 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 16 00:45:51.215214 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:45:51.226374 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 16 00:45:51.237466 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:45:51.248694 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 16 00:45:51.259840 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 16 00:45:51.276663 systemd[1]: Stopped target swap.target - Swaps. Jul 16 00:45:51.287960 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 00:45:51.288061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:45:51.299487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:45:51.310638 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:45:51.321685 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 16 00:45:51.322359 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:45:51.332960 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 00:45:51.333065 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 16 00:45:51.344420 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 16 00:45:51.344510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:45:51.355727 systemd[1]: Stopped target paths.target - Path Units. Jul 16 00:45:51.367069 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 16 00:45:51.370295 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:45:51.384201 systemd[1]: Stopped target slices.target - Slice Units. Jul 16 00:45:51.395687 systemd[1]: Stopped target sockets.target - Socket Units. Jul 16 00:45:51.407305 systemd[1]: iscsid.socket: Deactivated successfully. Jul 16 00:45:51.407393 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:45:51.418948 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 00:45:51.525542 ignition[2093]: INFO : Ignition 2.21.0 Jul 16 00:45:51.525542 ignition[2093]: INFO : Stage: umount Jul 16 00:45:51.525542 ignition[2093]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:45:51.525542 ignition[2093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:45:51.525542 ignition[2093]: INFO : umount: umount passed Jul 16 00:45:51.525542 ignition[2093]: INFO : POST message to Packet Timeline Jul 16 00:45:51.525542 ignition[2093]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:45:51.419029 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:45:51.430616 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jul 16 00:45:51.430705 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:45:51.442277 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 00:45:51.442364 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 16 00:45:51.453982 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 16 00:45:51.454064 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:45:51.472227 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 16 00:45:51.483304 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 00:45:51.483406 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:45:51.497936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 16 00:45:51.506761 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 00:45:51.506868 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:45:51.519534 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 16 00:45:51.519618 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:45:51.534052 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 16 00:45:51.534903 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 00:45:51.536293 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 16 00:45:51.545098 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 00:45:51.546296 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 16 00:45:52.952078 ignition[2093]: INFO : GET result: OK Jul 16 00:45:53.274301 ignition[2093]: INFO : Ignition finished successfully Jul 16 00:45:53.277256 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 00:45:53.277526 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 16 00:45:53.284297 systemd[1]: Stopped target network.target - Network. Jul 16 00:45:53.293278 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 00:45:53.293357 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 16 00:45:53.302886 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 00:45:53.302921 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 16 00:45:53.312367 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 00:45:53.312428 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 16 00:45:53.321973 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 16 00:45:53.322001 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 16 00:45:53.331767 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 00:45:53.331805 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 16 00:45:53.341731 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 16 00:45:53.351405 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 16 00:45:53.361414 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 16 00:45:53.361553 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 16 00:45:53.375123 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 16 00:45:53.376138 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 16 00:45:53.376209 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:45:53.388727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:45:53.389034 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 00:45:53.390311 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jul 16 00:45:53.396825 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 16 00:45:53.397681 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 16 00:45:53.406136 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 00:45:53.406242 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:45:53.418297 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 16 00:45:53.426763 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 00:45:53.426819 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:45:53.437460 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 00:45:53.437500 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:45:53.448139 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 16 00:45:53.448197 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 16 00:45:53.463944 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:45:53.476099 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 16 00:45:53.485720 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 00:45:53.486738 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:45:53.503335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 00:45:53.503372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 16 00:45:53.514288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 00:45:53.514319 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:45:53.525499 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 16 00:45:53.525555 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:45:53.542620 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 00:45:53.542659 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 16 00:45:53.553964 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 16 00:45:53.554008 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:45:53.566399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 16 00:45:53.577378 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 16 00:45:53.577430 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:45:53.589314 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 16 00:45:53.589356 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:45:53.601419 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:45:53.601453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:45:53.620940 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 16 00:45:53.621001 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 16 00:45:53.621039 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:45:53.621361 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 00:45:53.621432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 16 00:45:54.154586 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 00:45:54.154738 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 16 00:45:54.166266 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 16 00:45:54.177441 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 16 00:45:54.208617 systemd[1]: Switching root. Jul 16 00:45:54.276047 systemd-journald[909]: Journal stopped Jul 16 00:45:56.480108 systemd-journald[909]: Received SIGTERM from PID 1 (systemd). Jul 16 00:45:56.480135 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 00:45:56.480145 kernel: SELinux: policy capability open_perms=1 Jul 16 00:45:56.480153 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 00:45:56.480160 kernel: SELinux: policy capability always_check_network=0 Jul 16 00:45:56.480167 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 00:45:56.480175 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 00:45:56.480184 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 00:45:56.480192 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 00:45:56.480199 kernel: SELinux: policy capability userspace_initial_context=0 Jul 16 00:45:56.480207 kernel: audit: type=1403 audit(1752626754.482:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 16 00:45:56.480215 systemd[1]: Successfully loaded SELinux policy in 141.220ms. Jul 16 00:45:56.480224 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.569ms. Jul 16 00:45:56.480234 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:45:56.480244 systemd[1]: Detected architecture arm64. Jul 16 00:45:56.480253 systemd[1]: Detected first boot. Jul 16 00:45:56.480261 systemd[1]: Hostname set to <ci-4372.0.1-n-8893f80933>. 
Jul 16 00:45:56.480273 systemd[1]: Initializing machine ID from random generator. Jul 16 00:45:56.480282 zram_generator::config[2165]: No configuration found. Jul 16 00:45:56.480293 systemd[1]: Populated /etc with preset unit settings. Jul 16 00:45:56.480301 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 16 00:45:56.480310 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 16 00:45:56.480318 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 16 00:45:56.480327 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 16 00:45:56.480335 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 16 00:45:56.480344 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 16 00:45:56.480354 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 16 00:45:56.480363 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 16 00:45:56.480371 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 16 00:45:56.480380 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 16 00:45:56.480389 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 16 00:45:56.480398 systemd[1]: Created slice user.slice - User and Session Slice. Jul 16 00:45:56.480406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:45:56.480415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:45:56.480425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 16 00:45:56.480433 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jul 16 00:45:56.480442 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 16 00:45:56.480451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:45:56.480460 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 16 00:45:56.480468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:45:56.480479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:45:56.480490 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 16 00:45:56.480500 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 16 00:45:56.480509 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 16 00:45:56.480518 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 16 00:45:56.480527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:45:56.480535 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:45:56.480544 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:45:56.480553 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:45:56.480563 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 16 00:45:56.480572 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 16 00:45:56.480581 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 16 00:45:56.480590 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:45:56.480599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:45:56.480609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:45:56.480619 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jul 16 00:45:56.480628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 16 00:45:56.480637 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 16 00:45:56.480646 systemd[1]: Mounting media.mount - External Media Directory... Jul 16 00:45:56.480655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 16 00:45:56.480664 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 16 00:45:56.480673 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 16 00:45:56.480683 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 00:45:56.480693 systemd[1]: Reached target machines.target - Containers. Jul 16 00:45:56.480702 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 16 00:45:56.480711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:45:56.480720 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:45:56.480729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 16 00:45:56.480738 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:45:56.480747 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:45:56.480756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:45:56.480766 kernel: ACPI: bus type drm_connector registered Jul 16 00:45:56.480774 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 16 00:45:56.480783 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 16 00:45:56.480791 kernel: fuse: init (API version 7.41)
Jul 16 00:45:56.480799 kernel: loop: module loaded
Jul 16 00:45:56.480808 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 16 00:45:56.480817 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 16 00:45:56.480826 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 16 00:45:56.480836 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 16 00:45:56.480845 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 16 00:45:56.480855 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 16 00:45:56.480864 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 16 00:45:56.480874 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 16 00:45:56.480883 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 16 00:45:56.480892 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 16 00:45:56.480917 systemd-journald[2270]: Collecting audit messages is disabled.
Jul 16 00:45:56.480938 systemd-journald[2270]: Journal started
Jul 16 00:45:56.480957 systemd-journald[2270]: Runtime Journal (/run/log/journal/c70ca23f9fb64165af7a1a9946ab88f4) is 8M, max 4G, 3.9G free.
Jul 16 00:45:55.037323 systemd[1]: Queued start job for default target multi-user.target.
Jul 16 00:45:55.062911 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 16 00:45:55.063230 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 16 00:45:55.063538 systemd[1]: systemd-journald.service: Consumed 3.595s CPU time.
Jul 16 00:45:56.516279 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 16 00:45:56.537278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 16 00:45:56.560420 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 16 00:45:56.560437 systemd[1]: Stopped verity-setup.service.
Jul 16 00:45:56.586279 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 16 00:45:56.591618 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 16 00:45:56.597210 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 16 00:45:56.602730 systemd[1]: Mounted media.mount - External Media Directory.
Jul 16 00:45:56.608194 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 16 00:45:56.613712 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 16 00:45:56.619317 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 16 00:45:56.624830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 16 00:45:56.630427 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 16 00:45:56.635980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 16 00:45:56.637299 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 16 00:45:56.642820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 00:45:56.642982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 16 00:45:56.650301 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 16 00:45:56.650474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 16 00:45:56.655929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 00:45:56.656099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 16 00:45:56.661379 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 16 00:45:56.661549 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 16 00:45:56.666952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 00:45:56.667122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 16 00:45:56.672488 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 16 00:45:56.678356 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 16 00:45:56.683600 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 16 00:45:56.688711 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 16 00:45:56.702989 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 16 00:45:56.709205 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 16 00:45:56.738874 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 16 00:45:56.743876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 16 00:45:56.743902 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 16 00:45:56.749506 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 16 00:45:56.755481 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 16 00:45:56.760464 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 16 00:45:56.761705 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 16 00:45:56.767442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 16 00:45:56.772284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 16 00:45:56.773359 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 16 00:45:56.778150 systemd-journald[2270]: Time spent on flushing to /var/log/journal/c70ca23f9fb64165af7a1a9946ab88f4 is 25.188ms for 2528 entries.
Jul 16 00:45:56.778150 systemd-journald[2270]: System Journal (/var/log/journal/c70ca23f9fb64165af7a1a9946ab88f4) is 8M, max 195.6M, 187.6M free.
Jul 16 00:45:56.820295 systemd-journald[2270]: Received client request to flush runtime journal.
Jul 16 00:45:56.820345 kernel: loop0: detected capacity change from 0 to 107312
Jul 16 00:45:56.778199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 16 00:45:56.779301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 16 00:45:56.796207 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 16 00:45:56.802000 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 16 00:45:56.808136 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 16 00:45:56.823719 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 16 00:45:56.824269 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 16 00:45:56.838610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 16 00:45:56.843308 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 16 00:45:56.850000 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 16 00:45:56.854738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 16 00:45:56.861628 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 16 00:45:56.869845 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 16 00:45:56.875671 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 16 00:45:56.897273 kernel: loop1: detected capacity change from 0 to 207008
Jul 16 00:45:56.900580 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 16 00:45:56.908537 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 16 00:45:56.909148 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 16 00:45:56.921553 systemd-tmpfiles[2329]: ACLs are not supported, ignoring.
Jul 16 00:45:56.921565 systemd-tmpfiles[2329]: ACLs are not supported, ignoring.
Jul 16 00:45:56.926146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 16 00:45:56.962278 kernel: loop2: detected capacity change from 0 to 8
Jul 16 00:45:57.012629 ldconfig[2300]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 16 00:45:57.014290 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 16 00:45:57.015270 kernel: loop3: detected capacity change from 0 to 138376
Jul 16 00:45:57.087278 kernel: loop4: detected capacity change from 0 to 107312
Jul 16 00:45:57.104277 kernel: loop5: detected capacity change from 0 to 207008
Jul 16 00:45:57.122273 kernel: loop6: detected capacity change from 0 to 8
Jul 16 00:45:57.134276 kernel: loop7: detected capacity change from 0 to 138376
Jul 16 00:45:57.140614 (sd-merge)[2349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Jul 16 00:45:57.141048 (sd-merge)[2349]: Merged extensions into '/usr'.
Jul 16 00:45:57.144200 systemd[1]: Reload requested from client PID 2307 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 16 00:45:57.144214 systemd[1]: Reloading...
Jul 16 00:45:57.185274 zram_generator::config[2378]: No configuration found.
Jul 16 00:45:57.263225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 16 00:45:57.336535 systemd[1]: Reloading finished in 191 ms.
Jul 16 00:45:57.364608 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 16 00:45:57.369776 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 16 00:45:57.397614 systemd[1]: Starting ensure-sysext.service...
Jul 16 00:45:57.403417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 16 00:45:57.410096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 16 00:45:57.420870 systemd[1]: Reload requested from client PID 2431 ('systemctl') (unit ensure-sysext.service)...
Jul 16 00:45:57.420881 systemd[1]: Reloading...
Jul 16 00:45:57.421421 systemd-tmpfiles[2432]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 16 00:45:57.421446 systemd-tmpfiles[2432]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 16 00:45:57.421643 systemd-tmpfiles[2432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 16 00:45:57.421820 systemd-tmpfiles[2432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 16 00:45:57.422453 systemd-tmpfiles[2432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 16 00:45:57.422649 systemd-tmpfiles[2432]: ACLs are not supported, ignoring.
Jul 16 00:45:57.422691 systemd-tmpfiles[2432]: ACLs are not supported, ignoring.
Jul 16 00:45:57.425760 systemd-tmpfiles[2432]: Detected autofs mount point /boot during canonicalization of boot.
Jul 16 00:45:57.425767 systemd-tmpfiles[2432]: Skipping /boot
Jul 16 00:45:57.434637 systemd-tmpfiles[2432]: Detected autofs mount point /boot during canonicalization of boot.
Jul 16 00:45:57.434644 systemd-tmpfiles[2432]: Skipping /boot
Jul 16 00:45:57.438337 systemd-udevd[2433]: Using default interface naming scheme 'v255'.
Jul 16 00:45:57.465270 zram_generator::config[2462]: No configuration found.
Jul 16 00:45:57.512285 kernel: IPMI message handler: version 39.2
Jul 16 00:45:57.522283 kernel: ipmi device interface
Jul 16 00:45:57.534271 kernel: ipmi_ssif: IPMI SSIF Interface driver
Jul 16 00:45:57.539268 kernel: MACsec IEEE 802.1AE
Jul 16 00:45:57.539314 kernel: ipmi_si: IPMI System Interface driver
Jul 16 00:45:57.555303 kernel: ipmi_si: Unable to find any System Interface(s)
Jul 16 00:45:57.561256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 16 00:45:57.652524 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 16 00:45:57.652866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM.
Jul 16 00:45:57.657550 systemd[1]: Reloading finished in 236 ms.
Jul 16 00:45:57.671460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 16 00:45:57.688785 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 16 00:45:57.711570 systemd[1]: Finished ensure-sysext.service.
Jul 16 00:45:57.733984 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 16 00:45:57.751295 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 16 00:45:57.756134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 16 00:45:57.757057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 16 00:45:57.762834 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 16 00:45:57.768582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 16 00:45:57.774319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 16 00:45:57.779238 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 16 00:45:57.780153 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 16 00:45:57.785022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 16 00:45:57.786122 augenrules[2687]: No rules
Jul 16 00:45:57.786241 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 16 00:45:57.792858 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 16 00:45:57.799474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 16 00:45:57.805773 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 16 00:45:57.811381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 16 00:45:57.836589 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 16 00:45:57.841874 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 16 00:45:57.842709 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 16 00:45:57.848805 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 16 00:45:57.853317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 16 00:45:57.853476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 16 00:45:57.857922 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 16 00:45:57.858076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 16 00:45:57.863056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 16 00:45:57.863853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 16 00:45:57.870744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 16 00:45:57.870931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 16 00:45:57.875683 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 16 00:45:57.880508 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 16 00:45:57.887142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 16 00:45:57.899045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 16 00:45:57.904083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 16 00:45:57.904187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 16 00:45:57.905563 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 16 00:45:57.926651 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 16 00:45:57.931246 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 16 00:45:57.933986 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 16 00:45:57.963088 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 16 00:45:58.031663 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 16 00:45:58.036471 systemd[1]: Reached target time-set.target - System Time Set.
Jul 16 00:45:58.039944 systemd-resolved[2696]: Positive Trust Anchors:
Jul 16 00:45:58.039956 systemd-resolved[2696]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 16 00:45:58.039988 systemd-resolved[2696]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 16 00:45:58.043524 systemd-resolved[2696]: Using system hostname 'ci-4372.0.1-n-8893f80933'.
Jul 16 00:45:58.044845 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 16 00:45:58.045684 systemd-networkd[2695]: lo: Link UP
Jul 16 00:45:58.045689 systemd-networkd[2695]: lo: Gained carrier
Jul 16 00:45:58.049002 systemd-networkd[2695]: bond0: netdev ready
Jul 16 00:45:58.049230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 16 00:45:58.053629 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 16 00:45:58.057895 systemd-networkd[2695]: Enumeration completed
Jul 16 00:45:58.058004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 16 00:45:58.062350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 16 00:45:58.062932 systemd-networkd[2695]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:5a:81:80.network.
Jul 16 00:45:58.066853 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 16 00:45:58.071211 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 16 00:45:58.075675 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 16 00:45:58.079974 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 16 00:45:58.079993 systemd[1]: Reached target paths.target - Path Units.
Jul 16 00:45:58.084240 systemd[1]: Reached target timers.target - Timer Units.
Jul 16 00:45:58.089317 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 16 00:45:58.094923 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 16 00:45:58.101341 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 16 00:45:58.111195 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 16 00:45:58.115981 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 16 00:45:58.120782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 16 00:45:58.125367 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 16 00:45:58.129835 systemd[1]: Reached target network.target - Network.
Jul 16 00:45:58.134240 systemd[1]: Reached target sockets.target - Socket Units.
Jul 16 00:45:58.138541 systemd[1]: Reached target basic.target - Basic System.
Jul 16 00:45:58.142820 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 16 00:45:58.142839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 16 00:45:58.143856 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 16 00:45:58.170083 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 16 00:45:58.175542 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 16 00:45:58.181232 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 16 00:45:58.186598 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 16 00:45:58.191769 coreos-metadata[2739]: Jul 16 00:45:58.191 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 16 00:45:58.192061 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 16 00:45:58.195041 coreos-metadata[2739]: Jul 16 00:45:58.195 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Jul 16 00:45:58.196544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 16 00:45:58.196781 jq[2744]: false
Jul 16 00:45:58.197593 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 16 00:45:58.203067 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 16 00:45:58.208172 extend-filesystems[2745]: Found /dev/nvme0n1p6
Jul 16 00:45:58.213040 extend-filesystems[2745]: Found /dev/nvme0n1p9
Jul 16 00:45:58.208591 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 16 00:45:58.222268 extend-filesystems[2745]: Checking size of /dev/nvme0n1p9
Jul 16 00:45:58.218718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 16 00:45:58.231410 extend-filesystems[2745]: Resized partition /dev/nvme0n1p9
Jul 16 00:45:58.253131 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
Jul 16 00:45:58.230794 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 16 00:45:58.253292 extend-filesystems[2767]: resize2fs 1.47.2 (1-Jan-2025)
Jul 16 00:45:58.249533 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 16 00:45:58.259027 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 16 00:45:58.267758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 16 00:45:58.268285 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 16 00:45:58.268848 systemd[1]: Starting update-engine.service - Update Engine...
Jul 16 00:45:58.274784 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 16 00:45:58.281058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 16 00:45:58.282529 jq[2779]: true
Jul 16 00:45:58.286394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 16 00:45:58.286569 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 16 00:45:58.286796 systemd[1]: motdgen.service: Deactivated successfully.
Jul 16 00:45:58.286967 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 16 00:45:58.292562 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 16 00:45:58.292838 systemd-logind[2768]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 16 00:45:58.293044 systemd-logind[2768]: New seat seat0.
Jul 16 00:45:58.293298 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 16 00:45:58.299007 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 16 00:45:58.308090 update_engine[2778]: I20250716 00:45:58.307963 2778 main.cc:92] Flatcar Update Engine starting
Jul 16 00:45:58.309108 (ntainerd)[2784]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 16 00:45:58.311174 jq[2782]: true
Jul 16 00:45:58.317260 tar[2781]: linux-arm64/LICENSE
Jul 16 00:45:58.317432 tar[2781]: linux-arm64/helm
Jul 16 00:45:58.325103 dbus-daemon[2740]: [system] SELinux support is enabled
Jul 16 00:45:58.325897 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 16 00:45:58.328796 update_engine[2778]: I20250716 00:45:58.328763 2778 update_check_scheduler.cc:74] Next update check in 10m43s
Jul 16 00:45:58.334975 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 16 00:45:58.335005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 16 00:45:58.335251 dbus-daemon[2740]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 16 00:45:58.339617 bash[2810]: Updated "/home/core/.ssh/authorized_keys"
Jul 16 00:45:58.339776 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 16 00:45:58.339791 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 16 00:45:58.344782 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 16 00:45:58.350167 systemd[1]: Started update-engine.service - Update Engine.
Jul 16 00:45:58.357009 systemd[1]: Starting sshkeys.service...
Jul 16 00:45:58.386201 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 16 00:45:58.395716 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 16 00:45:58.401610 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 16 00:45:58.421397 coreos-metadata[2819]: Jul 16 00:45:58.421 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Jul 16 00:45:58.422065 locksmithd[2813]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 16 00:45:58.422547 coreos-metadata[2819]: Jul 16 00:45:58.422 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata)
Jul 16 00:45:58.466469 containerd[2784]: time="2025-07-16T00:45:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 16 00:45:58.467044 containerd[2784]: time="2025-07-16T00:45:58.467018480Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 16 00:45:58.475082 containerd[2784]: time="2025-07-16T00:45:58.475055200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.68µs"
Jul 16 00:45:58.475103 containerd[2784]: time="2025-07-16T00:45:58.475082360Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 16 00:45:58.475103 containerd[2784]: time="2025-07-16T00:45:58.475099360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 16 00:45:58.475261 containerd[2784]: time="2025-07-16T00:45:58.475248360Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 16 00:45:58.475294 containerd[2784]: time="2025-07-16T00:45:58.475273040Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 16 00:45:58.475317 containerd[2784]: time="2025-07-16T00:45:58.475297440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475360 containerd[2784]: time="2025-07-16T00:45:58.475346160Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475380 containerd[2784]: time="2025-07-16T00:45:58.475359800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475568 containerd[2784]: time="2025-07-16T00:45:58.475553400Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475594 containerd[2784]: time="2025-07-16T00:45:58.475568680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475594 containerd[2784]: time="2025-07-16T00:45:58.475579400Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475594 containerd[2784]: time="2025-07-16T00:45:58.475587240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475682 containerd[2784]: time="2025-07-16T00:45:58.475658360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475854 containerd[2784]: time="2025-07-16T00:45:58.475839680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475881 containerd[2784]: time="2025-07-16T00:45:58.475867000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 16 00:45:58.475881 containerd[2784]: time="2025-07-16T00:45:58.475878160Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 16 00:45:58.476421 containerd[2784]: time="2025-07-16T00:45:58.476402840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 16 00:45:58.476641 containerd[2784]: time="2025-07-16T00:45:58.476630000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 16 00:45:58.476718 containerd[2784]: time="2025-07-16T00:45:58.476706960Z" level=info msg="metadata content store policy set" policy=shared
Jul 16 00:45:58.485024 containerd[2784]: time="2025-07-16T00:45:58.484996520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 16 00:45:58.485077 containerd[2784]: time="2025-07-16T00:45:58.485034080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 16 00:45:58.485077 containerd[2784]: time="2025-07-16T00:45:58.485046400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 16 00:45:58.485077 containerd[2784]: time="2025-07-16T00:45:58.485058400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 16 00:45:58.485077 containerd[2784]: time="2025-07-16T00:45:58.485069520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485082120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485092920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485103640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485114480Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485123960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485133960Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 16 00:45:58.485203 containerd[2784]: time="2025-07-16T00:45:58.485146720Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 16 00:45:58.485329 containerd[2784]: time="2025-07-16T00:45:58.485255760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 16 00:45:58.485329 containerd[2784]: time="2025-07-16T00:45:58.485294640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 16 00:45:58.485329 containerd[2784]: time="2025-07-16T00:45:58.485311080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 16 00:45:58.485329 containerd[2784]: time="2025-07-16T00:45:58.485322520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485331920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485342560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485358000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485369360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485379800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 16 00:45:58.485390 containerd[2784]: time="2025-07-16T00:45:58.485390960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 16 00:45:58.485488 containerd[2784]: time="2025-07-16T00:45:58.485401240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 16 00:45:58.485597 containerd[2784]: time="2025-07-16T00:45:58.485583320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 16 00:45:58.485618 containerd[2784]: time="2025-07-16T00:45:58.485600080Z" level=info msg="Start snapshots syncer"
Jul 16 00:45:58.485635 containerd[2784]: time="2025-07-16T00:45:58.485623640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 16 00:45:58.485858 containerd[2784]: time="2025-07-16T00:45:58.485825520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 16 00:45:58.485941 containerd[2784]: time="2025-07-16T00:45:58.485872840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 16 00:45:58.485977 containerd[2784]: time="2025-07-16T00:45:58.485962480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 16 00:45:58.486080 containerd[2784]: time="2025-07-16T00:45:58.486067760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 16 00:45:58.486098 containerd[2784]: time="2025-07-16T00:45:58.486091320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 16 00:45:58.486115 containerd[2784]: time="2025-07-16T00:45:58.486101920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 16 00:45:58.486132 containerd[2784]: time="2025-07-16T00:45:58.486114080Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 16 00:45:58.486132 containerd[2784]: time="2025-07-16T00:45:58.486125360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 16 00:45:58.486168 containerd[2784]: time="2025-07-16T00:45:58.486135440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 16 00:45:58.486168 containerd[2784]: time="2025-07-16T00:45:58.486146160Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 16 00:45:58.486199 containerd[2784]: time="2025-07-16T00:45:58.486168320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 16 00:45:58.486199 containerd[2784]: time="2025-07-16T00:45:58.486179040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 16 00:45:58.486199 containerd[2784]: time="2025-07-16T00:45:58.486188960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 16 00:45:58.486247 containerd[2784]:
time="2025-07-16T00:45:58.486225760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:45:58.486247 containerd[2784]: time="2025-07-16T00:45:58.486238920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:45:58.486284 containerd[2784]: time="2025-07-16T00:45:58.486246840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:45:58.486284 containerd[2784]: time="2025-07-16T00:45:58.486256000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:45:58.486284 containerd[2784]: time="2025-07-16T00:45:58.486270120Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 16 00:45:58.486284 containerd[2784]: time="2025-07-16T00:45:58.486280040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 16 00:45:58.486345 containerd[2784]: time="2025-07-16T00:45:58.486300920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 16 00:45:58.486387 containerd[2784]: time="2025-07-16T00:45:58.486378000Z" level=info msg="runtime interface created" Jul 16 00:45:58.486387 containerd[2784]: time="2025-07-16T00:45:58.486385040Z" level=info msg="created NRI interface" Jul 16 00:45:58.486419 containerd[2784]: time="2025-07-16T00:45:58.486393360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 16 00:45:58.486419 containerd[2784]: time="2025-07-16T00:45:58.486404360Z" level=info msg="Connect containerd service" Jul 16 00:45:58.486453 containerd[2784]: time="2025-07-16T00:45:58.486434840Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" Jul 16 00:45:58.487067 containerd[2784]: time="2025-07-16T00:45:58.487048080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:45:58.566474 containerd[2784]: time="2025-07-16T00:45:58.566430200Z" level=info msg="Start subscribing containerd event" Jul 16 00:45:58.566499 containerd[2784]: time="2025-07-16T00:45:58.566489800Z" level=info msg="Start recovering state" Jul 16 00:45:58.566586 containerd[2784]: time="2025-07-16T00:45:58.566573000Z" level=info msg="Start event monitor" Jul 16 00:45:58.566612 containerd[2784]: time="2025-07-16T00:45:58.566593160Z" level=info msg="Start cni network conf syncer for default" Jul 16 00:45:58.566612 containerd[2784]: time="2025-07-16T00:45:58.566601520Z" level=info msg="Start streaming server" Jul 16 00:45:58.566645 containerd[2784]: time="2025-07-16T00:45:58.566611400Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 16 00:45:58.566645 containerd[2784]: time="2025-07-16T00:45:58.566618920Z" level=info msg="runtime interface starting up..." Jul 16 00:45:58.566645 containerd[2784]: time="2025-07-16T00:45:58.566630280Z" level=info msg="starting plugins..." Jul 16 00:45:58.566645 containerd[2784]: time="2025-07-16T00:45:58.566642480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 16 00:45:58.566752 containerd[2784]: time="2025-07-16T00:45:58.566730000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 16 00:45:58.566796 containerd[2784]: time="2025-07-16T00:45:58.566786920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 16 00:45:58.566845 containerd[2784]: time="2025-07-16T00:45:58.566837480Z" level=info msg="containerd successfully booted in 0.100715s" Jul 16 00:45:58.566892 systemd[1]: Started containerd.service - containerd container runtime. Jul 16 00:45:58.650070 tar[2781]: linux-arm64/README.md Jul 16 00:45:58.684301 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 16 00:45:58.752279 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jul 16 00:45:58.768916 extend-filesystems[2767]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 16 00:45:58.768916 extend-filesystems[2767]: old_desc_blocks = 1, new_desc_blocks = 112 Jul 16 00:45:58.768916 extend-filesystems[2767]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jul 16 00:45:58.800619 extend-filesystems[2745]: Resized filesystem in /dev/nvme0n1p9 Jul 16 00:45:58.771318 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 16 00:45:58.771655 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 16 00:45:58.785899 systemd[1]: extend-filesystems.service: Consumed 209ms CPU time, 68.9M memory peak. Jul 16 00:45:58.968664 sshd_keygen[2771]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 00:45:58.989309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 16 00:45:58.996597 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 16 00:45:59.019988 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 00:45:59.020177 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 16 00:45:59.027167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 16 00:45:59.054061 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 16 00:45:59.060920 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jul 16 00:45:59.067619 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 16 00:45:59.073248 systemd[1]: Reached target getty.target - Login Prompts. Jul 16 00:45:59.195158 coreos-metadata[2739]: Jul 16 00:45:59.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 16 00:45:59.195547 coreos-metadata[2739]: Jul 16 00:45:59.195 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:45:59.354280 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 16 00:45:59.372276 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 16 00:45:59.373178 systemd-networkd[2695]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:5a:81:81.network. Jul 16 00:45:59.422650 coreos-metadata[2819]: Jul 16 00:45:59.422 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 16 00:45:59.423024 coreos-metadata[2819]: Jul 16 00:45:59.423 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:45:59.976281 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 16 00:45:59.992898 systemd-networkd[2695]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 16 00:45:59.993272 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jul 16 00:45:59.994443 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 16 00:45:59.994555 systemd-networkd[2695]: enP1p1s0f0np0: Link UP Jul 16 00:45:59.994798 systemd-networkd[2695]: enP1p1s0f0np0: Gained carrier Jul 16 00:46:00.014268 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 16 00:46:00.030592 systemd-networkd[2695]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:5a:81:80.network. 
Jul 16 00:46:00.030867 systemd-networkd[2695]: enP1p1s0f1np1: Link UP Jul 16 00:46:00.031058 systemd-networkd[2695]: enP1p1s0f1np1: Gained carrier Jul 16 00:46:00.042537 systemd-networkd[2695]: bond0: Link UP Jul 16 00:46:00.042789 systemd-networkd[2695]: bond0: Gained carrier Jul 16 00:46:00.042960 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:00.043547 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:00.043792 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:00.043922 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:00.117713 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 16 00:46:00.117745 kernel: bond0: active interface up! Jul 16 00:46:00.242274 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 16 00:46:01.186317 systemd-networkd[2695]: bond0: Gained IPv6LL Jul 16 00:46:01.186847 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:01.195650 coreos-metadata[2739]: Jul 16 00:46:01.195 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 16 00:46:01.423207 coreos-metadata[2819]: Jul 16 00:46:01.423 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 16 00:46:01.634639 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:01.634750 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:01.638308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 16 00:46:01.644231 systemd[1]: Reached target network-online.target - Network is Online. Jul 16 00:46:01.651402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 16 00:46:01.668709 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 16 00:46:01.690613 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 16 00:46:02.272456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:02.278729 (kubelet)[2901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:46:02.643794 kubelet[2901]: E0716 00:46:02.643712 2901 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:46:02.646384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:46:02.646518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:46:02.646865 systemd[1]: kubelet.service: Consumed 724ms CPU time, 264.2M memory peak. Jul 16 00:46:03.514679 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jul 16 00:46:03.514984 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jul 16 00:46:03.684145 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 16 00:46:03.690372 systemd[1]: Started sshd@0-147.28.162.205:22-139.178.89.65:50146.service - OpenSSH per-connection server daemon (139.178.89.65:50146). Jul 16 00:46:03.965759 coreos-metadata[2739]: Jul 16 00:46:03.965 INFO Fetch successful Jul 16 00:46:04.044129 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 16 00:46:04.051236 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... 
Jul 16 00:46:04.097716 login[2878]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jul 16 00:46:04.098139 login[2877]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:04.108055 systemd-logind[2768]: New session 2 of user core. Jul 16 00:46:04.109415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 16 00:46:04.110700 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 16 00:46:04.110763 sshd[2930]: Accepted publickey for core from 139.178.89.65 port 50146 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:04.112141 sshd-session[2930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:04.115287 systemd-logind[2768]: New session 3 of user core. Jul 16 00:46:04.122136 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 16 00:46:04.125281 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 16 00:46:04.131154 (systemd)[2948]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 00:46:04.132851 systemd-logind[2768]: New session c1 of user core. Jul 16 00:46:04.258384 systemd[2948]: Queued start job for default target default.target. Jul 16 00:46:04.278300 systemd[2948]: Created slice app.slice - User Application Slice. Jul 16 00:46:04.278325 systemd[2948]: Reached target paths.target - Paths. Jul 16 00:46:04.278359 systemd[2948]: Reached target timers.target - Timers. Jul 16 00:46:04.279539 systemd[2948]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 16 00:46:04.287536 systemd[2948]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 16 00:46:04.287585 systemd[2948]: Reached target sockets.target - Sockets. Jul 16 00:46:04.287626 systemd[2948]: Reached target basic.target - Basic System. 
Jul 16 00:46:04.287653 systemd[2948]: Reached target default.target - Main User Target. Jul 16 00:46:04.287674 systemd[2948]: Startup finished in 149ms. Jul 16 00:46:04.287967 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 16 00:46:04.289721 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 16 00:46:04.290634 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 16 00:46:04.454332 coreos-metadata[2819]: Jul 16 00:46:04.454 INFO Fetch successful Jul 16 00:46:04.500929 unknown[2819]: wrote ssh authorized keys file for user: core Jul 16 00:46:04.526089 update-ssh-keys[2974]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:46:04.527251 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 16 00:46:04.528766 systemd[1]: Finished sshkeys.service. Jul 16 00:46:04.594870 systemd[1]: Started sshd@1-147.28.162.205:22-139.178.89.65:50148.service - OpenSSH per-connection server daemon (139.178.89.65:50148). Jul 16 00:46:04.679884 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 16 00:46:04.680403 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 16 00:46:04.684336 systemd[1]: Startup finished in 5.263s (kernel) + 22.931s (initrd) + 10.343s (userspace) = 38.538s. Jul 16 00:46:04.998676 sshd[2978]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:04.999885 sshd-session[2978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:05.003948 systemd-logind[2768]: New session 4 of user core. Jul 16 00:46:05.025422 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 16 00:46:05.100470 login[2878]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:05.104142 systemd-logind[2768]: New session 1 of user core. 
Jul 16 00:46:05.126425 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 16 00:46:05.289130 sshd[2982]: Connection closed by 139.178.89.65 port 50148 Jul 16 00:46:05.289530 sshd-session[2978]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:05.293360 systemd[1]: sshd@1-147.28.162.205:22-139.178.89.65:50148.service: Deactivated successfully. Jul 16 00:46:05.295820 systemd[1]: session-4.scope: Deactivated successfully. Jul 16 00:46:05.296386 systemd-logind[2768]: Session 4 logged out. Waiting for processes to exit. Jul 16 00:46:05.297166 systemd-logind[2768]: Removed session 4. Jul 16 00:46:05.368652 systemd[1]: Started sshd@2-147.28.162.205:22-139.178.89.65:50162.service - OpenSSH per-connection server daemon (139.178.89.65:50162). Jul 16 00:46:05.771635 sshd[2996]: Accepted publickey for core from 139.178.89.65 port 50162 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:05.772763 sshd-session[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:05.775570 systemd-logind[2768]: New session 5 of user core. Jul 16 00:46:05.797363 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 16 00:46:06.058510 sshd[2998]: Connection closed by 139.178.89.65 port 50162 Jul 16 00:46:06.058927 sshd-session[2996]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:06.062589 systemd[1]: sshd@2-147.28.162.205:22-139.178.89.65:50162.service: Deactivated successfully. Jul 16 00:46:06.064599 systemd[1]: session-5.scope: Deactivated successfully. Jul 16 00:46:06.065151 systemd-logind[2768]: Session 5 logged out. Waiting for processes to exit. Jul 16 00:46:06.066065 systemd-logind[2768]: Removed session 5. Jul 16 00:46:06.137738 systemd[1]: Started sshd@3-147.28.162.205:22-139.178.89.65:50164.service - OpenSSH per-connection server daemon (139.178.89.65:50164). 
Jul 16 00:46:06.541205 sshd[3006]: Accepted publickey for core from 139.178.89.65 port 50164 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:06.542319 sshd-session[3006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:06.545110 systemd-logind[2768]: New session 6 of user core. Jul 16 00:46:06.566379 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 16 00:46:06.582117 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:06.833963 sshd[3008]: Connection closed by 139.178.89.65 port 50164 Jul 16 00:46:06.834467 sshd-session[3006]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:06.838104 systemd[1]: sshd@3-147.28.162.205:22-139.178.89.65:50164.service: Deactivated successfully. Jul 16 00:46:06.840760 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 00:46:06.841441 systemd-logind[2768]: Session 6 logged out. Waiting for processes to exit. Jul 16 00:46:06.842196 systemd-logind[2768]: Removed session 6. Jul 16 00:46:06.914562 systemd[1]: Started sshd@4-147.28.162.205:22-139.178.89.65:50176.service - OpenSSH per-connection server daemon (139.178.89.65:50176). Jul 16 00:46:07.314908 sshd[3014]: Accepted publickey for core from 139.178.89.65 port 50176 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:07.316171 sshd-session[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:07.319512 systemd-logind[2768]: New session 7 of user core. Jul 16 00:46:07.339440 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 16 00:46:07.560696 sudo[3017]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 16 00:46:07.560939 sudo[3017]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:46:07.585879 sudo[3017]: pam_unix(sudo:session): session closed for user root Jul 16 00:46:07.648032 sshd[3016]: Connection closed by 139.178.89.65 port 50176 Jul 16 00:46:07.648418 sshd-session[3014]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:07.651294 systemd[1]: sshd@4-147.28.162.205:22-139.178.89.65:50176.service: Deactivated successfully. Jul 16 00:46:07.653610 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 00:46:07.654182 systemd-logind[2768]: Session 7 logged out. Waiting for processes to exit. Jul 16 00:46:07.655148 systemd-logind[2768]: Removed session 7. Jul 16 00:46:07.722883 systemd[1]: Started sshd@5-147.28.162.205:22-139.178.89.65:50182.service - OpenSSH per-connection server daemon (139.178.89.65:50182). Jul 16 00:46:08.124540 sshd[3023]: Accepted publickey for core from 139.178.89.65 port 50182 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:08.125880 sshd-session[3023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:08.128812 systemd-logind[2768]: New session 8 of user core. Jul 16 00:46:08.150381 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 16 00:46:08.353605 sudo[3028]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 16 00:46:08.353845 sudo[3028]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:46:08.356573 sudo[3028]: pam_unix(sudo:session): session closed for user root Jul 16 00:46:08.360734 sudo[3027]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 16 00:46:08.360970 sudo[3027]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:46:08.368239 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:46:08.412988 augenrules[3050]: No rules Jul 16 00:46:08.414050 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:46:08.414257 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:46:08.414968 sudo[3027]: pam_unix(sudo:session): session closed for user root Jul 16 00:46:08.476612 sshd[3026]: Connection closed by 139.178.89.65 port 50182 Jul 16 00:46:08.476944 sshd-session[3023]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:08.479727 systemd[1]: sshd@5-147.28.162.205:22-139.178.89.65:50182.service: Deactivated successfully. Jul 16 00:46:08.482539 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:46:08.483069 systemd-logind[2768]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:46:08.483869 systemd-logind[2768]: Removed session 8. Jul 16 00:46:08.554685 systemd[1]: Started sshd@6-147.28.162.205:22-139.178.89.65:50198.service - OpenSSH per-connection server daemon (139.178.89.65:50198). 
Jul 16 00:46:08.959211 sshd[3059]: Accepted publickey for core from 139.178.89.65 port 50198 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:46:08.960411 sshd-session[3059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:46:08.963392 systemd-logind[2768]: New session 9 of user core. Jul 16 00:46:08.975375 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 16 00:46:09.190367 sudo[3063]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 00:46:09.190624 sudo[3063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:46:09.493637 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 16 00:46:09.505673 (dockerd)[3091]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 16 00:46:09.723637 dockerd[3091]: time="2025-07-16T00:46:09.723588720Z" level=info msg="Starting up" Jul 16 00:46:09.724774 dockerd[3091]: time="2025-07-16T00:46:09.724750920Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 16 00:46:09.754647 dockerd[3091]: time="2025-07-16T00:46:09.754587480Z" level=info msg="Loading containers: start." Jul 16 00:46:09.767273 kernel: Initializing XFRM netlink socket Jul 16 00:46:09.935039 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 16 00:46:09.967740 systemd-networkd[2695]: docker0: Link UP Jul 16 00:46:09.968590 dockerd[3091]: time="2025-07-16T00:46:09.968557160Z" level=info msg="Loading containers: done." 
Jul 16 00:46:09.977572 dockerd[3091]: time="2025-07-16T00:46:09.977548280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 00:46:09.977640 dockerd[3091]: time="2025-07-16T00:46:09.977606720Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 16 00:46:09.977708 dockerd[3091]: time="2025-07-16T00:46:09.977697280Z" level=info msg="Initializing buildkit" Jul 16 00:46:09.991959 dockerd[3091]: time="2025-07-16T00:46:09.991935320Z" level=info msg="Completed buildkit initialization" Jul 16 00:46:09.996470 dockerd[3091]: time="2025-07-16T00:46:09.996440920Z" level=info msg="Daemon has completed initialization" Jul 16 00:46:09.996518 dockerd[3091]: time="2025-07-16T00:46:09.996484120Z" level=info msg="API listen on /run/docker.sock" Jul 16 00:46:09.996620 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 16 00:46:10.619013 containerd[2784]: time="2025-07-16T00:46:10.618981240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Jul 16 00:46:10.741389 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3427352118-merged.mount: Deactivated successfully. Jul 16 00:46:11.177206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768432657.mount: Deactivated successfully. Jul 16 00:46:11.657827 systemd-timesyncd[2697]: Contacted time server [2607:b500:410:7700::1]:123 (2.flatcar.pool.ntp.org). Jul 16 00:46:11.657875 systemd-timesyncd[2697]: Initial clock synchronization to Wed 2025-07-16 00:46:11.538904 UTC. 
Jul 16 00:46:12.580860 containerd[2784]: time="2025-07-16T00:46:12.580818758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:12.581191 containerd[2784]: time="2025-07-16T00:46:12.580836609Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327781" Jul 16 00:46:12.581848 containerd[2784]: time="2025-07-16T00:46:12.581820978Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:12.584099 containerd[2784]: time="2025-07-16T00:46:12.584080850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:12.585080 containerd[2784]: time="2025-07-16T00:46:12.585032236Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.965990184s" Jul 16 00:46:12.585115 containerd[2784]: time="2025-07-16T00:46:12.585099187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Jul 16 00:46:12.585700 containerd[2784]: time="2025-07-16T00:46:12.585683504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Jul 16 00:46:12.785951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 16 00:46:12.787460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:46:12.933524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:12.936705 (kubelet)[3424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:46:12.968003 kubelet[3424]: E0716 00:46:12.967974 3424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:46:12.971005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:46:12.971131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:46:12.971477 systemd[1]: kubelet.service: Consumed 145ms CPU time, 116.4M memory peak. 
Jul 16 00:46:13.770548 containerd[2784]: time="2025-07-16T00:46:13.770515734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:13.770830 containerd[2784]: time="2025-07-16T00:46:13.770561453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529696" Jul 16 00:46:13.771457 containerd[2784]: time="2025-07-16T00:46:13.771438354Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:13.773716 containerd[2784]: time="2025-07-16T00:46:13.773694902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:13.774690 containerd[2784]: time="2025-07-16T00:46:13.774664503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.188954108s" Jul 16 00:46:13.774710 containerd[2784]: time="2025-07-16T00:46:13.774697904Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Jul 16 00:46:13.775015 containerd[2784]: time="2025-07-16T00:46:13.774993495Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Jul 16 00:46:14.639236 containerd[2784]: time="2025-07-16T00:46:14.639196697Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:14.639349 containerd[2784]: time="2025-07-16T00:46:14.639216193Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484138" Jul 16 00:46:14.640174 containerd[2784]: time="2025-07-16T00:46:14.640148477Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:14.642472 containerd[2784]: time="2025-07-16T00:46:14.642442073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:14.643403 containerd[2784]: time="2025-07-16T00:46:14.643375741Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 868.34697ms" Jul 16 00:46:14.643424 containerd[2784]: time="2025-07-16T00:46:14.643410659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Jul 16 00:46:14.643774 containerd[2784]: time="2025-07-16T00:46:14.643759569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Jul 16 00:46:15.620065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787431294.mount: Deactivated successfully. 
Jul 16 00:46:15.801717 containerd[2784]: time="2025-07-16T00:46:15.801684608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:15.801908 containerd[2784]: time="2025-07-16T00:46:15.801737358Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378405" Jul 16 00:46:15.802344 containerd[2784]: time="2025-07-16T00:46:15.802327075Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:15.803704 containerd[2784]: time="2025-07-16T00:46:15.803685947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:15.804286 containerd[2784]: time="2025-07-16T00:46:15.804264655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.160476714s" Jul 16 00:46:15.804316 containerd[2784]: time="2025-07-16T00:46:15.804292971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Jul 16 00:46:15.804644 containerd[2784]: time="2025-07-16T00:46:15.804630224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 00:46:16.162855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838258599.mount: Deactivated successfully. 
Jul 16 00:46:17.006690 containerd[2784]: time="2025-07-16T00:46:17.006647433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:17.006978 containerd[2784]: time="2025-07-16T00:46:17.006686295Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 16 00:46:17.007648 containerd[2784]: time="2025-07-16T00:46:17.007628983Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:17.010021 containerd[2784]: time="2025-07-16T00:46:17.009997295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:17.011031 containerd[2784]: time="2025-07-16T00:46:17.011016357Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.206364494s" Jul 16 00:46:17.011054 containerd[2784]: time="2025-07-16T00:46:17.011038904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 16 00:46:17.011478 containerd[2784]: time="2025-07-16T00:46:17.011458128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 16 00:46:17.217965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269882641.mount: Deactivated successfully. 
Jul 16 00:46:17.218333 containerd[2784]: time="2025-07-16T00:46:17.218307978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:46:17.218427 containerd[2784]: time="2025-07-16T00:46:17.218404676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 16 00:46:17.219064 containerd[2784]: time="2025-07-16T00:46:17.219047862Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:46:17.220651 containerd[2784]: time="2025-07-16T00:46:17.220630203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:46:17.221355 containerd[2784]: time="2025-07-16T00:46:17.221333686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 209.844993ms" Jul 16 00:46:17.221380 containerd[2784]: time="2025-07-16T00:46:17.221361036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 16 00:46:17.221683 containerd[2784]: time="2025-07-16T00:46:17.221671971Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 16 00:46:17.524560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610462993.mount: 
Deactivated successfully. Jul 16 00:46:20.531459 containerd[2784]: time="2025-07-16T00:46:20.531416208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:20.531803 containerd[2784]: time="2025-07-16T00:46:20.531486448Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Jul 16 00:46:20.532453 containerd[2784]: time="2025-07-16T00:46:20.532434747Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:20.535077 containerd[2784]: time="2025-07-16T00:46:20.535052766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:20.536198 containerd[2784]: time="2025-07-16T00:46:20.536169522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.314474818s" Jul 16 00:46:20.536227 containerd[2784]: time="2025-07-16T00:46:20.536205617Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 16 00:46:23.035972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 00:46:23.037565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:46:23.177781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 16 00:46:23.180958 (kubelet)[3663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:46:23.228744 kubelet[3663]: E0716 00:46:23.228715 3663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:46:23.231111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:46:23.231241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:46:23.233402 systemd[1]: kubelet.service: Consumed 141ms CPU time, 119.2M memory peak. Jul 16 00:46:26.943980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:26.944489 systemd[1]: kubelet.service: Consumed 141ms CPU time, 119.2M memory peak. Jul 16 00:46:26.947123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:46:26.967331 systemd[1]: Reload requested from client PID 3695 ('systemctl') (unit session-9.scope)... Jul 16 00:46:26.967341 systemd[1]: Reloading... Jul 16 00:46:27.044277 zram_generator::config[3740]: No configuration found. Jul 16 00:46:27.120827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:46:27.223017 systemd[1]: Reloading finished in 255 ms. Jul 16 00:46:27.279143 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 16 00:46:27.279225 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 16 00:46:27.279505 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 16 00:46:27.281139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:46:27.418979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:27.422248 (kubelet)[3804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:46:27.451635 kubelet[3804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:46:27.451635 kubelet[3804]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 16 00:46:27.451635 kubelet[3804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 16 00:46:27.451802 kubelet[3804]: I0716 00:46:27.451693 3804 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:46:28.269308 kubelet[3804]: I0716 00:46:28.269276 3804 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 16 00:46:28.269308 kubelet[3804]: I0716 00:46:28.269301 3804 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:46:28.269547 kubelet[3804]: I0716 00:46:28.269530 3804 server.go:954] "Client rotation is on, will bootstrap in background" Jul 16 00:46:28.292655 kubelet[3804]: E0716 00:46:28.292627 3804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.162.205:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.162.205:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:46:28.294105 kubelet[3804]: I0716 00:46:28.294079 3804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:46:28.298546 kubelet[3804]: I0716 00:46:28.298524 3804 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:46:28.319413 kubelet[3804]: I0716 00:46:28.319389 3804 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:46:28.319985 kubelet[3804]: I0716 00:46:28.319952 3804 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:46:28.320138 kubelet[3804]: I0716 00:46:28.319987 3804 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-8893f80933","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:46:28.320216 kubelet[3804]: I0716 00:46:28.320210 3804 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 16 00:46:28.320237 kubelet[3804]: I0716 00:46:28.320219 3804 container_manager_linux.go:304] "Creating device plugin manager" Jul 16 00:46:28.320436 kubelet[3804]: I0716 00:46:28.320426 3804 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:46:28.323165 kubelet[3804]: I0716 00:46:28.323148 3804 kubelet.go:446] "Attempting to sync node with API server" Jul 16 00:46:28.323208 kubelet[3804]: I0716 00:46:28.323168 3804 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:46:28.323208 kubelet[3804]: I0716 00:46:28.323190 3804 kubelet.go:352] "Adding apiserver pod source" Jul 16 00:46:28.323208 kubelet[3804]: I0716 00:46:28.323199 3804 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:46:28.337132 kubelet[3804]: W0716 00:46:28.337097 3804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.162.205:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.162.205:6443: connect: connection refused Jul 16 00:46:28.337177 kubelet[3804]: E0716 00:46:28.337163 3804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.162.205:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.162.205:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:46:28.338314 kubelet[3804]: I0716 00:46:28.338303 3804 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:46:28.338566 kubelet[3804]: W0716 00:46:28.338539 3804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.162.205:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-8893f80933&limit=500&resourceVersion=0": dial tcp 
147.28.162.205:6443: connect: connection refused Jul 16 00:46:28.338592 kubelet[3804]: E0716 00:46:28.338580 3804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.162.205:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-8893f80933&limit=500&resourceVersion=0\": dial tcp 147.28.162.205:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:46:28.338850 kubelet[3804]: I0716 00:46:28.338840 3804 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:46:28.338957 kubelet[3804]: W0716 00:46:28.338950 3804 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 00:46:28.339813 kubelet[3804]: I0716 00:46:28.339802 3804 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 16 00:46:28.339837 kubelet[3804]: I0716 00:46:28.339831 3804 server.go:1287] "Started kubelet" Jul 16 00:46:28.340020 kubelet[3804]: I0716 00:46:28.339876 3804 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:46:28.341853 kubelet[3804]: I0716 00:46:28.341835 3804 server.go:479] "Adding debug handlers to kubelet server" Jul 16 00:46:28.344899 kubelet[3804]: I0716 00:46:28.344668 3804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:46:28.345138 kubelet[3804]: I0716 00:46:28.345126 3804 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:46:28.346289 kubelet[3804]: E0716 00:46:28.346067 3804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.162.205:6443/api/v1/namespaces/default/events\": dial tcp 147.28.162.205:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-8893f80933.185294c736d40813 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-8893f80933,UID:ci-4372.0.1-n-8893f80933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-8893f80933,},FirstTimestamp:2025-07-16 00:46:28.339812371 +0000 UTC m=+0.914819562,LastTimestamp:2025-07-16 00:46:28.339812371 +0000 UTC m=+0.914819562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-8893f80933,}" Jul 16 00:46:28.346342 kubelet[3804]: E0716 00:46:28.346323 3804 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:46:28.346557 kubelet[3804]: I0716 00:46:28.346540 3804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:46:28.346628 kubelet[3804]: I0716 00:46:28.346615 3804 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 16 00:46:28.346628 kubelet[3804]: I0716 00:46:28.346619 3804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:46:28.346691 kubelet[3804]: I0716 00:46:28.346677 3804 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 16 00:46:28.346766 kubelet[3804]: E0716 00:46:28.346699 3804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-8893f80933\" not found" Jul 16 00:46:28.346766 kubelet[3804]: I0716 00:46:28.346730 3804 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:46:28.346953 kubelet[3804]: W0716 00:46:28.346920 3804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.162.205:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
147.28.162.205:6443: connect: connection refused Jul 16 00:46:28.346987 kubelet[3804]: E0716 00:46:28.346963 3804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.162.205:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.162.205:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:46:28.347304 kubelet[3804]: E0716 00:46:28.347279 3804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-8893f80933?timeout=10s\": dial tcp 147.28.162.205:6443: connect: connection refused" interval="200ms" Jul 16 00:46:28.347746 kubelet[3804]: I0716 00:46:28.347702 3804 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:46:28.347746 kubelet[3804]: I0716 00:46:28.347718 3804 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:46:28.347806 kubelet[3804]: I0716 00:46:28.347794 3804 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:46:28.359723 kubelet[3804]: I0716 00:46:28.359692 3804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:46:28.360769 kubelet[3804]: I0716 00:46:28.360757 3804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 16 00:46:28.360800 kubelet[3804]: I0716 00:46:28.360775 3804 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 16 00:46:28.360800 kubelet[3804]: I0716 00:46:28.360792 3804 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 16 00:46:28.360844 kubelet[3804]: I0716 00:46:28.360801 3804 kubelet.go:2382] "Starting kubelet main sync loop" Jul 16 00:46:28.360844 kubelet[3804]: I0716 00:46:28.360826 3804 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 16 00:46:28.360844 kubelet[3804]: I0716 00:46:28.360840 3804 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 16 00:46:28.360844 kubelet[3804]: E0716 00:46:28.360837 3804 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:46:28.360920 kubelet[3804]: I0716 00:46:28.360856 3804 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:46:28.361592 kubelet[3804]: I0716 00:46:28.361579 3804 policy_none.go:49] "None policy: Start" Jul 16 00:46:28.361616 kubelet[3804]: I0716 00:46:28.361597 3804 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 16 00:46:28.361616 kubelet[3804]: I0716 00:46:28.361608 3804 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:46:28.361718 kubelet[3804]: W0716 00:46:28.361684 3804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.162.205:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.162.205:6443: connect: connection refused Jul 16 00:46:28.361749 kubelet[3804]: E0716 00:46:28.361733 3804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.162.205:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.162.205:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:46:28.365557 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 16 00:46:28.389374 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 16 00:46:28.391998 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 16 00:46:28.410925 kubelet[3804]: I0716 00:46:28.410902 3804 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:46:28.411104 kubelet[3804]: I0716 00:46:28.411087 3804 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:46:28.411151 kubelet[3804]: I0716 00:46:28.411099 3804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:46:28.411250 kubelet[3804]: I0716 00:46:28.411232 3804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:46:28.411710 kubelet[3804]: E0716 00:46:28.411693 3804 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 16 00:46:28.411744 kubelet[3804]: E0716 00:46:28.411734 3804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-8893f80933\" not found" Jul 16 00:46:28.468311 systemd[1]: Created slice kubepods-burstable-poda759dfe9efede05cc9b6b41b5e98326f.slice - libcontainer container kubepods-burstable-poda759dfe9efede05cc9b6b41b5e98326f.slice. Jul 16 00:46:28.500489 kubelet[3804]: E0716 00:46:28.500453 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.502775 systemd[1]: Created slice kubepods-burstable-pod893d5a95e86b3fa51ef0792d6a1d84ab.slice - libcontainer container kubepods-burstable-pod893d5a95e86b3fa51ef0792d6a1d84ab.slice. 
Jul 16 00:46:28.512674 kubelet[3804]: I0716 00:46:28.512658 3804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.513020 kubelet[3804]: E0716 00:46:28.513000 3804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.162.205:6443/api/v1/nodes\": dial tcp 147.28.162.205:6443: connect: connection refused" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.525336 kubelet[3804]: E0716 00:46:28.525284 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.527351 systemd[1]: Created slice kubepods-burstable-pod3453060af685183ea4c5a83762a8ffe4.slice - libcontainer container kubepods-burstable-pod3453060af685183ea4c5a83762a8ffe4.slice. Jul 16 00:46:28.528620 kubelet[3804]: E0716 00:46:28.528596 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.547845 kubelet[3804]: E0716 00:46:28.547816 3804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-8893f80933?timeout=10s\": dial tcp 147.28.162.205:6443: connect: connection refused" interval="400ms" Jul 16 00:46:28.648078 kubelet[3804]: I0716 00:46:28.648052 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648132 kubelet[3804]: I0716 00:46:28.648082 3804 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648132 kubelet[3804]: I0716 00:46:28.648102 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648180 kubelet[3804]: I0716 00:46:28.648139 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3453060af685183ea4c5a83762a8ffe4-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-8893f80933\" (UID: \"3453060af685183ea4c5a83762a8ffe4\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648180 kubelet[3804]: I0716 00:46:28.648167 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648221 kubelet[3804]: I0716 00:46:28.648186 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " 
pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648221 kubelet[3804]: I0716 00:46:28.648205 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648260 kubelet[3804]: I0716 00:46:28.648223 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.648260 kubelet[3804]: I0716 00:46:28.648241 3804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.715438 kubelet[3804]: I0716 00:46:28.715415 3804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.715706 kubelet[3804]: E0716 00:46:28.715682 3804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.162.205:6443/api/v1/nodes\": dial tcp 147.28.162.205:6443: connect: connection refused" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:28.801738 containerd[2784]: time="2025-07-16T00:46:28.801711096Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-8893f80933,Uid:a759dfe9efede05cc9b6b41b5e98326f,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:28.811630 containerd[2784]: time="2025-07-16T00:46:28.811603929Z" level=info msg="connecting to shim d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5" address="unix:///run/containerd/s/fae7e59220b305342a4ac01a84de1c48425c3c249208a48176648705f98ea97a" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:28.826714 containerd[2784]: time="2025-07-16T00:46:28.826694557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-8893f80933,Uid:893d5a95e86b3fa51ef0792d6a1d84ab,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:28.830163 containerd[2784]: time="2025-07-16T00:46:28.830142824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-8893f80933,Uid:3453060af685183ea4c5a83762a8ffe4,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:28.835513 containerd[2784]: time="2025-07-16T00:46:28.835484486Z" level=info msg="connecting to shim e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c" address="unix:///run/containerd/s/2f97f80c126d7f1854edb9ea08aa5981f2da8bff342162c335af0a626458ef70" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:28.839749 containerd[2784]: time="2025-07-16T00:46:28.839717136Z" level=info msg="connecting to shim 362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed" address="unix:///run/containerd/s/c031e887ddedfcc0adb19d651ee2df7fc53bf9b99549fa9fd04a9a6a818b93d8" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:28.840416 systemd[1]: Started cri-containerd-d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5.scope - libcontainer container d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5. 
Jul 16 00:46:28.848644 systemd[1]: Started cri-containerd-e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c.scope - libcontainer container e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c. Jul 16 00:46:28.852163 systemd[1]: Started cri-containerd-362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed.scope - libcontainer container 362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed. Jul 16 00:46:28.871107 containerd[2784]: time="2025-07-16T00:46:28.871076444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-8893f80933,Uid:a759dfe9efede05cc9b6b41b5e98326f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5\"" Jul 16 00:46:28.872730 containerd[2784]: time="2025-07-16T00:46:28.872709537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-8893f80933,Uid:893d5a95e86b3fa51ef0792d6a1d84ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c\"" Jul 16 00:46:28.873893 containerd[2784]: time="2025-07-16T00:46:28.873870659Z" level=info msg="CreateContainer within sandbox \"d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 00:46:28.874317 containerd[2784]: time="2025-07-16T00:46:28.874296232Z" level=info msg="CreateContainer within sandbox \"e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 00:46:28.876291 containerd[2784]: time="2025-07-16T00:46:28.876259705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-8893f80933,Uid:3453060af685183ea4c5a83762a8ffe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed\"" Jul 
16 00:46:28.877701 containerd[2784]: time="2025-07-16T00:46:28.877682926Z" level=info msg="CreateContainer within sandbox \"362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 00:46:28.878676 containerd[2784]: time="2025-07-16T00:46:28.878656257Z" level=info msg="Container 63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:28.879268 containerd[2784]: time="2025-07-16T00:46:28.879244505Z" level=info msg="Container e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:28.881935 containerd[2784]: time="2025-07-16T00:46:28.881911064Z" level=info msg="Container e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:28.882957 containerd[2784]: time="2025-07-16T00:46:28.882932191Z" level=info msg="CreateContainer within sandbox \"e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7\"" Jul 16 00:46:28.883024 containerd[2784]: time="2025-07-16T00:46:28.883001190Z" level=info msg="CreateContainer within sandbox \"d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64\"" Jul 16 00:46:28.883347 containerd[2784]: time="2025-07-16T00:46:28.883325820Z" level=info msg="StartContainer for \"63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7\"" Jul 16 00:46:28.883406 containerd[2784]: time="2025-07-16T00:46:28.883325980Z" level=info msg="StartContainer for \"e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64\"" Jul 16 00:46:28.884382 
containerd[2784]: time="2025-07-16T00:46:28.884361961Z" level=info msg="connecting to shim 63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7" address="unix:///run/containerd/s/2f97f80c126d7f1854edb9ea08aa5981f2da8bff342162c335af0a626458ef70" protocol=ttrpc version=3 Jul 16 00:46:28.884422 containerd[2784]: time="2025-07-16T00:46:28.884402330Z" level=info msg="connecting to shim e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64" address="unix:///run/containerd/s/fae7e59220b305342a4ac01a84de1c48425c3c249208a48176648705f98ea97a" protocol=ttrpc version=3 Jul 16 00:46:28.884750 containerd[2784]: time="2025-07-16T00:46:28.884729795Z" level=info msg="CreateContainer within sandbox \"362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255\"" Jul 16 00:46:28.885015 containerd[2784]: time="2025-07-16T00:46:28.884997645Z" level=info msg="StartContainer for \"e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255\"" Jul 16 00:46:28.885983 containerd[2784]: time="2025-07-16T00:46:28.885961353Z" level=info msg="connecting to shim e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255" address="unix:///run/containerd/s/c031e887ddedfcc0adb19d651ee2df7fc53bf9b99549fa9fd04a9a6a818b93d8" protocol=ttrpc version=3 Jul 16 00:46:28.911398 systemd[1]: Started cri-containerd-63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7.scope - libcontainer container 63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7. Jul 16 00:46:28.912532 systemd[1]: Started cri-containerd-e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64.scope - libcontainer container e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64. 
Jul 16 00:46:28.913620 systemd[1]: Started cri-containerd-e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255.scope - libcontainer container e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255. Jul 16 00:46:28.939941 containerd[2784]: time="2025-07-16T00:46:28.939909206Z" level=info msg="StartContainer for \"63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7\" returns successfully" Jul 16 00:46:28.940810 containerd[2784]: time="2025-07-16T00:46:28.940788383Z" level=info msg="StartContainer for \"e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64\" returns successfully" Jul 16 00:46:28.941663 containerd[2784]: time="2025-07-16T00:46:28.941647595Z" level=info msg="StartContainer for \"e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255\" returns successfully" Jul 16 00:46:28.948326 kubelet[3804]: E0716 00:46:28.948299 3804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.162.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-8893f80933?timeout=10s\": dial tcp 147.28.162.205:6443: connect: connection refused" interval="800ms" Jul 16 00:46:29.117618 kubelet[3804]: I0716 00:46:29.117536 3804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:29.365271 kubelet[3804]: E0716 00:46:29.365231 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:29.366056 kubelet[3804]: E0716 00:46:29.366031 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:29.368670 kubelet[3804]: E0716 00:46:29.368584 3804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.247275 kubelet[3804]: E0716 00:46:30.247098 3804 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-n-8893f80933\" not found" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.297317 kubelet[3804]: E0716 00:46:30.297191 3804 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.1-n-8893f80933.185294c736d40813 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-8893f80933,UID:ci-4372.0.1-n-8893f80933,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-8893f80933,},FirstTimestamp:2025-07-16 00:46:28.339812371 +0000 UTC m=+0.914819562,LastTimestamp:2025-07-16 00:46:28.339812371 +0000 UTC m=+0.914819562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-8893f80933,}" Jul 16 00:46:30.324650 kubelet[3804]: I0716 00:46:30.324633 3804 apiserver.go:52] "Watching apiserver" Jul 16 00:46:30.347679 kubelet[3804]: I0716 00:46:30.347665 3804 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 16 00:46:30.350463 kubelet[3804]: I0716 00:46:30.350445 3804 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.366929 kubelet[3804]: I0716 00:46:30.366912 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.367068 kubelet[3804]: I0716 00:46:30.367049 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.367274 kubelet[3804]: I0716 
00:46:30.367248 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.377277 kubelet[3804]: E0716 00:46:30.376527 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.377277 kubelet[3804]: E0716 00:46:30.376714 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.377277 kubelet[3804]: E0716 00:46:30.376808 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.447404 kubelet[3804]: I0716 00:46:30.447374 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.448959 kubelet[3804]: E0716 00:46:30.448936 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.448959 kubelet[3804]: I0716 00:46:30.448957 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.450267 kubelet[3804]: E0716 00:46:30.450246 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.450312 kubelet[3804]: I0716 00:46:30.450259 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:30.451614 kubelet[3804]: E0716 00:46:30.451600 3804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-8893f80933\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:31.368433 kubelet[3804]: I0716 00:46:31.368377 3804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:31.371270 kubelet[3804]: W0716 00:46:31.371251 3804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:46:32.352932 systemd[1]: Reload requested from client PID 4213 ('systemctl') (unit session-9.scope)... Jul 16 00:46:32.352942 systemd[1]: Reloading... Jul 16 00:46:32.429277 zram_generator::config[4259]: No configuration found. Jul 16 00:46:32.505718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:46:32.618370 systemd[1]: Reloading finished in 265 ms. Jul 16 00:46:32.652544 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:46:32.673983 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 00:46:32.674226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:32.674300 systemd[1]: kubelet.service: Consumed 1.370s CPU time, 140.6M memory peak. Jul 16 00:46:32.675897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 16 00:46:32.809031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:46:32.812474 (kubelet)[4319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:46:32.842535 kubelet[4319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:46:32.842535 kubelet[4319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 16 00:46:32.842535 kubelet[4319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:46:32.842794 kubelet[4319]: I0716 00:46:32.842617 4319 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:46:32.847876 kubelet[4319]: I0716 00:46:32.847854 4319 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 16 00:46:32.847876 kubelet[4319]: I0716 00:46:32.847876 4319 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:46:32.848160 kubelet[4319]: I0716 00:46:32.848150 4319 server.go:954] "Client rotation is on, will bootstrap in background" Jul 16 00:46:32.849302 kubelet[4319]: I0716 00:46:32.849290 4319 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 16 00:46:32.856483 kubelet[4319]: I0716 00:46:32.856386 4319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:46:32.859778 kubelet[4319]: I0716 00:46:32.859764 4319 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:46:32.878626 kubelet[4319]: I0716 00:46:32.878556 4319 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 16 00:46:32.878761 kubelet[4319]: I0716 00:46:32.878730 4319 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:46:32.878934 kubelet[4319]: I0716 00:46:32.878756 4319 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-8893f80933","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:46:32.879005 kubelet[4319]: I0716 00:46:32.878944 4319 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:46:32.879005 kubelet[4319]: I0716 00:46:32.878953 4319 container_manager_linux.go:304] "Creating device plugin manager" Jul 16 00:46:32.879048 kubelet[4319]: I0716 00:46:32.879013 4319 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:46:32.879306 kubelet[4319]: I0716 00:46:32.879292 4319 kubelet.go:446] "Attempting to sync node with API server" Jul 16 00:46:32.879336 kubelet[4319]: I0716 00:46:32.879310 4319 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:46:32.879336 kubelet[4319]: I0716 00:46:32.879331 4319 kubelet.go:352] "Adding apiserver pod source" Jul 16 00:46:32.879374 kubelet[4319]: I0716 00:46:32.879341 4319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:46:32.879961 kubelet[4319]: I0716 00:46:32.879947 4319 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:46:32.880384 kubelet[4319]: I0716 00:46:32.880374 4319 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:46:32.880771 kubelet[4319]: I0716 00:46:32.880760 4319 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 16 00:46:32.880807 kubelet[4319]: I0716 00:46:32.880786 4319 server.go:1287] "Started kubelet" Jul 16 00:46:32.880879 kubelet[4319]: I0716 00:46:32.880845 4319 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:46:32.880955 kubelet[4319]: I0716 
00:46:32.880841 4319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:46:32.881141 kubelet[4319]: I0716 00:46:32.881128 4319 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:46:32.881821 kubelet[4319]: I0716 00:46:32.881808 4319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:46:32.881821 kubelet[4319]: I0716 00:46:32.881810 4319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:46:32.881932 kubelet[4319]: E0716 00:46:32.881911 4319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-8893f80933\" not found" Jul 16 00:46:32.881975 kubelet[4319]: I0716 00:46:32.881939 4319 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 16 00:46:32.881998 kubelet[4319]: I0716 00:46:32.881985 4319 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 16 00:46:32.882113 kubelet[4319]: I0716 00:46:32.882098 4319 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:46:32.882240 kubelet[4319]: I0716 00:46:32.882229 4319 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:46:32.882301 kubelet[4319]: E0716 00:46:32.882281 4319 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:46:32.882337 kubelet[4319]: I0716 00:46:32.882323 4319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:46:32.884099 kubelet[4319]: I0716 00:46:32.884084 4319 server.go:479] "Adding debug handlers to kubelet server" Jul 16 00:46:32.884391 kubelet[4319]: I0716 00:46:32.884376 4319 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:46:32.889414 kubelet[4319]: I0716 00:46:32.889382 4319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:46:32.890361 kubelet[4319]: I0716 00:46:32.890348 4319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 16 00:46:32.890392 kubelet[4319]: I0716 00:46:32.890367 4319 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 16 00:46:32.890392 kubelet[4319]: I0716 00:46:32.890381 4319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 16 00:46:32.890392 kubelet[4319]: I0716 00:46:32.890388 4319 kubelet.go:2382] "Starting kubelet main sync loop" Jul 16 00:46:32.890453 kubelet[4319]: E0716 00:46:32.890430 4319 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:46:32.913387 kubelet[4319]: I0716 00:46:32.913364 4319 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 16 00:46:32.913387 kubelet[4319]: I0716 00:46:32.913381 4319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 16 00:46:32.913499 kubelet[4319]: I0716 00:46:32.913401 4319 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:46:32.913581 kubelet[4319]: I0716 00:46:32.913546 4319 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 00:46:32.913581 kubelet[4319]: I0716 00:46:32.913557 4319 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 00:46:32.913581 kubelet[4319]: I0716 00:46:32.913577 4319 policy_none.go:49] "None policy: Start" Jul 16 00:46:32.913641 kubelet[4319]: I0716 00:46:32.913585 4319 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 16 00:46:32.913641 kubelet[4319]: I0716 00:46:32.913594 4319 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:46:32.913690 kubelet[4319]: I0716 00:46:32.913680 4319 state_mem.go:75] "Updated machine memory state" Jul 16 00:46:32.916576 kubelet[4319]: I0716 00:46:32.916562 4319 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:46:32.916728 kubelet[4319]: I0716 00:46:32.916719 4319 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:46:32.916764 kubelet[4319]: I0716 00:46:32.916732 4319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:46:32.916905 kubelet[4319]: I0716 00:46:32.916889 4319 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:46:32.917401 kubelet[4319]: E0716 00:46:32.917383 4319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 16 00:46:32.991337 kubelet[4319]: I0716 00:46:32.991313 4319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:32.991337 kubelet[4319]: I0716 00:46:32.991328 4319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:32.991407 kubelet[4319]: I0716 00:46:32.991379 4319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:32.994149 kubelet[4319]: W0716 00:46:32.994134 4319 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:46:32.994183 kubelet[4319]: W0716 00:46:32.994153 4319 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:46:32.994232 kubelet[4319]: W0716 00:46:32.994223 4319 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:46:32.994280 kubelet[4319]: E0716 00:46:32.994268 4319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-8893f80933\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.020006 kubelet[4319]: I0716 00:46:33.019986 4319 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.023735 kubelet[4319]: I0716 00:46:33.023720 4319 kubelet_node_status.go:124] "Node was 
previously registered" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.023792 kubelet[4319]: I0716 00:46:33.023781 4319 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183749 kubelet[4319]: I0716 00:46:33.183689 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3453060af685183ea4c5a83762a8ffe4-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-8893f80933\" (UID: \"3453060af685183ea4c5a83762a8ffe4\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183749 kubelet[4319]: I0716 00:46:33.183714 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183749 kubelet[4319]: I0716 00:46:33.183733 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183840 kubelet[4319]: I0716 00:46:33.183754 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183871 kubelet[4319]: I0716 00:46:33.183829 
4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183892 kubelet[4319]: I0716 00:46:33.183877 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a759dfe9efede05cc9b6b41b5e98326f-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-8893f80933\" (UID: \"a759dfe9efede05cc9b6b41b5e98326f\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.183923 kubelet[4319]: I0716 00:46:33.183907 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.184085 kubelet[4319]: I0716 00:46:33.183940 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: \"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.184085 kubelet[4319]: I0716 00:46:33.183969 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/893d5a95e86b3fa51ef0792d6a1d84ab-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-8893f80933\" (UID: 
\"893d5a95e86b3fa51ef0792d6a1d84ab\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.879659 kubelet[4319]: I0716 00:46:33.879635 4319 apiserver.go:52] "Watching apiserver" Jul 16 00:46:33.882741 kubelet[4319]: I0716 00:46:33.882720 4319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 16 00:46:33.898058 kubelet[4319]: I0716 00:46:33.898043 4319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.900859 kubelet[4319]: W0716 00:46:33.900848 4319 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:46:33.900900 kubelet[4319]: E0716 00:46:33.900888 4319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-n-8893f80933\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" Jul 16 00:46:33.924115 kubelet[4319]: I0716 00:46:33.924061 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-8893f80933" podStartSLOduration=2.924045988 podStartE2EDuration="2.924045988s" podCreationTimestamp="2025-07-16 00:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:46:33.918583862 +0000 UTC m=+1.103312712" watchObservedRunningTime="2025-07-16 00:46:33.924045988 +0000 UTC m=+1.108774838" Jul 16 00:46:33.924283 kubelet[4319]: I0716 00:46:33.924151 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-8893f80933" podStartSLOduration=1.924147337 podStartE2EDuration="1.924147337s" podCreationTimestamp="2025-07-16 00:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:46:33.92402105 +0000 UTC m=+1.108749900" watchObservedRunningTime="2025-07-16 00:46:33.924147337 +0000 UTC m=+1.108876147" Jul 16 00:46:33.929327 kubelet[4319]: I0716 00:46:33.929274 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-8893f80933" podStartSLOduration=1.9292539419999999 podStartE2EDuration="1.929253942s" podCreationTimestamp="2025-07-16 00:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:46:33.929205426 +0000 UTC m=+1.113934276" watchObservedRunningTime="2025-07-16 00:46:33.929253942 +0000 UTC m=+1.113982792" Jul 16 00:46:39.260365 kubelet[4319]: I0716 00:46:39.260336 4319 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 00:46:39.261001 containerd[2784]: time="2025-07-16T00:46:39.260911586Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 16 00:46:39.261240 kubelet[4319]: I0716 00:46:39.261061 4319 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 00:46:40.407765 systemd[1]: Created slice kubepods-besteffort-pod247a4e10_74ac_4e64_ba51_77cfe15262b5.slice - libcontainer container kubepods-besteffort-pod247a4e10_74ac_4e64_ba51_77cfe15262b5.slice. 
Jul 16 00:46:40.430376 kubelet[4319]: I0716 00:46:40.430339 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/247a4e10-74ac-4e64-ba51-77cfe15262b5-kube-proxy\") pod \"kube-proxy-rfgdq\" (UID: \"247a4e10-74ac-4e64-ba51-77cfe15262b5\") " pod="kube-system/kube-proxy-rfgdq" Jul 16 00:46:40.430692 kubelet[4319]: I0716 00:46:40.430393 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/247a4e10-74ac-4e64-ba51-77cfe15262b5-lib-modules\") pod \"kube-proxy-rfgdq\" (UID: \"247a4e10-74ac-4e64-ba51-77cfe15262b5\") " pod="kube-system/kube-proxy-rfgdq" Jul 16 00:46:40.430692 kubelet[4319]: I0716 00:46:40.430498 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5qb\" (UniqueName: \"kubernetes.io/projected/247a4e10-74ac-4e64-ba51-77cfe15262b5-kube-api-access-bj5qb\") pod \"kube-proxy-rfgdq\" (UID: \"247a4e10-74ac-4e64-ba51-77cfe15262b5\") " pod="kube-system/kube-proxy-rfgdq" Jul 16 00:46:40.430692 kubelet[4319]: I0716 00:46:40.430541 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/247a4e10-74ac-4e64-ba51-77cfe15262b5-xtables-lock\") pod \"kube-proxy-rfgdq\" (UID: \"247a4e10-74ac-4e64-ba51-77cfe15262b5\") " pod="kube-system/kube-proxy-rfgdq" Jul 16 00:46:40.527936 systemd[1]: Created slice kubepods-besteffort-pod2fd8d7f7_d26a_4f04_a438_c0910ee56ac1.slice - libcontainer container kubepods-besteffort-pod2fd8d7f7_d26a_4f04_a438_c0910ee56ac1.slice. 
Jul 16 00:46:40.531399 kubelet[4319]: I0716 00:46:40.531364 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2fd8d7f7-d26a-4f04-a438-c0910ee56ac1-var-lib-calico\") pod \"tigera-operator-747864d56d-m9fn6\" (UID: \"2fd8d7f7-d26a-4f04-a438-c0910ee56ac1\") " pod="tigera-operator/tigera-operator-747864d56d-m9fn6" Jul 16 00:46:40.531470 kubelet[4319]: I0716 00:46:40.531415 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvfz9\" (UniqueName: \"kubernetes.io/projected/2fd8d7f7-d26a-4f04-a438-c0910ee56ac1-kube-api-access-rvfz9\") pod \"tigera-operator-747864d56d-m9fn6\" (UID: \"2fd8d7f7-d26a-4f04-a438-c0910ee56ac1\") " pod="tigera-operator/tigera-operator-747864d56d-m9fn6" Jul 16 00:46:40.722853 containerd[2784]: time="2025-07-16T00:46:40.722671292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfgdq,Uid:247a4e10-74ac-4e64-ba51-77cfe15262b5,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:40.730515 containerd[2784]: time="2025-07-16T00:46:40.730489372Z" level=info msg="connecting to shim a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711" address="unix:///run/containerd/s/f9e0565d63ffbadf8179360654e239e6e78f5b3059d6bb53382b2f3f6f05803b" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:40.759448 systemd[1]: Started cri-containerd-a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711.scope - libcontainer container a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711. 
Jul 16 00:46:40.777084 containerd[2784]: time="2025-07-16T00:46:40.777053890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rfgdq,Uid:247a4e10-74ac-4e64-ba51-77cfe15262b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711\"" Jul 16 00:46:40.778991 containerd[2784]: time="2025-07-16T00:46:40.778967454Z" level=info msg="CreateContainer within sandbox \"a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 16 00:46:40.784253 containerd[2784]: time="2025-07-16T00:46:40.784218640Z" level=info msg="Container 68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:40.787940 containerd[2784]: time="2025-07-16T00:46:40.787915015Z" level=info msg="CreateContainer within sandbox \"a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f\"" Jul 16 00:46:40.788338 containerd[2784]: time="2025-07-16T00:46:40.788317353Z" level=info msg="StartContainer for \"68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f\"" Jul 16 00:46:40.789599 containerd[2784]: time="2025-07-16T00:46:40.789574549Z" level=info msg="connecting to shim 68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f" address="unix:///run/containerd/s/f9e0565d63ffbadf8179360654e239e6e78f5b3059d6bb53382b2f3f6f05803b" protocol=ttrpc version=3 Jul 16 00:46:40.813429 systemd[1]: Started cri-containerd-68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f.scope - libcontainer container 68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f. 
Jul 16 00:46:40.830602 containerd[2784]: time="2025-07-16T00:46:40.830562556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-m9fn6,Uid:2fd8d7f7-d26a-4f04-a438-c0910ee56ac1,Namespace:tigera-operator,Attempt:0,}" Jul 16 00:46:40.840008 containerd[2784]: time="2025-07-16T00:46:40.839974593Z" level=info msg="connecting to shim 2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5" address="unix:///run/containerd/s/f0f1e5d001fd5827d19de57c04f3f1c45dc54a8665bc9277f714600166328319" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:40.840933 containerd[2784]: time="2025-07-16T00:46:40.840912262Z" level=info msg="StartContainer for \"68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f\" returns successfully" Jul 16 00:46:40.869387 systemd[1]: Started cri-containerd-2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5.scope - libcontainer container 2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5. Jul 16 00:46:40.895603 containerd[2784]: time="2025-07-16T00:46:40.895568922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-m9fn6,Uid:2fd8d7f7-d26a-4f04-a438-c0910ee56ac1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5\"" Jul 16 00:46:40.896672 containerd[2784]: time="2025-07-16T00:46:40.896653420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 16 00:46:40.914882 kubelet[4319]: I0716 00:46:40.914832 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rfgdq" podStartSLOduration=0.914816806 podStartE2EDuration="914.816806ms" podCreationTimestamp="2025-07-16 00:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:46:40.914695409 +0000 UTC m=+8.099424299" watchObservedRunningTime="2025-07-16 
00:46:40.914816806 +0000 UTC m=+8.099545656" Jul 16 00:46:42.240247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464951675.mount: Deactivated successfully. Jul 16 00:46:43.148362 update_engine[2778]: I20250716 00:46:43.148295 2778 update_attempter.cc:509] Updating boot flags... Jul 16 00:46:43.808724 containerd[2784]: time="2025-07-16T00:46:43.808682380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:43.809096 containerd[2784]: time="2025-07-16T00:46:43.808701055Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 16 00:46:43.809403 containerd[2784]: time="2025-07-16T00:46:43.809385167Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:43.810963 containerd[2784]: time="2025-07-16T00:46:43.810944064Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:43.811631 containerd[2784]: time="2025-07-16T00:46:43.811606661Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.914926769s" Jul 16 00:46:43.811652 containerd[2784]: time="2025-07-16T00:46:43.811637893Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 16 00:46:43.813168 containerd[2784]: time="2025-07-16T00:46:43.813146922Z" level=info 
msg="CreateContainer within sandbox \"2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 16 00:46:43.816859 containerd[2784]: time="2025-07-16T00:46:43.816832735Z" level=info msg="Container 8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:43.819390 containerd[2784]: time="2025-07-16T00:46:43.819361793Z" level=info msg="CreateContainer within sandbox \"2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941\"" Jul 16 00:46:43.819730 containerd[2784]: time="2025-07-16T00:46:43.819708587Z" level=info msg="StartContainer for \"8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941\"" Jul 16 00:46:43.820417 containerd[2784]: time="2025-07-16T00:46:43.820396058Z" level=info msg="connecting to shim 8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941" address="unix:///run/containerd/s/f0f1e5d001fd5827d19de57c04f3f1c45dc54a8665bc9277f714600166328319" protocol=ttrpc version=3 Jul 16 00:46:43.850376 systemd[1]: Started cri-containerd-8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941.scope - libcontainer container 8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941. 
Jul 16 00:46:43.869834 containerd[2784]: time="2025-07-16T00:46:43.869811182Z" level=info msg="StartContainer for \"8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941\" returns successfully" Jul 16 00:46:43.933333 kubelet[4319]: I0716 00:46:43.933280 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-m9fn6" podStartSLOduration=1.017414786 podStartE2EDuration="3.933258053s" podCreationTimestamp="2025-07-16 00:46:40 +0000 UTC" firstStartedPulling="2025-07-16 00:46:40.896328574 +0000 UTC m=+8.081057424" lastFinishedPulling="2025-07-16 00:46:43.812171841 +0000 UTC m=+10.996900691" observedRunningTime="2025-07-16 00:46:43.932931413 +0000 UTC m=+11.117660263" watchObservedRunningTime="2025-07-16 00:46:43.933258053 +0000 UTC m=+11.117986863" Jul 16 00:46:48.636971 sudo[3063]: pam_unix(sudo:session): session closed for user root Jul 16 00:46:48.700612 sshd[3062]: Connection closed by 139.178.89.65 port 50198 Jul 16 00:46:48.699713 sshd-session[3059]: pam_unix(sshd:session): session closed for user core Jul 16 00:46:48.702774 systemd[1]: sshd@6-147.28.162.205:22-139.178.89.65:50198.service: Deactivated successfully. Jul 16 00:46:48.704642 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 00:46:48.704965 systemd[1]: session-9.scope: Consumed 8.859s CPU time, 243.9M memory peak. Jul 16 00:46:48.706101 systemd-logind[2768]: Session 9 logged out. Waiting for processes to exit. Jul 16 00:46:48.707089 systemd-logind[2768]: Removed session 9. Jul 16 00:46:52.874180 systemd[1]: Created slice kubepods-besteffort-pod8042099c_14cc_4cff_906c_e871dc9eedc6.slice - libcontainer container kubepods-besteffort-pod8042099c_14cc_4cff_906c_e871dc9eedc6.slice. 
Jul 16 00:46:52.907438 kubelet[4319]: I0716 00:46:52.907393 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8042099c-14cc-4cff-906c-e871dc9eedc6-tigera-ca-bundle\") pod \"calico-typha-7c47dfd576-grd45\" (UID: \"8042099c-14cc-4cff-906c-e871dc9eedc6\") " pod="calico-system/calico-typha-7c47dfd576-grd45" Jul 16 00:46:52.907739 kubelet[4319]: I0716 00:46:52.907451 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shlhq\" (UniqueName: \"kubernetes.io/projected/8042099c-14cc-4cff-906c-e871dc9eedc6-kube-api-access-shlhq\") pod \"calico-typha-7c47dfd576-grd45\" (UID: \"8042099c-14cc-4cff-906c-e871dc9eedc6\") " pod="calico-system/calico-typha-7c47dfd576-grd45" Jul 16 00:46:52.907739 kubelet[4319]: I0716 00:46:52.907486 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8042099c-14cc-4cff-906c-e871dc9eedc6-typha-certs\") pod \"calico-typha-7c47dfd576-grd45\" (UID: \"8042099c-14cc-4cff-906c-e871dc9eedc6\") " pod="calico-system/calico-typha-7c47dfd576-grd45" Jul 16 00:46:53.130653 systemd[1]: Created slice kubepods-besteffort-pod398a182b_d1c2_4a37_af81_36bf681c80eb.slice - libcontainer container kubepods-besteffort-pod398a182b_d1c2_4a37_af81_36bf681c80eb.slice. 
Jul 16 00:46:53.177149 containerd[2784]: time="2025-07-16T00:46:53.177111592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47dfd576-grd45,Uid:8042099c-14cc-4cff-906c-e871dc9eedc6,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:53.185957 containerd[2784]: time="2025-07-16T00:46:53.185929534Z" level=info msg="connecting to shim 336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875" address="unix:///run/containerd/s/4e7d0d4d8ef78cc0226d9b17b52135fa79f608199a38d177a70b85e2c88c8d48" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:53.210454 kubelet[4319]: I0716 00:46:53.210422 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-var-lib-calico\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210454 kubelet[4319]: I0716 00:46:53.210456 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-cni-log-dir\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210574 kubelet[4319]: I0716 00:46:53.210472 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-cni-net-dir\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210574 kubelet[4319]: I0716 00:46:53.210491 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-flexvol-driver-host\") 
pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210574 kubelet[4319]: I0716 00:46:53.210511 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-lib-modules\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210574 kubelet[4319]: I0716 00:46:53.210527 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/398a182b-d1c2-4a37-af81-36bf681c80eb-tigera-ca-bundle\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210699 kubelet[4319]: I0716 00:46:53.210614 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/398a182b-d1c2-4a37-af81-36bf681c80eb-node-certs\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210699 kubelet[4319]: I0716 00:46:53.210682 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-cni-bin-dir\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210741 kubelet[4319]: I0716 00:46:53.210707 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-policysync\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " 
pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210741 kubelet[4319]: I0716 00:46:53.210723 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-var-run-calico\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210783 kubelet[4319]: I0716 00:46:53.210740 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/398a182b-d1c2-4a37-af81-36bf681c80eb-xtables-lock\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.210783 kubelet[4319]: I0716 00:46:53.210757 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8pb7\" (UniqueName: \"kubernetes.io/projected/398a182b-d1c2-4a37-af81-36bf681c80eb-kube-api-access-p8pb7\") pod \"calico-node-gmm49\" (UID: \"398a182b-d1c2-4a37-af81-36bf681c80eb\") " pod="calico-system/calico-node-gmm49" Jul 16 00:46:53.213375 systemd[1]: Started cri-containerd-336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875.scope - libcontainer container 336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875. 
Jul 16 00:46:53.238436 containerd[2784]: time="2025-07-16T00:46:53.238409848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c47dfd576-grd45,Uid:8042099c-14cc-4cff-906c-e871dc9eedc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875\"" Jul 16 00:46:53.241468 containerd[2784]: time="2025-07-16T00:46:53.241440690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 00:46:53.311872 kubelet[4319]: E0716 00:46:53.311847 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.311872 kubelet[4319]: W0716 00:46:53.311864 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.311963 kubelet[4319]: E0716 00:46:53.311883 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.313622 kubelet[4319]: E0716 00:46:53.313601 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.313622 kubelet[4319]: W0716 00:46:53.313616 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.313690 kubelet[4319]: E0716 00:46:53.313630 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.319503 kubelet[4319]: E0716 00:46:53.319488 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.319503 kubelet[4319]: W0716 00:46:53.319500 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.319550 kubelet[4319]: E0716 00:46:53.319512 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.364202 kubelet[4319]: E0716 00:46:53.364171 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsmnd" podUID="7717274c-2eb9-44db-9dce-2e6914a9164e" Jul 16 00:46:53.402859 kubelet[4319]: E0716 00:46:53.402790 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.402859 kubelet[4319]: W0716 00:46:53.402805 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.402859 kubelet[4319]: E0716 00:46:53.402821 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.402984 kubelet[4319]: E0716 00:46:53.402973 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403035 kubelet[4319]: W0716 00:46:53.402980 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403035 kubelet[4319]: E0716 00:46:53.403016 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.403248 kubelet[4319]: E0716 00:46:53.403163 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403248 kubelet[4319]: W0716 00:46:53.403171 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403248 kubelet[4319]: E0716 00:46:53.403180 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.403438 kubelet[4319]: E0716 00:46:53.403394 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403438 kubelet[4319]: W0716 00:46:53.403401 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403438 kubelet[4319]: E0716 00:46:53.403408 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.403592 kubelet[4319]: E0716 00:46:53.403582 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403620 kubelet[4319]: W0716 00:46:53.403594 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403620 kubelet[4319]: E0716 00:46:53.403603 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.403744 kubelet[4319]: E0716 00:46:53.403734 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403744 kubelet[4319]: W0716 00:46:53.403742 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403796 kubelet[4319]: E0716 00:46:53.403750 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.403884 kubelet[4319]: E0716 00:46:53.403870 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.403884 kubelet[4319]: W0716 00:46:53.403878 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.403884 kubelet[4319]: E0716 00:46:53.403885 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.404067 kubelet[4319]: E0716 00:46:53.404047 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404067 kubelet[4319]: W0716 00:46:53.404054 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404067 kubelet[4319]: E0716 00:46:53.404061 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.404286 kubelet[4319]: E0716 00:46:53.404277 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404286 kubelet[4319]: W0716 00:46:53.404285 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404338 kubelet[4319]: E0716 00:46:53.404293 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.404453 kubelet[4319]: E0716 00:46:53.404445 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404453 kubelet[4319]: W0716 00:46:53.404452 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404499 kubelet[4319]: E0716 00:46:53.404459 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.404618 kubelet[4319]: E0716 00:46:53.404610 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404618 kubelet[4319]: W0716 00:46:53.404618 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404665 kubelet[4319]: E0716 00:46:53.404625 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.404794 kubelet[4319]: E0716 00:46:53.404786 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404794 kubelet[4319]: W0716 00:46:53.404793 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404843 kubelet[4319]: E0716 00:46:53.404801 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.404960 kubelet[4319]: E0716 00:46:53.404952 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.404987 kubelet[4319]: W0716 00:46:53.404962 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.404987 kubelet[4319]: E0716 00:46:53.404969 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.405146 kubelet[4319]: E0716 00:46:53.405138 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405146 kubelet[4319]: W0716 00:46:53.405145 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.405198 kubelet[4319]: E0716 00:46:53.405153 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.405365 kubelet[4319]: E0716 00:46:53.405357 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405365 kubelet[4319]: W0716 00:46:53.405365 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.405423 kubelet[4319]: E0716 00:46:53.405372 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.405518 kubelet[4319]: E0716 00:46:53.405508 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405518 kubelet[4319]: W0716 00:46:53.405516 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.405565 kubelet[4319]: E0716 00:46:53.405523 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.405679 kubelet[4319]: E0716 00:46:53.405667 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405679 kubelet[4319]: W0716 00:46:53.405675 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.405679 kubelet[4319]: E0716 00:46:53.405682 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.405834 kubelet[4319]: E0716 00:46:53.405824 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405834 kubelet[4319]: W0716 00:46:53.405832 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.405885 kubelet[4319]: E0716 00:46:53.405839 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.405985 kubelet[4319]: E0716 00:46:53.405976 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.405985 kubelet[4319]: W0716 00:46:53.405984 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.406037 kubelet[4319]: E0716 00:46:53.405991 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.406147 kubelet[4319]: E0716 00:46:53.406139 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.406147 kubelet[4319]: W0716 00:46:53.406146 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.406193 kubelet[4319]: E0716 00:46:53.406154 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.411406 kubelet[4319]: E0716 00:46:53.411387 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.411406 kubelet[4319]: W0716 00:46:53.411404 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.411474 kubelet[4319]: E0716 00:46:53.411418 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.411474 kubelet[4319]: I0716 00:46:53.411440 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7717274c-2eb9-44db-9dce-2e6914a9164e-kubelet-dir\") pod \"csi-node-driver-xsmnd\" (UID: \"7717274c-2eb9-44db-9dce-2e6914a9164e\") " pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:53.411594 kubelet[4319]: E0716 00:46:53.411583 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.411616 kubelet[4319]: W0716 00:46:53.411593 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.411616 kubelet[4319]: E0716 00:46:53.411604 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.411653 kubelet[4319]: I0716 00:46:53.411617 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8kg2\" (UniqueName: \"kubernetes.io/projected/7717274c-2eb9-44db-9dce-2e6914a9164e-kube-api-access-g8kg2\") pod \"csi-node-driver-xsmnd\" (UID: \"7717274c-2eb9-44db-9dce-2e6914a9164e\") " pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:53.411800 kubelet[4319]: E0716 00:46:53.411789 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.411800 kubelet[4319]: W0716 00:46:53.411798 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.411846 kubelet[4319]: E0716 00:46:53.411810 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.411846 kubelet[4319]: I0716 00:46:53.411824 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7717274c-2eb9-44db-9dce-2e6914a9164e-socket-dir\") pod \"csi-node-driver-xsmnd\" (UID: \"7717274c-2eb9-44db-9dce-2e6914a9164e\") " pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:53.411963 kubelet[4319]: E0716 00:46:53.411953 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.411963 kubelet[4319]: W0716 00:46:53.411962 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.412002 kubelet[4319]: E0716 00:46:53.411973 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.412002 kubelet[4319]: I0716 00:46:53.411987 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7717274c-2eb9-44db-9dce-2e6914a9164e-registration-dir\") pod \"csi-node-driver-xsmnd\" (UID: \"7717274c-2eb9-44db-9dce-2e6914a9164e\") " pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:53.412199 kubelet[4319]: E0716 00:46:53.412190 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.412219 kubelet[4319]: W0716 00:46:53.412199 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.412219 kubelet[4319]: E0716 00:46:53.412210 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.412256 kubelet[4319]: I0716 00:46:53.412223 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7717274c-2eb9-44db-9dce-2e6914a9164e-varrun\") pod \"csi-node-driver-xsmnd\" (UID: \"7717274c-2eb9-44db-9dce-2e6914a9164e\") " pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:53.412447 kubelet[4319]: E0716 00:46:53.412436 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.412470 kubelet[4319]: W0716 00:46:53.412446 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.412470 kubelet[4319]: E0716 00:46:53.412459 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.412623 kubelet[4319]: E0716 00:46:53.412614 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.412642 kubelet[4319]: W0716 00:46:53.412622 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.412665 kubelet[4319]: E0716 00:46:53.412644 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.412841 kubelet[4319]: E0716 00:46:53.412832 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.412862 kubelet[4319]: W0716 00:46:53.412840 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.412880 kubelet[4319]: E0716 00:46:53.412864 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.413016 kubelet[4319]: E0716 00:46:53.413008 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413038 kubelet[4319]: W0716 00:46:53.413016 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413038 kubelet[4319]: E0716 00:46:53.413033 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.413192 kubelet[4319]: E0716 00:46:53.413184 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413212 kubelet[4319]: W0716 00:46:53.413192 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413212 kubelet[4319]: E0716 00:46:53.413206 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.413364 kubelet[4319]: E0716 00:46:53.413355 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413364 kubelet[4319]: W0716 00:46:53.413363 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413406 kubelet[4319]: E0716 00:46:53.413377 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.413541 kubelet[4319]: E0716 00:46:53.413533 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413541 kubelet[4319]: W0716 00:46:53.413540 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413582 kubelet[4319]: E0716 00:46:53.413549 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.413758 kubelet[4319]: E0716 00:46:53.413748 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413758 kubelet[4319]: W0716 00:46:53.413757 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413798 kubelet[4319]: E0716 00:46:53.413764 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.413933 kubelet[4319]: E0716 00:46:53.413925 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.413956 kubelet[4319]: W0716 00:46:53.413934 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.413956 kubelet[4319]: E0716 00:46:53.413942 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.414076 kubelet[4319]: E0716 00:46:53.414068 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.414098 kubelet[4319]: W0716 00:46:53.414076 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.414098 kubelet[4319]: E0716 00:46:53.414083 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.432698 containerd[2784]: time="2025-07-16T00:46:53.432659677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmm49,Uid:398a182b-d1c2-4a37-af81-36bf681c80eb,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:53.440424 containerd[2784]: time="2025-07-16T00:46:53.440390236Z" level=info msg="connecting to shim 9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3" address="unix:///run/containerd/s/00de3e19e3a80321904d51ed5c98e07451e048b47f3fe30becc1dc3d797ce3d1" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:46:53.469394 systemd[1]: Started cri-containerd-9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3.scope - libcontainer container 9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3. Jul 16 00:46:53.487046 containerd[2784]: time="2025-07-16T00:46:53.487020494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmm49,Uid:398a182b-d1c2-4a37-af81-36bf681c80eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\"" Jul 16 00:46:53.513349 kubelet[4319]: E0716 00:46:53.513327 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.513349 kubelet[4319]: W0716 00:46:53.513344 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.513450 kubelet[4319]: E0716 00:46:53.513362 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.513619 kubelet[4319]: E0716 00:46:53.513610 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.513643 kubelet[4319]: W0716 00:46:53.513618 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.513643 kubelet[4319]: E0716 00:46:53.513630 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.513808 kubelet[4319]: E0716 00:46:53.513798 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.513831 kubelet[4319]: W0716 00:46:53.513810 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.513831 kubelet[4319]: E0716 00:46:53.513821 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.514009 kubelet[4319]: E0716 00:46:53.513999 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514031 kubelet[4319]: W0716 00:46:53.514009 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514031 kubelet[4319]: E0716 00:46:53.514019 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.514205 kubelet[4319]: E0716 00:46:53.514196 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514228 kubelet[4319]: W0716 00:46:53.514206 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514228 kubelet[4319]: E0716 00:46:53.514217 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.514442 kubelet[4319]: E0716 00:46:53.514432 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514469 kubelet[4319]: W0716 00:46:53.514442 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514469 kubelet[4319]: E0716 00:46:53.514454 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.514594 kubelet[4319]: E0716 00:46:53.514585 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514594 kubelet[4319]: W0716 00:46:53.514593 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514637 kubelet[4319]: E0716 00:46:53.514618 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.514720 kubelet[4319]: E0716 00:46:53.514712 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514740 kubelet[4319]: W0716 00:46:53.514720 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514740 kubelet[4319]: E0716 00:46:53.514735 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.514883 kubelet[4319]: E0716 00:46:53.514875 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.514905 kubelet[4319]: W0716 00:46:53.514883 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.514905 kubelet[4319]: E0716 00:46:53.514894 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.515027 kubelet[4319]: E0716 00:46:53.515019 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.515050 kubelet[4319]: W0716 00:46:53.515027 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.515050 kubelet[4319]: E0716 00:46:53.515037 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.515267 kubelet[4319]: E0716 00:46:53.515253 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.515288 kubelet[4319]: W0716 00:46:53.515270 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.515288 kubelet[4319]: E0716 00:46:53.515285 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.515455 kubelet[4319]: E0716 00:46:53.515447 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.515476 kubelet[4319]: W0716 00:46:53.515455 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.515497 kubelet[4319]: E0716 00:46:53.515478 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.515597 kubelet[4319]: E0716 00:46:53.515588 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.515618 kubelet[4319]: W0716 00:46:53.515596 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.515618 kubelet[4319]: E0716 00:46:53.515612 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.515771 kubelet[4319]: E0716 00:46:53.515763 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.515793 kubelet[4319]: W0716 00:46:53.515771 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.515793 kubelet[4319]: E0716 00:46:53.515782 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.515992 kubelet[4319]: E0716 00:46:53.515984 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516012 kubelet[4319]: W0716 00:46:53.515992 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516031 kubelet[4319]: E0716 00:46:53.516012 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.516190 kubelet[4319]: E0716 00:46:53.516182 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516212 kubelet[4319]: W0716 00:46:53.516190 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516212 kubelet[4319]: E0716 00:46:53.516206 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.516364 kubelet[4319]: E0716 00:46:53.516356 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516384 kubelet[4319]: W0716 00:46:53.516364 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516384 kubelet[4319]: E0716 00:46:53.516379 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.516496 kubelet[4319]: E0716 00:46:53.516489 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516520 kubelet[4319]: W0716 00:46:53.516496 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516520 kubelet[4319]: E0716 00:46:53.516507 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.516706 kubelet[4319]: E0716 00:46:53.516697 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516726 kubelet[4319]: W0716 00:46:53.516706 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516726 kubelet[4319]: E0716 00:46:53.516716 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.516853 kubelet[4319]: E0716 00:46:53.516845 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.516873 kubelet[4319]: W0716 00:46:53.516853 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.516873 kubelet[4319]: E0716 00:46:53.516863 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.516994 kubelet[4319]: E0716 00:46:53.516985 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.517014 kubelet[4319]: W0716 00:46:53.516994 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.517014 kubelet[4319]: E0716 00:46:53.517004 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.517148 kubelet[4319]: E0716 00:46:53.517140 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.517170 kubelet[4319]: W0716 00:46:53.517147 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.517170 kubelet[4319]: E0716 00:46:53.517157 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.517381 kubelet[4319]: E0716 00:46:53.517369 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.517405 kubelet[4319]: W0716 00:46:53.517381 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.517405 kubelet[4319]: E0716 00:46:53.517395 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.517596 kubelet[4319]: E0716 00:46:53.517588 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.517617 kubelet[4319]: W0716 00:46:53.517597 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.517617 kubelet[4319]: E0716 00:46:53.517608 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.517885 kubelet[4319]: E0716 00:46:53.517874 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.517906 kubelet[4319]: W0716 00:46:53.517886 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.517906 kubelet[4319]: E0716 00:46:53.517896 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:46:53.526141 kubelet[4319]: E0716 00:46:53.526124 4319 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:46:53.526141 kubelet[4319]: W0716 00:46:53.526137 4319 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:46:53.526200 kubelet[4319]: E0716 00:46:53.526149 4319 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:46:53.971124 containerd[2784]: time="2025-07-16T00:46:53.971079509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:53.971376 containerd[2784]: time="2025-07-16T00:46:53.971155385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 16 00:46:53.971758 containerd[2784]: time="2025-07-16T00:46:53.971739555Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:53.973264 containerd[2784]: time="2025-07-16T00:46:53.973240557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:53.973923 containerd[2784]: time="2025-07-16T00:46:53.973904243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 732.429314ms" Jul 16 00:46:53.973958 containerd[2784]: time="2025-07-16T00:46:53.973928961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 16 00:46:53.974628 containerd[2784]: time="2025-07-16T00:46:53.974605006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 16 00:46:53.979135 containerd[2784]: time="2025-07-16T00:46:53.979112612Z" level=info msg="CreateContainer within sandbox \"336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 16 00:46:53.983601 containerd[2784]: time="2025-07-16T00:46:53.983553701Z" level=info msg="Container 5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:53.988010 containerd[2784]: time="2025-07-16T00:46:53.987980671Z" level=info msg="CreateContainer within sandbox \"336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861\"" Jul 16 00:46:53.988326 containerd[2784]: time="2025-07-16T00:46:53.988308414Z" level=info msg="StartContainer for \"5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861\"" Jul 16 00:46:53.989237 containerd[2784]: time="2025-07-16T00:46:53.989217687Z" level=info msg="connecting to shim 5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861" address="unix:///run/containerd/s/4e7d0d4d8ef78cc0226d9b17b52135fa79f608199a38d177a70b85e2c88c8d48" protocol=ttrpc version=3 Jul 16 00:46:54.010442 systemd[1]: Started cri-containerd-5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861.scope - libcontainer 
container 5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861. Jul 16 00:46:54.040918 containerd[2784]: time="2025-07-16T00:46:54.040885668Z" level=info msg="StartContainer for \"5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861\" returns successfully" Jul 16 00:46:54.321343 containerd[2784]: time="2025-07-16T00:46:54.321302846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:54.321639 containerd[2784]: time="2025-07-16T00:46:54.321350124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 16 00:46:54.321952 containerd[2784]: time="2025-07-16T00:46:54.321933575Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:54.323379 containerd[2784]: time="2025-07-16T00:46:54.323361545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:54.323955 containerd[2784]: time="2025-07-16T00:46:54.323934837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 349.301192ms" Jul 16 00:46:54.323979 containerd[2784]: time="2025-07-16T00:46:54.323960955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference 
\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 16 00:46:54.325359 containerd[2784]: time="2025-07-16T00:46:54.325341527Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 16 00:46:54.329566 containerd[2784]: time="2025-07-16T00:46:54.329546520Z" level=info msg="Container a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:54.333318 containerd[2784]: time="2025-07-16T00:46:54.333293895Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\"" Jul 16 00:46:54.333603 containerd[2784]: time="2025-07-16T00:46:54.333584881Z" level=info msg="StartContainer for \"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\"" Jul 16 00:46:54.334813 containerd[2784]: time="2025-07-16T00:46:54.334794821Z" level=info msg="connecting to shim a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27" address="unix:///run/containerd/s/00de3e19e3a80321904d51ed5c98e07451e048b47f3fe30becc1dc3d797ce3d1" protocol=ttrpc version=3 Jul 16 00:46:54.355450 systemd[1]: Started cri-containerd-a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27.scope - libcontainer container a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27. Jul 16 00:46:54.381577 containerd[2784]: time="2025-07-16T00:46:54.381553197Z" level=info msg="StartContainer for \"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\" returns successfully" Jul 16 00:46:54.391442 systemd[1]: cri-containerd-a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27.scope: Deactivated successfully. 
Jul 16 00:46:54.393406 containerd[2784]: time="2025-07-16T00:46:54.393380254Z" level=info msg="received exit event container_id:\"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\" id:\"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\" pid:5317 exited_at:{seconds:1752626814 nanos:393107787}" Jul 16 00:46:54.393494 containerd[2784]: time="2025-07-16T00:46:54.393467289Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\" id:\"a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27\" pid:5317 exited_at:{seconds:1752626814 nanos:393107787}" Jul 16 00:46:54.408527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27-rootfs.mount: Deactivated successfully. Jul 16 00:46:54.891633 kubelet[4319]: E0716 00:46:54.891601 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsmnd" podUID="7717274c-2eb9-44db-9dce-2e6914a9164e" Jul 16 00:46:54.929950 containerd[2784]: time="2025-07-16T00:46:54.929915007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 16 00:46:54.955500 kubelet[4319]: I0716 00:46:54.955454 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c47dfd576-grd45" podStartSLOduration=2.220816909 podStartE2EDuration="2.955437309s" podCreationTimestamp="2025-07-16 00:46:52 +0000 UTC" firstStartedPulling="2025-07-16 00:46:53.239847813 +0000 UTC m=+20.424576623" lastFinishedPulling="2025-07-16 00:46:53.974468173 +0000 UTC m=+21.159197023" observedRunningTime="2025-07-16 00:46:54.955275277 +0000 UTC m=+22.140004127" watchObservedRunningTime="2025-07-16 00:46:54.955437309 +0000 UTC 
m=+22.140166119" Jul 16 00:46:55.932187 kubelet[4319]: I0716 00:46:55.932155 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:46:56.102230 containerd[2784]: time="2025-07-16T00:46:56.102196159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:56.102587 containerd[2784]: time="2025-07-16T00:46:56.102212478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 16 00:46:56.102833 containerd[2784]: time="2025-07-16T00:46:56.102814731Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:56.104448 containerd[2784]: time="2025-07-16T00:46:56.104429819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:56.106601 containerd[2784]: time="2025-07-16T00:46:56.106537166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.176567601s" Jul 16 00:46:56.107017 containerd[2784]: time="2025-07-16T00:46:56.106993905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 16 00:46:56.108585 containerd[2784]: time="2025-07-16T00:46:56.108556836Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 16 00:46:56.112904 containerd[2784]: time="2025-07-16T00:46:56.112883083Z" level=info msg="Container c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:56.130434 containerd[2784]: time="2025-07-16T00:46:56.130406904Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\"" Jul 16 00:46:56.130710 containerd[2784]: time="2025-07-16T00:46:56.130685932Z" level=info msg="StartContainer for \"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\"" Jul 16 00:46:56.132002 containerd[2784]: time="2025-07-16T00:46:56.131974474Z" level=info msg="connecting to shim c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead" address="unix:///run/containerd/s/00de3e19e3a80321904d51ed5c98e07451e048b47f3fe30becc1dc3d797ce3d1" protocol=ttrpc version=3 Jul 16 00:46:56.163376 systemd[1]: Started cri-containerd-c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead.scope - libcontainer container c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead. 
Jul 16 00:46:56.190522 containerd[2784]: time="2025-07-16T00:46:56.190453594Z" level=info msg="StartContainer for \"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\" returns successfully" Jul 16 00:46:56.576080 containerd[2784]: time="2025-07-16T00:46:56.576022487Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:46:56.577918 systemd[1]: cri-containerd-c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead.scope: Deactivated successfully. Jul 16 00:46:56.578341 systemd[1]: cri-containerd-c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead.scope: Consumed 975ms CPU time, 199.3M memory peak, 165.8M written to disk. Jul 16 00:46:56.579363 containerd[2784]: time="2025-07-16T00:46:56.579323660Z" level=info msg="received exit event container_id:\"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\" id:\"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\" pid:5382 exited_at:{seconds:1752626816 nanos:579135829}" Jul 16 00:46:56.579603 containerd[2784]: time="2025-07-16T00:46:56.579394217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\" id:\"c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead\" pid:5382 exited_at:{seconds:1752626816 nanos:579135829}" Jul 16 00:46:56.594550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead-rootfs.mount: Deactivated successfully. 
Jul 16 00:46:56.664386 kubelet[4319]: I0716 00:46:56.664354 4319 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 16 00:46:56.685250 systemd[1]: Created slice kubepods-besteffort-pod2da18249_deb2_4cdd_bb19_cbba4e2755b7.slice - libcontainer container kubepods-besteffort-pod2da18249_deb2_4cdd_bb19_cbba4e2755b7.slice. Jul 16 00:46:56.689392 systemd[1]: Created slice kubepods-besteffort-podde1352e7_7881_4e9e_8429_56fa32567373.slice - libcontainer container kubepods-besteffort-podde1352e7_7881_4e9e_8429_56fa32567373.slice. Jul 16 00:46:56.692959 systemd[1]: Created slice kubepods-burstable-pod90d031fa_ea59_464d_98af_0980e1b42263.slice - libcontainer container kubepods-burstable-pod90d031fa_ea59_464d_98af_0980e1b42263.slice. Jul 16 00:46:56.696961 systemd[1]: Created slice kubepods-besteffort-pod2fb7dd78_c188_4dd0_a051_40138c1ed92e.slice - libcontainer container kubepods-besteffort-pod2fb7dd78_c188_4dd0_a051_40138c1ed92e.slice. Jul 16 00:46:56.700896 systemd[1]: Created slice kubepods-besteffort-pod9c201598_6c58_44df_8298_234b75650d58.slice - libcontainer container kubepods-besteffort-pod9c201598_6c58_44df_8298_234b75650d58.slice. Jul 16 00:46:56.704662 systemd[1]: Created slice kubepods-burstable-pod2b48aa09_a83c_49a6_859a_b1b7384550f0.slice - libcontainer container kubepods-burstable-pod2b48aa09_a83c_49a6_859a_b1b7384550f0.slice. Jul 16 00:46:56.707621 systemd[1]: Created slice kubepods-besteffort-pod21ea9c3d_bc28_47e5_85bf_b25d451d9290.slice - libcontainer container kubepods-besteffort-pod21ea9c3d_bc28_47e5_85bf_b25d451d9290.slice. 
Jul 16 00:46:56.733880 kubelet[4319]: I0716 00:46:56.733826 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90d031fa-ea59-464d-98af-0980e1b42263-config-volume\") pod \"coredns-668d6bf9bc-qhtkz\" (UID: \"90d031fa-ea59-464d-98af-0980e1b42263\") " pod="kube-system/coredns-668d6bf9bc-qhtkz" Jul 16 00:46:56.734031 kubelet[4319]: I0716 00:46:56.733884 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpcmm\" (UniqueName: \"kubernetes.io/projected/2b48aa09-a83c-49a6-859a-b1b7384550f0-kube-api-access-vpcmm\") pod \"coredns-668d6bf9bc-ttk4s\" (UID: \"2b48aa09-a83c-49a6-859a-b1b7384550f0\") " pod="kube-system/coredns-668d6bf9bc-ttk4s" Jul 16 00:46:56.734031 kubelet[4319]: I0716 00:46:56.733939 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c201598-6c58-44df-8298-234b75650d58-calico-apiserver-certs\") pod \"calico-apiserver-85f985586b-sdjzf\" (UID: \"9c201598-6c58-44df-8298-234b75650d58\") " pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" Jul 16 00:46:56.734031 kubelet[4319]: I0716 00:46:56.733972 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbdf4\" (UniqueName: \"kubernetes.io/projected/9c201598-6c58-44df-8298-234b75650d58-kube-api-access-cbdf4\") pod \"calico-apiserver-85f985586b-sdjzf\" (UID: \"9c201598-6c58-44df-8298-234b75650d58\") " pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" Jul 16 00:46:56.734031 kubelet[4319]: I0716 00:46:56.733994 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn62f\" (UniqueName: \"kubernetes.io/projected/90d031fa-ea59-464d-98af-0980e1b42263-kube-api-access-nn62f\") pod 
\"coredns-668d6bf9bc-qhtkz\" (UID: \"90d031fa-ea59-464d-98af-0980e1b42263\") " pod="kube-system/coredns-668d6bf9bc-qhtkz" Jul 16 00:46:56.734031 kubelet[4319]: I0716 00:46:56.734012 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xlj\" (UniqueName: \"kubernetes.io/projected/21ea9c3d-bc28-47e5-85bf-b25d451d9290-kube-api-access-q6xlj\") pod \"whisker-6f688dbd74-dg67d\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " pod="calico-system/whisker-6f688dbd74-dg67d" Jul 16 00:46:56.734165 kubelet[4319]: I0716 00:46:56.734033 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de1352e7-7881-4e9e-8429-56fa32567373-calico-apiserver-certs\") pod \"calico-apiserver-85f985586b-4kwjk\" (UID: \"de1352e7-7881-4e9e-8429-56fa32567373\") " pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" Jul 16 00:46:56.734165 kubelet[4319]: I0716 00:46:56.734053 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fb7dd78-c188-4dd0-a051-40138c1ed92e-config\") pod \"goldmane-768f4c5c69-54wb2\" (UID: \"2fb7dd78-c188-4dd0-a051-40138c1ed92e\") " pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:56.734165 kubelet[4319]: I0716 00:46:56.734071 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2fb7dd78-c188-4dd0-a051-40138c1ed92e-goldmane-key-pair\") pod \"goldmane-768f4c5c69-54wb2\" (UID: \"2fb7dd78-c188-4dd0-a051-40138c1ed92e\") " pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:56.734165 kubelet[4319]: I0716 00:46:56.734086 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-backend-key-pair\") pod \"whisker-6f688dbd74-dg67d\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " pod="calico-system/whisker-6f688dbd74-dg67d" Jul 16 00:46:56.734165 kubelet[4319]: I0716 00:46:56.734102 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-ca-bundle\") pod \"whisker-6f688dbd74-dg67d\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " pod="calico-system/whisker-6f688dbd74-dg67d" Jul 16 00:46:56.734287 kubelet[4319]: I0716 00:46:56.734120 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5w2r\" (UniqueName: \"kubernetes.io/projected/de1352e7-7881-4e9e-8429-56fa32567373-kube-api-access-b5w2r\") pod \"calico-apiserver-85f985586b-4kwjk\" (UID: \"de1352e7-7881-4e9e-8429-56fa32567373\") " pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" Jul 16 00:46:56.734287 kubelet[4319]: I0716 00:46:56.734139 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b48aa09-a83c-49a6-859a-b1b7384550f0-config-volume\") pod \"coredns-668d6bf9bc-ttk4s\" (UID: \"2b48aa09-a83c-49a6-859a-b1b7384550f0\") " pod="kube-system/coredns-668d6bf9bc-ttk4s" Jul 16 00:46:56.734287 kubelet[4319]: I0716 00:46:56.734156 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5jf\" (UniqueName: \"kubernetes.io/projected/2da18249-deb2-4cdd-bb19-cbba4e2755b7-kube-api-access-bt5jf\") pod \"calico-kube-controllers-5976c699f5-jqrlm\" (UID: \"2da18249-deb2-4cdd-bb19-cbba4e2755b7\") " pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" Jul 16 00:46:56.734287 kubelet[4319]: I0716 00:46:56.734189 4319 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdp5\" (UniqueName: \"kubernetes.io/projected/2fb7dd78-c188-4dd0-a051-40138c1ed92e-kube-api-access-5tdp5\") pod \"goldmane-768f4c5c69-54wb2\" (UID: \"2fb7dd78-c188-4dd0-a051-40138c1ed92e\") " pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:56.734376 kubelet[4319]: I0716 00:46:56.734287 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2da18249-deb2-4cdd-bb19-cbba4e2755b7-tigera-ca-bundle\") pod \"calico-kube-controllers-5976c699f5-jqrlm\" (UID: \"2da18249-deb2-4cdd-bb19-cbba4e2755b7\") " pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" Jul 16 00:46:56.734376 kubelet[4319]: I0716 00:46:56.734328 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fb7dd78-c188-4dd0-a051-40138c1ed92e-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-54wb2\" (UID: \"2fb7dd78-c188-4dd0-a051-40138c1ed92e\") " pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:56.895992 systemd[1]: Created slice kubepods-besteffort-pod7717274c_2eb9_44db_9dce_2e6914a9164e.slice - libcontainer container kubepods-besteffort-pod7717274c_2eb9_44db_9dce_2e6914a9164e.slice. 
Jul 16 00:46:56.897694 containerd[2784]: time="2025-07-16T00:46:56.897663384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsmnd,Uid:7717274c-2eb9-44db-9dce-2e6914a9164e,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:56.936429 containerd[2784]: time="2025-07-16T00:46:56.936399461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 16 00:46:56.955316 containerd[2784]: time="2025-07-16T00:46:56.955271222Z" level=error msg="Failed to destroy network for sandbox \"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:56.955698 containerd[2784]: time="2025-07-16T00:46:56.955668724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsmnd,Uid:7717274c-2eb9-44db-9dce-2e6914a9164e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:56.955948 kubelet[4319]: E0716 00:46:56.955892 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:56.956195 kubelet[4319]: E0716 00:46:56.955991 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:56.956195 kubelet[4319]: E0716 00:46:56.956023 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsmnd" Jul 16 00:46:56.956195 kubelet[4319]: E0716 00:46:56.956079 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xsmnd_calico-system(7717274c-2eb9-44db-9dce-2e6914a9164e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xsmnd_calico-system(7717274c-2eb9-44db-9dce-2e6914a9164e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ff8d42ded3a855d10d4a5ea11944c7afd5ce585fc9f119b742a103cf66530f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xsmnd" podUID="7717274c-2eb9-44db-9dce-2e6914a9164e" Jul 16 00:46:56.987735 containerd[2784]: time="2025-07-16T00:46:56.987706659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5976c699f5-jqrlm,Uid:2da18249-deb2-4cdd-bb19-cbba4e2755b7,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:56.992265 containerd[2784]: time="2025-07-16T00:46:56.992239818Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-85f985586b-4kwjk,Uid:de1352e7-7881-4e9e-8429-56fa32567373,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:46:56.995714 containerd[2784]: time="2025-07-16T00:46:56.995688984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qhtkz,Uid:90d031fa-ea59-464d-98af-0980e1b42263,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:57.000174 containerd[2784]: time="2025-07-16T00:46:57.000146746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-54wb2,Uid:2fb7dd78-c188-4dd0-a051-40138c1ed92e,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:57.003740 containerd[2784]: time="2025-07-16T00:46:57.003711635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-sdjzf,Uid:9c201598-6c58-44df-8298-234b75650d58,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:46:57.007290 containerd[2784]: time="2025-07-16T00:46:57.007253805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttk4s,Uid:2b48aa09-a83c-49a6-859a-b1b7384550f0,Namespace:kube-system,Attempt:0,}" Jul 16 00:46:57.009733 containerd[2784]: time="2025-07-16T00:46:57.009704302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f688dbd74-dg67d,Uid:21ea9c3d-bc28-47e5-85bf-b25d451d9290,Namespace:calico-system,Attempt:0,}" Jul 16 00:46:57.030272 containerd[2784]: time="2025-07-16T00:46:57.030212515Z" level=error msg="Failed to destroy network for sandbox \"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.030669 containerd[2784]: time="2025-07-16T00:46:57.030638297Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5976c699f5-jqrlm,Uid:2da18249-deb2-4cdd-bb19-cbba4e2755b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.030884 kubelet[4319]: E0716 00:46:57.030846 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.030927 kubelet[4319]: E0716 00:46:57.030910 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" Jul 16 00:46:57.030951 kubelet[4319]: E0716 00:46:57.030929 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" Jul 16 00:46:57.030992 kubelet[4319]: E0716 
00:46:57.030969 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5976c699f5-jqrlm_calico-system(2da18249-deb2-4cdd-bb19-cbba4e2755b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5976c699f5-jqrlm_calico-system(2da18249-deb2-4cdd-bb19-cbba4e2755b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"659d74ab0e9e47ee1c0d9b79214b90f51226cb367f6526d23671d7f8c660a381\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" podUID="2da18249-deb2-4cdd-bb19-cbba4e2755b7" Jul 16 00:46:57.031706 containerd[2784]: time="2025-07-16T00:46:57.031672813Z" level=error msg="Failed to destroy network for sandbox \"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.032046 containerd[2784]: time="2025-07-16T00:46:57.032021118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-4kwjk,Uid:de1352e7-7881-4e9e-8429-56fa32567373,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.032184 kubelet[4319]: E0716 00:46:57.032159 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.032212 kubelet[4319]: E0716 00:46:57.032203 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" Jul 16 00:46:57.032238 kubelet[4319]: E0716 00:46:57.032221 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" Jul 16 00:46:57.032290 kubelet[4319]: E0716 00:46:57.032253 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85f985586b-4kwjk_calico-apiserver(de1352e7-7881-4e9e-8429-56fa32567373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85f985586b-4kwjk_calico-apiserver(de1352e7-7881-4e9e-8429-56fa32567373)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a2a0ced0c1b1252b1b72d11fd522a7759829cd4717f45ec0138efcdea8328cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" podUID="de1352e7-7881-4e9e-8429-56fa32567373" Jul 16 00:46:57.037637 containerd[2784]: time="2025-07-16T00:46:57.037592363Z" level=error msg="Failed to destroy network for sandbox \"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.038240 containerd[2784]: time="2025-07-16T00:46:57.038173058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qhtkz,Uid:90d031fa-ea59-464d-98af-0980e1b42263,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.038484 kubelet[4319]: E0716 00:46:57.038452 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.038529 kubelet[4319]: E0716 00:46:57.038506 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qhtkz" Jul 16 00:46:57.038553 kubelet[4319]: E0716 00:46:57.038526 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qhtkz" Jul 16 00:46:57.038587 kubelet[4319]: E0716 00:46:57.038566 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qhtkz_kube-system(90d031fa-ea59-464d-98af-0980e1b42263)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qhtkz_kube-system(90d031fa-ea59-464d-98af-0980e1b42263)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ad64634f54c21751cd22712cd92675d1facc50d30ee7fa3754b984b896b56c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qhtkz" podUID="90d031fa-ea59-464d-98af-0980e1b42263" Jul 16 00:46:57.041791 containerd[2784]: time="2025-07-16T00:46:57.041760666Z" level=error msg="Failed to destroy network for sandbox \"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.042111 containerd[2784]: time="2025-07-16T00:46:57.042087372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-54wb2,Uid:2fb7dd78-c188-4dd0-a051-40138c1ed92e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.042296 kubelet[4319]: E0716 00:46:57.042258 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.042336 kubelet[4319]: E0716 00:46:57.042313 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:57.042361 kubelet[4319]: E0716 00:46:57.042333 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-54wb2" Jul 16 00:46:57.042395 kubelet[4319]: E0716 00:46:57.042369 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-54wb2_calico-system(2fb7dd78-c188-4dd0-a051-40138c1ed92e)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-54wb2_calico-system(2fb7dd78-c188-4dd0-a051-40138c1ed92e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e57742345cc16a726db164277607acfdd184d51f119579a1d0699930fc851c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-54wb2" podUID="2fb7dd78-c188-4dd0-a051-40138c1ed92e" Jul 16 00:46:57.044593 containerd[2784]: time="2025-07-16T00:46:57.044554588Z" level=error msg="Failed to destroy network for sandbox \"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.044933 containerd[2784]: time="2025-07-16T00:46:57.044900414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-sdjzf,Uid:9c201598-6c58-44df-8298-234b75650d58,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.045069 kubelet[4319]: E0716 00:46:57.045038 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 
00:46:57.045101 kubelet[4319]: E0716 00:46:57.045089 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" Jul 16 00:46:57.045130 kubelet[4319]: E0716 00:46:57.045107 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" Jul 16 00:46:57.045163 kubelet[4319]: E0716 00:46:57.045143 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85f985586b-sdjzf_calico-apiserver(9c201598-6c58-44df-8298-234b75650d58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85f985586b-sdjzf_calico-apiserver(9c201598-6c58-44df-8298-234b75650d58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c34ed7776cb82ebeb4138bf5b78ccc6fc72b6f60292e7f8697163746ceefc80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" podUID="9c201598-6c58-44df-8298-234b75650d58" Jul 16 00:46:57.048407 containerd[2784]: time="2025-07-16T00:46:57.048250712Z" level=error msg="Failed to destroy network for sandbox 
\"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.048653 containerd[2784]: time="2025-07-16T00:46:57.048627456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttk4s,Uid:2b48aa09-a83c-49a6-859a-b1b7384550f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.048760 kubelet[4319]: E0716 00:46:57.048741 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.048799 kubelet[4319]: E0716 00:46:57.048771 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ttk4s" Jul 16 00:46:57.048799 kubelet[4319]: E0716 00:46:57.048785 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ttk4s" Jul 16 00:46:57.048838 kubelet[4319]: E0716 00:46:57.048815 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ttk4s_kube-system(2b48aa09-a83c-49a6-859a-b1b7384550f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ttk4s_kube-system(2b48aa09-a83c-49a6-859a-b1b7384550f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f063fb250a6093a96509893c8ff41921f67a810e1d75a024a1da3f1c08fc5e16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ttk4s" podUID="2b48aa09-a83c-49a6-859a-b1b7384550f0" Jul 16 00:46:57.051979 containerd[2784]: time="2025-07-16T00:46:57.051939996Z" level=error msg="Failed to destroy network for sandbox \"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.052330 containerd[2784]: time="2025-07-16T00:46:57.052305260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f688dbd74-dg67d,Uid:21ea9c3d-bc28-47e5-85bf-b25d451d9290,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.052426 kubelet[4319]: E0716 00:46:57.052409 4319 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:46:57.052456 kubelet[4319]: E0716 00:46:57.052440 4319 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f688dbd74-dg67d" Jul 16 00:46:57.052482 kubelet[4319]: E0716 00:46:57.052454 4319 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6f688dbd74-dg67d" Jul 16 00:46:57.052502 kubelet[4319]: E0716 00:46:57.052481 4319 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6f688dbd74-dg67d_calico-system(21ea9c3d-bc28-47e5-85bf-b25d451d9290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6f688dbd74-dg67d_calico-system(21ea9c3d-bc28-47e5-85bf-b25d451d9290)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6217c559590af78aab077e345f11de8598d3a6a0530e15f257a81f7863274404\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6f688dbd74-dg67d" podUID="21ea9c3d-bc28-47e5-85bf-b25d451d9290" Jul 16 00:46:59.560813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080515545.mount: Deactivated successfully. Jul 16 00:46:59.577460 containerd[2784]: time="2025-07-16T00:46:59.577420550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:59.577706 containerd[2784]: time="2025-07-16T00:46:59.577459269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 16 00:46:59.578079 containerd[2784]: time="2025-07-16T00:46:59.578061646Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:59.579394 containerd[2784]: time="2025-07-16T00:46:59.579375995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:46:59.579980 containerd[2784]: time="2025-07-16T00:46:59.579955013Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 2.643521114s" Jul 16 00:46:59.580029 containerd[2784]: time="2025-07-16T00:46:59.579985092Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 16 00:46:59.585222 containerd[2784]: time="2025-07-16T00:46:59.585199892Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 00:46:59.602281 containerd[2784]: time="2025-07-16T00:46:59.602220160Z" level=info msg="Container 7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:46:59.610856 containerd[2784]: time="2025-07-16T00:46:59.610821271Z" level=info msg="CreateContainer within sandbox \"9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\"" Jul 16 00:46:59.611188 containerd[2784]: time="2025-07-16T00:46:59.611164897Z" level=info msg="StartContainer for \"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\"" Jul 16 00:46:59.612562 containerd[2784]: time="2025-07-16T00:46:59.612535325Z" level=info msg="connecting to shim 7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4" address="unix:///run/containerd/s/00de3e19e3a80321904d51ed5c98e07451e048b47f3fe30becc1dc3d797ce3d1" protocol=ttrpc version=3 Jul 16 00:46:59.643494 systemd[1]: Started cri-containerd-7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4.scope - libcontainer container 7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4. Jul 16 00:46:59.674827 containerd[2784]: time="2025-07-16T00:46:59.674795220Z" level=info msg="StartContainer for \"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" returns successfully" Jul 16 00:46:59.811298 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 16 00:46:59.811430 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 16 00:46:59.956116 kubelet[4319]: I0716 00:46:59.956052 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gmm49" podStartSLOduration=0.863333906 podStartE2EDuration="6.956036844s" podCreationTimestamp="2025-07-16 00:46:53 +0000 UTC" firstStartedPulling="2025-07-16 00:46:53.487772455 +0000 UTC m=+20.672501305" lastFinishedPulling="2025-07-16 00:46:59.580475393 +0000 UTC m=+26.765204243" observedRunningTime="2025-07-16 00:46:59.954819851 +0000 UTC m=+27.139548701" watchObservedRunningTime="2025-07-16 00:46:59.956036844 +0000 UTC m=+27.140765694" Jul 16 00:47:00.050656 kubelet[4319]: I0716 00:47:00.050628 4319 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-backend-key-pair\") pod \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " Jul 16 00:47:00.050656 kubelet[4319]: I0716 00:47:00.050661 4319 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-ca-bundle\") pod \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " Jul 16 00:47:00.050797 kubelet[4319]: I0716 00:47:00.050687 4319 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xlj\" (UniqueName: \"kubernetes.io/projected/21ea9c3d-bc28-47e5-85bf-b25d451d9290-kube-api-access-q6xlj\") pod \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\" (UID: \"21ea9c3d-bc28-47e5-85bf-b25d451d9290\") " Jul 16 00:47:00.051018 kubelet[4319]: I0716 00:47:00.050995 4319 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "21ea9c3d-bc28-47e5-85bf-b25d451d9290" (UID: "21ea9c3d-bc28-47e5-85bf-b25d451d9290"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 16 00:47:00.052857 kubelet[4319]: I0716 00:47:00.052832 4319 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ea9c3d-bc28-47e5-85bf-b25d451d9290-kube-api-access-q6xlj" (OuterVolumeSpecName: "kube-api-access-q6xlj") pod "21ea9c3d-bc28-47e5-85bf-b25d451d9290" (UID: "21ea9c3d-bc28-47e5-85bf-b25d451d9290"). InnerVolumeSpecName "kube-api-access-q6xlj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 16 00:47:00.052912 kubelet[4319]: I0716 00:47:00.052887 4319 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "21ea9c3d-bc28-47e5-85bf-b25d451d9290" (UID: "21ea9c3d-bc28-47e5-85bf-b25d451d9290"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 16 00:47:00.151675 kubelet[4319]: I0716 00:47:00.151646 4319 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-backend-key-pair\") on node \"ci-4372.0.1-n-8893f80933\" DevicePath \"\"" Jul 16 00:47:00.151675 kubelet[4319]: I0716 00:47:00.151666 4319 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21ea9c3d-bc28-47e5-85bf-b25d451d9290-whisker-ca-bundle\") on node \"ci-4372.0.1-n-8893f80933\" DevicePath \"\"" Jul 16 00:47:00.151675 kubelet[4319]: I0716 00:47:00.151677 4319 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6xlj\" (UniqueName: \"kubernetes.io/projected/21ea9c3d-bc28-47e5-85bf-b25d451d9290-kube-api-access-q6xlj\") on node \"ci-4372.0.1-n-8893f80933\" DevicePath \"\"" Jul 16 00:47:00.561641 systemd[1]: var-lib-kubelet-pods-21ea9c3d\x2dbc28\x2d47e5\x2d85bf\x2db25d451d9290-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6xlj.mount: Deactivated successfully. Jul 16 00:47:00.561723 systemd[1]: var-lib-kubelet-pods-21ea9c3d\x2dbc28\x2d47e5\x2d85bf\x2db25d451d9290-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 16 00:47:00.896422 systemd[1]: Removed slice kubepods-besteffort-pod21ea9c3d_bc28_47e5_85bf_b25d451d9290.slice - libcontainer container kubepods-besteffort-pod21ea9c3d_bc28_47e5_85bf_b25d451d9290.slice. Jul 16 00:47:00.945442 kubelet[4319]: I0716 00:47:00.945424 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:00.974978 systemd[1]: Created slice kubepods-besteffort-podd3dfd459_e8e4_4d16_bcc4_a60f9bcbef7a.slice - libcontainer container kubepods-besteffort-podd3dfd459_e8e4_4d16_bcc4_a60f9bcbef7a.slice. 
Jul 16 00:47:01.057062 kubelet[4319]: I0716 00:47:01.057019 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a-whisker-backend-key-pair\") pod \"whisker-f688987bb-lkvft\" (UID: \"d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a\") " pod="calico-system/whisker-f688987bb-lkvft" Jul 16 00:47:01.057410 kubelet[4319]: I0716 00:47:01.057097 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a-whisker-ca-bundle\") pod \"whisker-f688987bb-lkvft\" (UID: \"d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a\") " pod="calico-system/whisker-f688987bb-lkvft" Jul 16 00:47:01.057410 kubelet[4319]: I0716 00:47:01.057320 4319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6wnn\" (UniqueName: \"kubernetes.io/projected/d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a-kube-api-access-r6wnn\") pod \"whisker-f688987bb-lkvft\" (UID: \"d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a\") " pod="calico-system/whisker-f688987bb-lkvft" Jul 16 00:47:01.277406 containerd[2784]: time="2025-07-16T00:47:01.277290433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f688987bb-lkvft,Uid:d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a,Namespace:calico-system,Attempt:0,}" Jul 16 00:47:01.379420 systemd-networkd[2695]: caliaeee7948cd2: Link UP Jul 16 00:47:01.379572 systemd-networkd[2695]: caliaeee7948cd2: Gained carrier Jul 16 00:47:01.386713 containerd[2784]: 2025-07-16 00:47:01.294 [INFO][6143] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:47:01.386713 containerd[2784]: 2025-07-16 00:47:01.309 [INFO][6143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0 whisker-f688987bb- calico-system d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a 867 0 2025-07-16 00:47:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f688987bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 whisker-f688987bb-lkvft eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliaeee7948cd2 [] [] }} ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-" Jul 16 00:47:01.386713 containerd[2784]: 2025-07-16 00:47:01.309 [INFO][6143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.386713 containerd[2784]: 2025-07-16 00:47:01.346 [INFO][6169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" HandleID="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Workload="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.346 [INFO][6169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" HandleID="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Workload="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b7e20), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4372.0.1-n-8893f80933", "pod":"whisker-f688987bb-lkvft", "timestamp":"2025-07-16 00:47:01.346740694 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.346 [INFO][6169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.346 [INFO][6169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.347 [INFO][6169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.355 [INFO][6169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.358 [INFO][6169] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.361 [INFO][6169] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.363 [INFO][6169] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.386886 containerd[2784]: 2025-07-16 00:47:01.364 [INFO][6169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.364 [INFO][6169] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 
handle="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.365 [INFO][6169] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516 Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.368 [INFO][6169] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.371 [INFO][6169] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.1/26] block=192.168.106.0/26 handle="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.371 [INFO][6169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.1/26] handle="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.371 [INFO][6169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:47:01.387062 containerd[2784]: 2025-07-16 00:47:01.371 [INFO][6169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.1/26] IPv6=[] ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" HandleID="k8s-pod-network.373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Workload="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.387181 containerd[2784]: 2025-07-16 00:47:01.374 [INFO][6143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0", GenerateName:"whisker-f688987bb-", Namespace:"calico-system", SelfLink:"", UID:"d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f688987bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"whisker-f688987bb-lkvft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"caliaeee7948cd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:01.387181 containerd[2784]: 2025-07-16 00:47:01.374 [INFO][6143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.1/32] ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.387246 containerd[2784]: 2025-07-16 00:47:01.374 [INFO][6143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeee7948cd2 ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.387246 containerd[2784]: 2025-07-16 00:47:01.379 [INFO][6143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.387360 containerd[2784]: 2025-07-16 00:47:01.379 [INFO][6143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0", GenerateName:"whisker-f688987bb-", Namespace:"calico-system", SelfLink:"", UID:"d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a", 
ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f688987bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516", Pod:"whisker-f688987bb-lkvft", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.106.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliaeee7948cd2", MAC:"2e:86:c6:ba:6d:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:01.387413 containerd[2784]: 2025-07-16 00:47:01.385 [INFO][6143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" Namespace="calico-system" Pod="whisker-f688987bb-lkvft" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-whisker--f688987bb--lkvft-eth0" Jul 16 00:47:01.396734 containerd[2784]: time="2025-07-16T00:47:01.396704155Z" level=info msg="connecting to shim 373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516" address="unix:///run/containerd/s/441bfa8efd46f72f18cb2a33bcc2467a1a9e4e2193c972f5551cd9c5e834d6f1" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:01.419385 systemd[1]: Started cri-containerd-373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516.scope - libcontainer container 
373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516. Jul 16 00:47:01.455813 containerd[2784]: time="2025-07-16T00:47:01.455781697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f688987bb-lkvft,Uid:d3dfd459-e8e4-4d16-bcc4-a60f9bcbef7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516\"" Jul 16 00:47:01.456843 containerd[2784]: time="2025-07-16T00:47:01.456821661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 16 00:47:01.818131 containerd[2784]: time="2025-07-16T00:47:01.818095761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:01.818193 containerd[2784]: time="2025-07-16T00:47:01.818149199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 16 00:47:01.818807 containerd[2784]: time="2025-07-16T00:47:01.818780537Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:01.820407 containerd[2784]: time="2025-07-16T00:47:01.820380761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:01.821014 containerd[2784]: time="2025-07-16T00:47:01.820990340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 364.14236ms" Jul 16 00:47:01.821037 containerd[2784]: 
time="2025-07-16T00:47:01.821017259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 16 00:47:01.822583 containerd[2784]: time="2025-07-16T00:47:01.822565925Z" level=info msg="CreateContainer within sandbox \"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 16 00:47:01.825701 containerd[2784]: time="2025-07-16T00:47:01.825677737Z" level=info msg="Container f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:01.841082 containerd[2784]: time="2025-07-16T00:47:01.841047562Z" level=info msg="CreateContainer within sandbox \"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6\"" Jul 16 00:47:01.841493 containerd[2784]: time="2025-07-16T00:47:01.841470627Z" level=info msg="StartContainer for \"f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6\"" Jul 16 00:47:01.842512 containerd[2784]: time="2025-07-16T00:47:01.842490551Z" level=info msg="connecting to shim f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6" address="unix:///run/containerd/s/441bfa8efd46f72f18cb2a33bcc2467a1a9e4e2193c972f5551cd9c5e834d6f1" protocol=ttrpc version=3 Jul 16 00:47:01.871444 systemd[1]: Started cri-containerd-f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6.scope - libcontainer container f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6. 
Jul 16 00:47:01.899557 containerd[2784]: time="2025-07-16T00:47:01.899529485Z" level=info msg="StartContainer for \"f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6\" returns successfully" Jul 16 00:47:01.900309 containerd[2784]: time="2025-07-16T00:47:01.900290019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 16 00:47:02.547133 containerd[2784]: time="2025-07-16T00:47:02.547089602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:02.547489 containerd[2784]: time="2025-07-16T00:47:02.547131961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 16 00:47:02.547840 containerd[2784]: time="2025-07-16T00:47:02.547815178Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:02.549479 containerd[2784]: time="2025-07-16T00:47:02.549459843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:02.550180 containerd[2784]: time="2025-07-16T00:47:02.550156700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 649.840082ms" Jul 16 00:47:02.550205 containerd[2784]: time="2025-07-16T00:47:02.550187659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference 
\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 16 00:47:02.551838 containerd[2784]: time="2025-07-16T00:47:02.551820445Z" level=info msg="CreateContainer within sandbox \"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 16 00:47:02.555486 containerd[2784]: time="2025-07-16T00:47:02.555456324Z" level=info msg="Container 6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:02.559092 containerd[2784]: time="2025-07-16T00:47:02.559067524Z" level=info msg="CreateContainer within sandbox \"373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef\"" Jul 16 00:47:02.559437 containerd[2784]: time="2025-07-16T00:47:02.559417712Z" level=info msg="StartContainer for \"6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef\"" Jul 16 00:47:02.560369 containerd[2784]: time="2025-07-16T00:47:02.560348521Z" level=info msg="connecting to shim 6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef" address="unix:///run/containerd/s/441bfa8efd46f72f18cb2a33bcc2467a1a9e4e2193c972f5551cd9c5e834d6f1" protocol=ttrpc version=3 Jul 16 00:47:02.560968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301257561.mount: Deactivated successfully. Jul 16 00:47:02.588384 systemd[1]: Started cri-containerd-6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef.scope - libcontainer container 6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef. 
Jul 16 00:47:02.616647 containerd[2784]: time="2025-07-16T00:47:02.616610091Z" level=info msg="StartContainer for \"6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef\" returns successfully" Jul 16 00:47:02.893559 kubelet[4319]: I0716 00:47:02.893467 4319 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ea9c3d-bc28-47e5-85bf-b25d451d9290" path="/var/lib/kubelet/pods/21ea9c3d-bc28-47e5-85bf-b25d451d9290/volumes" Jul 16 00:47:03.063036 kubelet[4319]: I0716 00:47:03.063002 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:03.071656 kubelet[4319]: I0716 00:47:03.071610 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f688987bb-lkvft" podStartSLOduration=1.977446823 podStartE2EDuration="3.071597074s" podCreationTimestamp="2025-07-16 00:47:00 +0000 UTC" firstStartedPulling="2025-07-16 00:47:01.456633028 +0000 UTC m=+28.641361838" lastFinishedPulling="2025-07-16 00:47:02.550783239 +0000 UTC m=+29.735512089" observedRunningTime="2025-07-16 00:47:02.959103307 +0000 UTC m=+30.143832157" watchObservedRunningTime="2025-07-16 00:47:03.071597074 +0000 UTC m=+30.256325924" Jul 16 00:47:03.138368 systemd-networkd[2695]: caliaeee7948cd2: Gained IPv6LL Jul 16 00:47:03.378615 systemd-networkd[2695]: vxlan.calico: Link UP Jul 16 00:47:03.378619 systemd-networkd[2695]: vxlan.calico: Gained carrier Jul 16 00:47:04.482330 systemd-networkd[2695]: vxlan.calico: Gained IPv6LL Jul 16 00:47:07.891937 containerd[2784]: time="2025-07-16T00:47:07.891876812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttk4s,Uid:2b48aa09-a83c-49a6-859a-b1b7384550f0,Namespace:kube-system,Attempt:0,}" Jul 16 00:47:07.891937 containerd[2784]: time="2025-07-16T00:47:07.891908251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-4kwjk,Uid:de1352e7-7881-4e9e-8429-56fa32567373,Namespace:calico-apiserver,Attempt:0,}" Jul 
16 00:47:07.892390 containerd[2784]: time="2025-07-16T00:47:07.891996889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsmnd,Uid:7717274c-2eb9-44db-9dce-2e6914a9164e,Namespace:calico-system,Attempt:0,}" Jul 16 00:47:07.968548 systemd-networkd[2695]: cali175db470ca4: Link UP Jul 16 00:47:07.968782 systemd-networkd[2695]: cali175db470ca4: Gained carrier Jul 16 00:47:07.990724 containerd[2784]: 2025-07-16 00:47:07.921 [INFO][6834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0 csi-node-driver- calico-system 7717274c-2eb9-44db-9dce-2e6914a9164e 697 0 2025-07-16 00:46:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 csi-node-driver-xsmnd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali175db470ca4 [] [] }} ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-" Jul 16 00:47:07.990724 containerd[2784]: 2025-07-16 00:47:07.921 [INFO][6834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.990724 containerd[2784]: 2025-07-16 00:47:07.941 [INFO][6907] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" 
HandleID="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Workload="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.941 [INFO][6907] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" HandleID="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Workload="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-8893f80933", "pod":"csi-node-driver-xsmnd", "timestamp":"2025-07-16 00:47:07.941555726 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.941 [INFO][6907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.941 [INFO][6907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.941 [INFO][6907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.949 [INFO][6907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.952 [INFO][6907] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.955 [INFO][6907] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.957 [INFO][6907] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.990849 containerd[2784]: 2025-07-16 00:47:07.958 [INFO][6907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.958 [INFO][6907] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.959 [INFO][6907] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.961 [INFO][6907] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6907] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.2/26] block=192.168.106.0/26 handle="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.2/26] handle="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:07.991026 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6907] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.2/26] IPv6=[] ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" HandleID="k8s-pod-network.ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Workload="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.991159 containerd[2784]: 2025-07-16 00:47:07.967 [INFO][6834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7717274c-2eb9-44db-9dce-2e6914a9164e", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"csi-node-driver-xsmnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali175db470ca4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:07.991213 containerd[2784]: 2025-07-16 00:47:07.967 [INFO][6834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.2/32] ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.991213 containerd[2784]: 2025-07-16 00:47:07.967 [INFO][6834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali175db470ca4 ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.991213 containerd[2784]: 2025-07-16 00:47:07.968 [INFO][6834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.991280 
containerd[2784]: 2025-07-16 00:47:07.969 [INFO][6834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7717274c-2eb9-44db-9dce-2e6914a9164e", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d", Pod:"csi-node-driver-xsmnd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali175db470ca4", MAC:"72:1a:8e:05:8b:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:07.991323 containerd[2784]: 2025-07-16 
00:47:07.989 [INFO][6834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" Namespace="calico-system" Pod="csi-node-driver-xsmnd" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-csi--node--driver--xsmnd-eth0" Jul 16 00:47:07.999845 containerd[2784]: time="2025-07-16T00:47:07.999810691Z" level=info msg="connecting to shim ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d" address="unix:///run/containerd/s/46205f7bd01df34f160bef69870a705d20e46c3e2a1d15028beba1371f34ae22" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:08.029435 systemd[1]: Started cri-containerd-ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d.scope - libcontainer container ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d. Jul 16 00:47:08.047145 containerd[2784]: time="2025-07-16T00:47:08.047120238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsmnd,Uid:7717274c-2eb9-44db-9dce-2e6914a9164e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d\"" Jul 16 00:47:08.048191 containerd[2784]: time="2025-07-16T00:47:08.048171691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 16 00:47:08.068919 systemd-networkd[2695]: calib2a0397a94a: Link UP Jul 16 00:47:08.069943 systemd-networkd[2695]: calib2a0397a94a: Gained carrier Jul 16 00:47:08.077802 containerd[2784]: 2025-07-16 00:47:07.925 [INFO][6832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0 coredns-668d6bf9bc- kube-system 2b48aa09-a83c-49a6-859a-b1b7384550f0 806 0 2025-07-16 00:46:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s ci-4372.0.1-n-8893f80933 coredns-668d6bf9bc-ttk4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2a0397a94a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-" Jul 16 00:47:08.077802 containerd[2784]: 2025-07-16 00:47:07.925 [INFO][6832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.077802 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" HandleID="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" HandleID="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400063df30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-8893f80933", "pod":"coredns-668d6bf9bc-ttk4s", "timestamp":"2025-07-16 00:47:07.944535567 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:07.965 [INFO][6918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:08.050 [INFO][6918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:08.053 [INFO][6918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:08.056 [INFO][6918] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:08.057 [INFO][6918] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.077924 containerd[2784]: 2025-07-16 00:47:08.059 [INFO][6918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.059 [INFO][6918] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.060 [INFO][6918] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73 Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.062 [INFO][6918] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6918] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.3/26] block=192.168.106.0/26 handle="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.3/26] handle="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:08.078106 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.3/26] IPv6=[] ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" HandleID="k8s-pod-network.1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.067 [INFO][6832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2b48aa09-a83c-49a6-859a-b1b7384550f0", ResourceVersion:"806", 
Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"coredns-668d6bf9bc-ttk4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2a0397a94a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.067 [INFO][6832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.3/32] ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.067 [INFO][6832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2a0397a94a 
ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.070 [INFO][6832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.070 [INFO][6832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2b48aa09-a83c-49a6-859a-b1b7384550f0", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73", 
Pod:"coredns-668d6bf9bc-ttk4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2a0397a94a", MAC:"a6:58:f9:34:b4:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.078236 containerd[2784]: 2025-07-16 00:47:08.076 [INFO][6832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" Namespace="kube-system" Pod="coredns-668d6bf9bc-ttk4s" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--ttk4s-eth0" Jul 16 00:47:08.087648 containerd[2784]: time="2025-07-16T00:47:08.087615601Z" level=info msg="connecting to shim 1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73" address="unix:///run/containerd/s/41dcae1e799ba1d24c414b538ab1001d9da4d02b6bb92ca0279fa28cf9607d19" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:08.108386 systemd[1]: Started cri-containerd-1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73.scope - libcontainer container 1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73. 
Jul 16 00:47:08.134359 containerd[2784]: time="2025-07-16T00:47:08.134331525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ttk4s,Uid:2b48aa09-a83c-49a6-859a-b1b7384550f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73\"" Jul 16 00:47:08.136152 containerd[2784]: time="2025-07-16T00:47:08.136129518Z" level=info msg="CreateContainer within sandbox \"1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:47:08.140267 containerd[2784]: time="2025-07-16T00:47:08.140239013Z" level=info msg="Container 5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:08.142848 containerd[2784]: time="2025-07-16T00:47:08.142793108Z" level=info msg="CreateContainer within sandbox \"1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5\"" Jul 16 00:47:08.143088 containerd[2784]: time="2025-07-16T00:47:08.143068901Z" level=info msg="StartContainer for \"5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5\"" Jul 16 00:47:08.143808 containerd[2784]: time="2025-07-16T00:47:08.143788042Z" level=info msg="connecting to shim 5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5" address="unix:///run/containerd/s/41dcae1e799ba1d24c414b538ab1001d9da4d02b6bb92ca0279fa28cf9607d19" protocol=ttrpc version=3 Jul 16 00:47:08.169455 systemd[1]: Started cri-containerd-5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5.scope - libcontainer container 5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5. 
Jul 16 00:47:08.170151 systemd-networkd[2695]: cali82f65e5442b: Link UP Jul 16 00:47:08.170451 systemd-networkd[2695]: cali82f65e5442b: Gained carrier Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:07.925 [INFO][6842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0 calico-apiserver-85f985586b- calico-apiserver de1352e7-7881-4e9e-8429-56fa32567373 807 0 2025-07-16 00:46:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85f985586b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 calico-apiserver-85f985586b-4kwjk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali82f65e5442b [] [] }} ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:07.926 [INFO][6842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" HandleID="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 
00:47:08.178399 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" HandleID="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019a2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-8893f80933", "pod":"calico-apiserver-85f985586b-4kwjk", "timestamp":"2025-07-16 00:47:07.944751121 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:07.944 [INFO][6920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.066 [INFO][6920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.150 [INFO][6920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.153 [INFO][6920] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.156 [INFO][6920] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.157 [INFO][6920] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.159 [INFO][6920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.159 [INFO][6920] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.160 [INFO][6920] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.162 [INFO][6920] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.166 [INFO][6920] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.4/26] block=192.168.106.0/26 handle="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.166 [INFO][6920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.4/26] handle="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.166 [INFO][6920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:08.178399 containerd[2784]: 2025-07-16 00:47:08.166 [INFO][6920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.4/26] IPv6=[] ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" HandleID="k8s-pod-network.71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.168 [INFO][6842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0", GenerateName:"calico-apiserver-85f985586b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de1352e7-7881-4e9e-8429-56fa32567373", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"85f985586b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"calico-apiserver-85f985586b-4kwjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f65e5442b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.168 [INFO][6842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.4/32] ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.168 [INFO][6842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82f65e5442b ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.170 [INFO][6842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" 
WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.170 [INFO][6842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0", GenerateName:"calico-apiserver-85f985586b-", Namespace:"calico-apiserver", SelfLink:"", UID:"de1352e7-7881-4e9e-8429-56fa32567373", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f985586b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f", Pod:"calico-apiserver-85f985586b-4kwjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali82f65e5442b", MAC:"36:b8:4d:70:ca:a5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.178816 containerd[2784]: 2025-07-16 00:47:08.176 [INFO][6842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-4kwjk" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--4kwjk-eth0" Jul 16 00:47:08.189840 containerd[2784]: time="2025-07-16T00:47:08.189810624Z" level=info msg="connecting to shim 71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f" address="unix:///run/containerd/s/1dd63bc032eb57ac110ac2044a4975b2f804f1b48bf42804ee61e7436f90d267" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:08.202376 containerd[2784]: time="2025-07-16T00:47:08.202346542Z" level=info msg="StartContainer for \"5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5\" returns successfully" Jul 16 00:47:08.226402 systemd[1]: Started cri-containerd-71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f.scope - libcontainer container 71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f. 
Jul 16 00:47:08.252835 containerd[2784]: time="2025-07-16T00:47:08.252805650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-4kwjk,Uid:de1352e7-7881-4e9e-8429-56fa32567373,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f\"" Jul 16 00:47:08.409997 containerd[2784]: time="2025-07-16T00:47:08.409888947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:08.409997 containerd[2784]: time="2025-07-16T00:47:08.409904666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 16 00:47:08.410604 containerd[2784]: time="2025-07-16T00:47:08.410581489Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:08.412137 containerd[2784]: time="2025-07-16T00:47:08.412112930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:08.412775 containerd[2784]: time="2025-07-16T00:47:08.412757913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 364.562103ms" Jul 16 00:47:08.412815 containerd[2784]: time="2025-07-16T00:47:08.412778993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 16 00:47:08.413499 
containerd[2784]: time="2025-07-16T00:47:08.413478535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:47:08.414345 containerd[2784]: time="2025-07-16T00:47:08.414323793Z" level=info msg="CreateContainer within sandbox \"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 16 00:47:08.419168 containerd[2784]: time="2025-07-16T00:47:08.419142310Z" level=info msg="Container c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:08.424766 containerd[2784]: time="2025-07-16T00:47:08.424734967Z" level=info msg="CreateContainer within sandbox \"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0\"" Jul 16 00:47:08.425127 containerd[2784]: time="2025-07-16T00:47:08.425103197Z" level=info msg="StartContainer for \"c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0\"" Jul 16 00:47:08.426464 containerd[2784]: time="2025-07-16T00:47:08.426443403Z" level=info msg="connecting to shim c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0" address="unix:///run/containerd/s/46205f7bd01df34f160bef69870a705d20e46c3e2a1d15028beba1371f34ae22" protocol=ttrpc version=3 Jul 16 00:47:08.449389 systemd[1]: Started cri-containerd-c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0.scope - libcontainer container c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0. 
Jul 16 00:47:08.476934 containerd[2784]: time="2025-07-16T00:47:08.476904870Z" level=info msg="StartContainer for \"c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0\" returns successfully" Jul 16 00:47:08.891234 containerd[2784]: time="2025-07-16T00:47:08.891197219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-54wb2,Uid:2fb7dd78-c188-4dd0-a051-40138c1ed92e,Namespace:calico-system,Attempt:0,}" Jul 16 00:47:08.984826 kubelet[4319]: I0716 00:47:08.984768 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ttk4s" podStartSLOduration=28.984752983 podStartE2EDuration="28.984752983s" podCreationTimestamp="2025-07-16 00:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:47:08.984537229 +0000 UTC m=+36.169266079" watchObservedRunningTime="2025-07-16 00:47:08.984752983 +0000 UTC m=+36.169481833" Jul 16 00:47:08.987673 systemd-networkd[2695]: cali47f2e0dbc2e: Link UP Jul 16 00:47:08.987938 systemd-networkd[2695]: cali47f2e0dbc2e: Gained carrier Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.921 [INFO][7230] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0 goldmane-768f4c5c69- calico-system 2fb7dd78-c188-4dd0-a051-40138c1ed92e 808 0 2025-07-16 00:46:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 goldmane-768f4c5c69-54wb2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali47f2e0dbc2e [] [] }} ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" 
Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.921 [INFO][7230] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.941 [INFO][7258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" HandleID="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Workload="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.941 [INFO][7258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" HandleID="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Workload="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000521790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-8893f80933", "pod":"goldmane-768f4c5c69-54wb2", "timestamp":"2025-07-16 00:47:08.941835642 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.942 [INFO][7258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.942 [INFO][7258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.942 [INFO][7258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.950 [INFO][7258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.953 [INFO][7258] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.966 [INFO][7258] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.971 [INFO][7258] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.975 [INFO][7258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.975 [INFO][7258] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.976 [INFO][7258] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996 Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.980 [INFO][7258] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" 
host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.984 [INFO][7258] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.5/26] block=192.168.106.0/26 handle="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.984 [INFO][7258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.5/26] handle="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.984 [INFO][7258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:08.995678 containerd[2784]: 2025-07-16 00:47:08.984 [INFO][7258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.5/26] IPv6=[] ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" HandleID="k8s-pod-network.dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Workload="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.996239 containerd[2784]: 2025-07-16 00:47:08.986 [INFO][7230] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"2fb7dd78-c188-4dd0-a051-40138c1ed92e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"goldmane-768f4c5c69-54wb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47f2e0dbc2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.996239 containerd[2784]: 2025-07-16 00:47:08.986 [INFO][7230] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.5/32] ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.996239 containerd[2784]: 2025-07-16 00:47:08.986 [INFO][7230] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47f2e0dbc2e ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.996239 containerd[2784]: 2025-07-16 00:47:08.988 [INFO][7230] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" 
WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:08.996239 containerd[2784]: 2025-07-16 00:47:08.988 [INFO][7230] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"2fb7dd78-c188-4dd0-a051-40138c1ed92e", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996", Pod:"goldmane-768f4c5c69-54wb2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.106.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali47f2e0dbc2e", MAC:"8e:12:a5:71:6f:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:08.996239 
containerd[2784]: 2025-07-16 00:47:08.994 [INFO][7230] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" Namespace="calico-system" Pod="goldmane-768f4c5c69-54wb2" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-goldmane--768f4c5c69--54wb2-eth0" Jul 16 00:47:09.006875 containerd[2784]: time="2025-07-16T00:47:09.006838143Z" level=info msg="connecting to shim dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996" address="unix:///run/containerd/s/78014768f315d0aab28947b1af026801e1c0baac59bccbc19ddcf2c50eb15ad8" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:09.035394 systemd[1]: Started cri-containerd-dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996.scope - libcontainer container dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996. Jul 16 00:47:09.061388 containerd[2784]: time="2025-07-16T00:47:09.061355922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-54wb2,Uid:2fb7dd78-c188-4dd0-a051-40138c1ed92e,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996\"" Jul 16 00:47:09.227246 kubelet[4319]: I0716 00:47:09.227174 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:09.299583 containerd[2784]: time="2025-07-16T00:47:09.299545221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"5553c406a1745663f2d2e1dfe3fbc3ebb023311a8bee223595e588d9b5d9c38b\" pid:7358 exited_at:{seconds:1752626829 nanos:299196390}" Jul 16 00:47:09.314297 containerd[2784]: time="2025-07-16T00:47:09.314255819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.314377 containerd[2784]: time="2025-07-16T00:47:09.314274299Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 16 00:47:09.314921 containerd[2784]: time="2025-07-16T00:47:09.314897564Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.316462 containerd[2784]: time="2025-07-16T00:47:09.316438446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.317164 containerd[2784]: time="2025-07-16T00:47:09.317138868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 903.629855ms" Jul 16 00:47:09.317222 containerd[2784]: time="2025-07-16T00:47:09.317168908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 16 00:47:09.317835 containerd[2784]: time="2025-07-16T00:47:09.317816012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 16 00:47:09.318632 containerd[2784]: time="2025-07-16T00:47:09.318614552Z" level=info msg="CreateContainer within sandbox \"71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:47:09.324818 containerd[2784]: time="2025-07-16T00:47:09.324787240Z" level=info msg="Container 51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0: CDI devices from CRI 
Config.CDIDevices: []" Jul 16 00:47:09.327952 containerd[2784]: time="2025-07-16T00:47:09.327927283Z" level=info msg="CreateContainer within sandbox \"71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0\"" Jul 16 00:47:09.328323 containerd[2784]: time="2025-07-16T00:47:09.328301994Z" level=info msg="StartContainer for \"51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0\"" Jul 16 00:47:09.329241 containerd[2784]: time="2025-07-16T00:47:09.329221731Z" level=info msg="connecting to shim 51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0" address="unix:///run/containerd/s/1dd63bc032eb57ac110ac2044a4975b2f804f1b48bf42804ee61e7436f90d267" protocol=ttrpc version=3 Jul 16 00:47:09.339223 systemd[1]: Started cri-containerd-51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0.scope - libcontainer container 51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0. 
Jul 16 00:47:09.367445 containerd[2784]: time="2025-07-16T00:47:09.367420631Z" level=info msg="StartContainer for \"51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0\" returns successfully" Jul 16 00:47:09.370430 containerd[2784]: time="2025-07-16T00:47:09.370412758Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"e8c49ad6a025662189a539257fcd889db181f3ebf00fe94d3f7361b9888928ff\" pid:7394 exited_at:{seconds:1752626829 nanos:370213323}" Jul 16 00:47:09.698432 containerd[2784]: time="2025-07-16T00:47:09.698392488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.698570 containerd[2784]: time="2025-07-16T00:47:09.698468526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 16 00:47:09.699119 containerd[2784]: time="2025-07-16T00:47:09.699098471Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.700646 containerd[2784]: time="2025-07-16T00:47:09.700620473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:09.701265 containerd[2784]: time="2025-07-16T00:47:09.701239658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 383.389367ms" Jul 16 00:47:09.701292 containerd[2784]: time="2025-07-16T00:47:09.701276577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 16 00:47:09.701937 containerd[2784]: time="2025-07-16T00:47:09.701921721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 00:47:09.704005 containerd[2784]: time="2025-07-16T00:47:09.703979751Z" level=info msg="CreateContainer within sandbox \"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 16 00:47:09.708136 containerd[2784]: time="2025-07-16T00:47:09.708108529Z" level=info msg="Container d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:09.712768 containerd[2784]: time="2025-07-16T00:47:09.712740455Z" level=info msg="CreateContainer within sandbox \"ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12\"" Jul 16 00:47:09.713140 containerd[2784]: time="2025-07-16T00:47:09.713114286Z" level=info msg="StartContainer for \"d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12\"" Jul 16 00:47:09.714446 containerd[2784]: time="2025-07-16T00:47:09.714423054Z" level=info msg="connecting to shim d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12" address="unix:///run/containerd/s/46205f7bd01df34f160bef69870a705d20e46c3e2a1d15028beba1371f34ae22" protocol=ttrpc version=3 Jul 16 00:47:09.747385 systemd[1]: Started 
cri-containerd-d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12.scope - libcontainer container d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12. Jul 16 00:47:09.774516 containerd[2784]: time="2025-07-16T00:47:09.774487736Z" level=info msg="StartContainer for \"d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12\" returns successfully" Jul 16 00:47:09.892943 containerd[2784]: time="2025-07-16T00:47:09.892899662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5976c699f5-jqrlm,Uid:2da18249-deb2-4cdd-bb19-cbba4e2755b7,Namespace:calico-system,Attempt:0,}" Jul 16 00:47:09.922354 systemd-networkd[2695]: cali175db470ca4: Gained IPv6LL Jul 16 00:47:09.938503 kubelet[4319]: I0716 00:47:09.938481 4319 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 16 00:47:09.938561 kubelet[4319]: I0716 00:47:09.938516 4319 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 16 00:47:09.971464 systemd-networkd[2695]: cali7bea79fdd4c: Link UP Jul 16 00:47:09.971975 systemd-networkd[2695]: cali7bea79fdd4c: Gained carrier Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.921 [INFO][7517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0 calico-kube-controllers-5976c699f5- calico-system 2da18249-deb2-4cdd-bb19-cbba4e2755b7 795 0 2025-07-16 00:46:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5976c699f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 
calico-kube-controllers-5976c699f5-jqrlm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7bea79fdd4c [] [] }} ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.921 [INFO][7517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.942 [INFO][7546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" HandleID="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.942 [INFO][7546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" HandleID="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042f4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-8893f80933", "pod":"calico-kube-controllers-5976c699f5-jqrlm", "timestamp":"2025-07-16 00:47:09.942757796 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.942 [INFO][7546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.942 [INFO][7546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.942 [INFO][7546] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.950 [INFO][7546] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.953 [INFO][7546] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.956 [INFO][7546] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.958 [INFO][7546] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.960 [INFO][7546] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.960 [INFO][7546] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.961 [INFO][7546] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065 Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.963 [INFO][7546] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.968 [INFO][7546] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.106.6/26] block=192.168.106.0/26 handle="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.968 [INFO][7546] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.6/26] handle="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.968 [INFO][7546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:47:09.988535 containerd[2784]: 2025-07-16 00:47:09.968 [INFO][7546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.6/26] IPv6=[] ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" HandleID="k8s-pod-network.c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.970 [INFO][7517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0", GenerateName:"calico-kube-controllers-5976c699f5-", Namespace:"calico-system", SelfLink:"", UID:"2da18249-deb2-4cdd-bb19-cbba4e2755b7", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5976c699f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"calico-kube-controllers-5976c699f5-jqrlm", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bea79fdd4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.970 [INFO][7517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.6/32] ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.970 [INFO][7517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bea79fdd4c ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.971 [INFO][7517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.972 [INFO][7517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" Pod="calico-kube-controllers-5976c699f5-jqrlm" 
WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0", GenerateName:"calico-kube-controllers-5976c699f5-", Namespace:"calico-system", SelfLink:"", UID:"2da18249-deb2-4cdd-bb19-cbba4e2755b7", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5976c699f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065", Pod:"calico-kube-controllers-5976c699f5-jqrlm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7bea79fdd4c", MAC:"82:d8:ee:6b:e3:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:09.988944 containerd[2784]: 2025-07-16 00:47:09.986 [INFO][7517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" Namespace="calico-system" 
Pod="calico-kube-controllers-5976c699f5-jqrlm" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--kube--controllers--5976c699f5--jqrlm-eth0" Jul 16 00:47:09.993383 kubelet[4319]: I0716 00:47:09.993338 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xsmnd" podStartSLOduration=15.339482164 podStartE2EDuration="16.993322351s" podCreationTimestamp="2025-07-16 00:46:53 +0000 UTC" firstStartedPulling="2025-07-16 00:47:08.047963937 +0000 UTC m=+35.232692787" lastFinishedPulling="2025-07-16 00:47:09.701804164 +0000 UTC m=+36.886532974" observedRunningTime="2025-07-16 00:47:09.993220434 +0000 UTC m=+37.177949284" watchObservedRunningTime="2025-07-16 00:47:09.993322351 +0000 UTC m=+37.178051201" Jul 16 00:47:09.998385 containerd[2784]: time="2025-07-16T00:47:09.998348268Z" level=info msg="connecting to shim c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065" address="unix:///run/containerd/s/29a48e7954d700024c40f81714cbcba96ff7bff12478576425e154446d3acfed" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:10.000911 kubelet[4319]: I0716 00:47:10.000870 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85f985586b-4kwjk" podStartSLOduration=18.936780061 podStartE2EDuration="20.000854166s" podCreationTimestamp="2025-07-16 00:46:50 +0000 UTC" firstStartedPulling="2025-07-16 00:47:08.253638829 +0000 UTC m=+35.438367679" lastFinishedPulling="2025-07-16 00:47:09.317712974 +0000 UTC m=+36.502441784" observedRunningTime="2025-07-16 00:47:10.000556533 +0000 UTC m=+37.185285383" watchObservedRunningTime="2025-07-16 00:47:10.000854166 +0000 UTC m=+37.185583016" Jul 16 00:47:10.027391 systemd[1]: Started cri-containerd-c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065.scope - libcontainer container c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065. 
Jul 16 00:47:10.054255 containerd[2784]: time="2025-07-16T00:47:10.054221823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5976c699f5-jqrlm,Uid:2da18249-deb2-4cdd-bb19-cbba4e2755b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065\"" Jul 16 00:47:10.114384 systemd-networkd[2695]: calib2a0397a94a: Gained IPv6LL Jul 16 00:47:10.178355 systemd-networkd[2695]: cali82f65e5442b: Gained IPv6LL Jul 16 00:47:10.550969 containerd[2784]: time="2025-07-16T00:47:10.550925311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:10.551114 containerd[2784]: time="2025-07-16T00:47:10.550940071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 16 00:47:10.551642 containerd[2784]: time="2025-07-16T00:47:10.551616655Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:10.553344 containerd[2784]: time="2025-07-16T00:47:10.553320295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:10.554003 containerd[2784]: time="2025-07-16T00:47:10.553985679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 852.039718ms" Jul 16 00:47:10.554036 containerd[2784]: time="2025-07-16T00:47:10.554006318Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 16 00:47:10.554754 containerd[2784]: time="2025-07-16T00:47:10.554732541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 00:47:10.555661 containerd[2784]: time="2025-07-16T00:47:10.555640040Z" level=info msg="CreateContainer within sandbox \"dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 16 00:47:10.559642 containerd[2784]: time="2025-07-16T00:47:10.559614466Z" level=info msg="Container b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:10.562947 containerd[2784]: time="2025-07-16T00:47:10.562913748Z" level=info msg="CreateContainer within sandbox \"dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\"" Jul 16 00:47:10.563318 containerd[2784]: time="2025-07-16T00:47:10.563290379Z" level=info msg="StartContainer for \"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\"" Jul 16 00:47:10.564310 containerd[2784]: time="2025-07-16T00:47:10.564284035Z" level=info msg="connecting to shim b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681" address="unix:///run/containerd/s/78014768f315d0aab28947b1af026801e1c0baac59bccbc19ddcf2c50eb15ad8" protocol=ttrpc version=3 Jul 16 00:47:10.593385 systemd[1]: Started cri-containerd-b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681.scope - libcontainer container b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681. 
Jul 16 00:47:10.622300 containerd[2784]: time="2025-07-16T00:47:10.622151066Z" level=info msg="StartContainer for \"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" returns successfully" Jul 16 00:47:10.755366 systemd-networkd[2695]: cali47f2e0dbc2e: Gained IPv6LL Jul 16 00:47:10.892086 containerd[2784]: time="2025-07-16T00:47:10.892010081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qhtkz,Uid:90d031fa-ea59-464d-98af-0980e1b42263,Namespace:kube-system,Attempt:0,}" Jul 16 00:47:10.896882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504581599.mount: Deactivated successfully. Jul 16 00:47:10.968339 systemd-networkd[2695]: calia313b14c8a7: Link UP Jul 16 00:47:10.968613 systemd-networkd[2695]: calia313b14c8a7: Gained carrier Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.920 [INFO][7730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0 coredns-668d6bf9bc- kube-system 90d031fa-ea59-464d-98af-0980e1b42263 804 0 2025-07-16 00:46:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 coredns-668d6bf9bc-qhtkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia313b14c8a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.920 [INFO][7730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.940 [INFO][7754] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" HandleID="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.940 [INFO][7754] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" HandleID="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362620), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-8893f80933", "pod":"coredns-668d6bf9bc-qhtkz", "timestamp":"2025-07-16 00:47:10.94024326 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.940 [INFO][7754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.940 [INFO][7754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.940 [INFO][7754] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.948 [INFO][7754] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.951 [INFO][7754] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.954 [INFO][7754] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.955 [INFO][7754] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.957 [INFO][7754] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.957 [INFO][7754] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.958 [INFO][7754] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741 Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.960 [INFO][7754] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.964 [INFO][7754] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.7/26] block=192.168.106.0/26 handle="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.964 [INFO][7754] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.7/26] handle="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.964 [INFO][7754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:10.975895 containerd[2784]: 2025-07-16 00:47:10.964 [INFO][7754] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.7/26] IPv6=[] ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" HandleID="k8s-pod-network.5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Workload="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.966 [INFO][7730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90d031fa-ea59-464d-98af-0980e1b42263", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"coredns-668d6bf9bc-qhtkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia313b14c8a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.966 [INFO][7730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.7/32] ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.967 [INFO][7730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia313b14c8a7 ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.968 [INFO][7730] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.968 [INFO][7730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"90d031fa-ea59-464d-98af-0980e1b42263", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741", Pod:"coredns-668d6bf9bc-qhtkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia313b14c8a7", MAC:"ba:7a:67:55:0a:7f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:10.976567 containerd[2784]: 2025-07-16 00:47:10.974 [INFO][7730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" Namespace="kube-system" Pod="coredns-668d6bf9bc-qhtkz" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-coredns--668d6bf9bc--qhtkz-eth0" Jul 16 00:47:10.986270 containerd[2784]: time="2025-07-16T00:47:10.986228572Z" level=info msg="connecting to shim 5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741" address="unix:///run/containerd/s/8c31fedb828243c16989aac31fd0897d5fd9bd3cbe8c00a4ffe206ab6eb7d4a8" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:10.987904 kubelet[4319]: I0716 00:47:10.987884 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:10.995657 kubelet[4319]: I0716 00:47:10.995595 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-54wb2" podStartSLOduration=16.503192988 podStartE2EDuration="17.995580591s" podCreationTimestamp="2025-07-16 00:46:53 +0000 UTC" firstStartedPulling="2025-07-16 00:47:09.06225442 +0000 UTC m=+36.246983270" lastFinishedPulling="2025-07-16 00:47:10.554642023 +0000 UTC m=+37.739370873" observedRunningTime="2025-07-16 00:47:10.995479153 +0000 UTC m=+38.180208003" watchObservedRunningTime="2025-07-16 00:47:10.995580591 +0000 UTC m=+38.180309441" Jul 16 00:47:11.013899 
systemd[1]: Started cri-containerd-5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741.scope - libcontainer container 5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741. Jul 16 00:47:11.040810 containerd[2784]: time="2025-07-16T00:47:11.040777997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qhtkz,Uid:90d031fa-ea59-464d-98af-0980e1b42263,Namespace:kube-system,Attempt:0,} returns sandbox id \"5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741\"" Jul 16 00:47:11.042727 containerd[2784]: time="2025-07-16T00:47:11.042704873Z" level=info msg="CreateContainer within sandbox \"5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:47:11.047203 containerd[2784]: time="2025-07-16T00:47:11.047179171Z" level=info msg="Container 1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:11.049727 containerd[2784]: time="2025-07-16T00:47:11.049703953Z" level=info msg="CreateContainer within sandbox \"5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733\"" Jul 16 00:47:11.050016 containerd[2784]: time="2025-07-16T00:47:11.049996787Z" level=info msg="StartContainer for \"1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733\"" Jul 16 00:47:11.050768 containerd[2784]: time="2025-07-16T00:47:11.050748490Z" level=info msg="connecting to shim 1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733" address="unix:///run/containerd/s/8c31fedb828243c16989aac31fd0897d5fd9bd3cbe8c00a4ffe206ab6eb7d4a8" protocol=ttrpc version=3 Jul 16 00:47:11.050794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126231165.mount: Deactivated successfully. 
Jul 16 00:47:11.066346 containerd[2784]: time="2025-07-16T00:47:11.066318455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"80c3317b1cc5ad3ca365fec722d2dbadc76d89669ac03c49ce22f485937f3dee\" pid:7825 exit_status:1 exited_at:{seconds:1752626831 nanos:65570752}" Jul 16 00:47:11.072398 systemd[1]: Started cri-containerd-1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733.scope - libcontainer container 1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733. Jul 16 00:47:11.093905 containerd[2784]: time="2025-07-16T00:47:11.093879187Z" level=info msg="StartContainer for \"1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733\" returns successfully" Jul 16 00:47:11.514204 containerd[2784]: time="2025-07-16T00:47:11.514163856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:11.514311 containerd[2784]: time="2025-07-16T00:47:11.514171696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 16 00:47:11.514878 containerd[2784]: time="2025-07-16T00:47:11.514863200Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:11.516381 containerd[2784]: time="2025-07-16T00:47:11.516357286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:47:11.517002 containerd[2784]: time="2025-07-16T00:47:11.516984912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id 
\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 962.226171ms" Jul 16 00:47:11.517035 containerd[2784]: time="2025-07-16T00:47:11.517005471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 16 00:47:11.522046 containerd[2784]: time="2025-07-16T00:47:11.522023117Z" level=info msg="CreateContainer within sandbox \"c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 00:47:11.525833 containerd[2784]: time="2025-07-16T00:47:11.525807751Z" level=info msg="Container b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:11.529078 containerd[2784]: time="2025-07-16T00:47:11.529054077Z" level=info msg="CreateContainer within sandbox \"c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\"" Jul 16 00:47:11.529407 containerd[2784]: time="2025-07-16T00:47:11.529380829Z" level=info msg="StartContainer for \"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\"" Jul 16 00:47:11.530341 containerd[2784]: time="2025-07-16T00:47:11.530320928Z" level=info msg="connecting to shim b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd" address="unix:///run/containerd/s/29a48e7954d700024c40f81714cbcba96ff7bff12478576425e154446d3acfed" protocol=ttrpc version=3 Jul 16 00:47:11.561391 systemd[1]: Started 
cri-containerd-b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd.scope - libcontainer container b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd. Jul 16 00:47:11.590178 containerd[2784]: time="2025-07-16T00:47:11.590152485Z" level=info msg="StartContainer for \"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" returns successfully" Jul 16 00:47:11.892043 containerd[2784]: time="2025-07-16T00:47:11.892004571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-sdjzf,Uid:9c201598-6c58-44df-8298-234b75650d58,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:47:11.980439 systemd-networkd[2695]: calidf484acaba0: Link UP Jul 16 00:47:11.980751 systemd-networkd[2695]: calidf484acaba0: Gained carrier Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.930 [INFO][7995] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0 calico-apiserver-85f985586b- calico-apiserver 9c201598-6c58-44df-8298-234b75650d58 805 0 2025-07-16 00:46:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85f985586b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-8893f80933 calico-apiserver-85f985586b-sdjzf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf484acaba0 [] [] }} ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.930 [INFO][7995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.950 [INFO][8023] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" HandleID="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.950 [INFO][8023] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" HandleID="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000727b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-8893f80933", "pod":"calico-apiserver-85f985586b-sdjzf", "timestamp":"2025-07-16 00:47:11.950053769 +0000 UTC"}, Hostname:"ci-4372.0.1-n-8893f80933", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.950 [INFO][8023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.950 [INFO][8023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.950 [INFO][8023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-8893f80933' Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.958 [INFO][8023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.961 [INFO][8023] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.964 [INFO][8023] ipam/ipam.go 511: Trying affinity for 192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.965 [INFO][8023] ipam/ipam.go 158: Attempting to load block cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.967 [INFO][8023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.106.0/26 host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.967 [INFO][8023] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.106.0/26 handle="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.969 [INFO][8023] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339 Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.972 [INFO][8023] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.106.0/26 handle="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.976 [INFO][8023] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.106.8/26] block=192.168.106.0/26 handle="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.976 [INFO][8023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.106.8/26] handle="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" host="ci-4372.0.1-n-8893f80933" Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.976 [INFO][8023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:47:11.988706 containerd[2784]: 2025-07-16 00:47:11.976 [INFO][8023] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.8/26] IPv6=[] ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" HandleID="k8s-pod-network.2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Workload="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.978 [INFO][7995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0", GenerateName:"calico-apiserver-85f985586b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c201598-6c58-44df-8298-234b75650d58", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"85f985586b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"", Pod:"calico-apiserver-85f985586b-sdjzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf484acaba0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.978 [INFO][7995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.106.8/32] ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.978 [INFO][7995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf484acaba0 ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.980 [INFO][7995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" 
WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.981 [INFO][7995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0", GenerateName:"calico-apiserver-85f985586b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c201598-6c58-44df-8298-234b75650d58", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85f985586b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-8893f80933", ContainerID:"2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339", Pod:"calico-apiserver-85f985586b-sdjzf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf484acaba0", MAC:"fa:23:9e:6a:94:86", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:47:11.989160 containerd[2784]: 2025-07-16 00:47:11.987 [INFO][7995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" Namespace="calico-apiserver" Pod="calico-apiserver-85f985586b-sdjzf" WorkloadEndpoint="ci--4372.0.1--n--8893f80933-k8s-calico--apiserver--85f985586b--sdjzf-eth0" Jul 16 00:47:11.998778 kubelet[4319]: I0716 00:47:11.998723 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qhtkz" podStartSLOduration=31.998708661 podStartE2EDuration="31.998708661s" podCreationTimestamp="2025-07-16 00:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:47:11.998412588 +0000 UTC m=+39.183141438" watchObservedRunningTime="2025-07-16 00:47:11.998708661 +0000 UTC m=+39.183437511" Jul 16 00:47:11.999506 containerd[2784]: time="2025-07-16T00:47:11.999472364Z" level=info msg="connecting to shim 2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339" address="unix:///run/containerd/s/68b6a736da3f35a3f0a26331f880be5e8d1dec33d3d962020d6b442cd5e52b52" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:47:12.013873 kubelet[4319]: I0716 00:47:12.013829 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5976c699f5-jqrlm" podStartSLOduration=17.551345231 podStartE2EDuration="19.013811647s" podCreationTimestamp="2025-07-16 00:46:53 +0000 UTC" firstStartedPulling="2025-07-16 00:47:10.055032484 +0000 UTC m=+37.239761334" lastFinishedPulling="2025-07-16 00:47:11.5174989 +0000 UTC m=+38.702227750" observedRunningTime="2025-07-16 00:47:12.006072057 +0000 UTC m=+39.190800907" watchObservedRunningTime="2025-07-16 00:47:12.013811647 +0000 UTC 
m=+39.198540497" Jul 16 00:47:12.023574 systemd[1]: Started cri-containerd-2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339.scope - libcontainer container 2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339. Jul 16 00:47:12.034367 systemd-networkd[2695]: cali7bea79fdd4c: Gained IPv6LL Jul 16 00:47:12.049255 containerd[2784]: time="2025-07-16T00:47:12.049223830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85f985586b-sdjzf,Uid:9c201598-6c58-44df-8298-234b75650d58,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339\"" Jul 16 00:47:12.051063 containerd[2784]: time="2025-07-16T00:47:12.051036951Z" level=info msg="CreateContainer within sandbox \"2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:47:12.060788 containerd[2784]: time="2025-07-16T00:47:12.060759657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"45eacf2c36d2eae31bcc7e20fb63c2d88b9cfe8b0ea2d8af79398a8783801698\" pid:8086 exit_status:1 exited_at:{seconds:1752626832 nanos:60561662}" Jul 16 00:47:12.073681 containerd[2784]: time="2025-07-16T00:47:12.073652174Z" level=info msg="Container c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:47:12.076843 containerd[2784]: time="2025-07-16T00:47:12.076821945Z" level=info msg="CreateContainer within sandbox \"2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d\"" Jul 16 00:47:12.077203 containerd[2784]: time="2025-07-16T00:47:12.077184337Z" level=info msg="StartContainer for 
\"c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d\"" Jul 16 00:47:12.078751 containerd[2784]: time="2025-07-16T00:47:12.078717343Z" level=info msg="connecting to shim c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d" address="unix:///run/containerd/s/68b6a736da3f35a3f0a26331f880be5e8d1dec33d3d962020d6b442cd5e52b52" protocol=ttrpc version=3 Jul 16 00:47:12.107444 systemd[1]: Started cri-containerd-c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d.scope - libcontainer container c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d. Jul 16 00:47:12.136202 containerd[2784]: time="2025-07-16T00:47:12.136176002Z" level=info msg="StartContainer for \"c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d\" returns successfully" Jul 16 00:47:12.290390 systemd-networkd[2695]: calia313b14c8a7: Gained IPv6LL Jul 16 00:47:12.994958 kubelet[4319]: I0716 00:47:12.994934 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:13.003589 kubelet[4319]: I0716 00:47:13.003541 4319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85f985586b-sdjzf" podStartSLOduration=23.003527852 podStartE2EDuration="23.003527852s" podCreationTimestamp="2025-07-16 00:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:47:13.003463613 +0000 UTC m=+40.188192423" watchObservedRunningTime="2025-07-16 00:47:13.003527852 +0000 UTC m=+40.188256702" Jul 16 00:47:13.890422 systemd-networkd[2695]: calidf484acaba0: Gained IPv6LL Jul 16 00:47:13.996115 kubelet[4319]: I0716 00:47:13.996085 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:20.716236 kubelet[4319]: I0716 00:47:20.716157 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:25.256474 kubelet[4319]: I0716 
00:47:25.256419 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:25.294865 containerd[2784]: time="2025-07-16T00:47:25.294823495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"dc7ef4938319d9e5d4510b5531f3327e162ae94b81d03e767b5db46072ebc10a\" pid:8312 exited_at:{seconds:1752626845 nanos:294622058}" Jul 16 00:47:25.336092 containerd[2784]: time="2025-07-16T00:47:25.336066642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"ce4eb614eec400e172a780ed6d8e6238879ffa60132ca099941b0ede77d797c0\" pid:8334 exited_at:{seconds:1752626845 nanos:335900204}" Jul 16 00:47:39.366876 containerd[2784]: time="2025-07-16T00:47:39.366772069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"f03ba5e376b600b89eb96a49bf6039ee9d42daf193f35ae676fde98db3f95713\" pid:8371 exited_at:{seconds:1752626859 nanos:366539752}" Jul 16 00:47:42.063836 containerd[2784]: time="2025-07-16T00:47:42.063802266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"c8e5268441359764b921149f5a716c9c3a0830d3370164b53693938255333fdb\" pid:8410 exited_at:{seconds:1752626862 nanos:63615829}" Jul 16 00:47:51.249311 kubelet[4319]: I0716 00:47:51.249253 4319 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:47:52.678105 containerd[2784]: time="2025-07-16T00:47:52.678059630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"cf0d9fcda029dbb0b463e353b93a2f8a1e986431b019423375ee8e8a86e8b7f4\" pid:8461 exited_at:{seconds:1752626872 nanos:677904948}" Jul 16 00:47:55.331986 containerd[2784]: 
time="2025-07-16T00:47:55.331939246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"0ed3fcafbe574399ba5072187f9021f5950052a165feb73c7a0093e1664c6fda\" pid:8483 exited_at:{seconds:1752626875 nanos:331772725}" Jul 16 00:48:05.510660 containerd[2784]: time="2025-07-16T00:48:05.510611972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"8c28d9146f01fac9a1765edb53818cd1e5bae1c5a1ea1626b21b89d54df46786\" pid:8505 exited_at:{seconds:1752626885 nanos:510353612}" Jul 16 00:48:09.363696 containerd[2784]: time="2025-07-16T00:48:09.363645678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"8e22e62fb13cd0f5a873cc91734eee0423034bf3affe9b9fcae9412765b0636b\" pid:8542 exited_at:{seconds:1752626889 nanos:363375637}" Jul 16 00:48:12.052668 containerd[2784]: time="2025-07-16T00:48:12.052620557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"0a88a3146e97de5a5122e70a8a99839c2f84985e60182464dc7c133f56d9f92c\" pid:8581 exited_at:{seconds:1752626892 nanos:52395757}" Jul 16 00:48:25.336961 containerd[2784]: time="2025-07-16T00:48:25.336911200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"0c4ffdfb907695eb8e9f2fa4589950d7bcf2f79338d45414189750d0a1c00c18\" pid:8629 exited_at:{seconds:1752626905 nanos:336768321}" Jul 16 00:48:39.363921 containerd[2784]: time="2025-07-16T00:48:39.363870435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"ecb9bd80ef9bb6d66634c47696da68ccb39cef1fecb0e1778d1f064578c8dafb\" pid:8673 exited_at:{seconds:1752626919 
nanos:363593556}" Jul 16 00:48:42.053996 containerd[2784]: time="2025-07-16T00:48:42.053959100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"1e49666e073405fc6c3234b382bc51b274bcf7f77a6d6e319fb13f401eb6c50a\" pid:8716 exited_at:{seconds:1752626922 nanos:53792101}" Jul 16 00:48:52.678150 containerd[2784]: time="2025-07-16T00:48:52.678100049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"62067ffc9f8a931619f5ac78dc517c75e213c4f67d022852c8898ce48db8205e\" pid:8756 exited_at:{seconds:1752626932 nanos:677870210}" Jul 16 00:48:55.335184 containerd[2784]: time="2025-07-16T00:48:55.335135598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"dca61702a1ff1f0068132e125fe560a763b845febae19bdedf7974e2c46841ff\" pid:8779 exited_at:{seconds:1752626935 nanos:334999518}" Jul 16 00:49:05.510519 containerd[2784]: time="2025-07-16T00:49:05.510442360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"99a936978559423bd34307bb9d84b8e3d2a4fef88f16d33d40e24a9aa55e820a\" pid:8825 exited_at:{seconds:1752626945 nanos:510218401}" Jul 16 00:49:09.363698 containerd[2784]: time="2025-07-16T00:49:09.363662164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"ab11801b7da5f07d5d9d59aad7f52ad4d33d918ff8172e87b8250f28ed1ddff3\" pid:8866 exited_at:{seconds:1752626949 nanos:363470165}" Jul 16 00:49:12.048214 containerd[2784]: time="2025-07-16T00:49:12.048177663Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" 
id:\"8d711074323995625bd96d4c4bf2753eafca647e73eab003a60f5611533062d8\" pid:8907 exited_at:{seconds:1752626952 nanos:47854345}" Jul 16 00:49:25.338460 containerd[2784]: time="2025-07-16T00:49:25.338406685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"5c84dd060966f102ae1f7e9c99e2f2432274f1b5cfc9df06d6c2c353d32ba1fe\" pid:8951 exited_at:{seconds:1752626965 nanos:338191646}" Jul 16 00:49:39.371533 containerd[2784]: time="2025-07-16T00:49:39.371485698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"2c900ba0798b8ab871907ecb3e3d8e8ac4f134482740224d1bf21ef68d2f310a\" pid:8976 exited_at:{seconds:1752626979 nanos:371253620}" Jul 16 00:49:42.057253 containerd[2784]: time="2025-07-16T00:49:42.057204405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"de5f55fbdf1877c2857e202af4368260c10eacdfb551e7c4cb5a7671693abb50\" pid:9015 exited_at:{seconds:1752626982 nanos:56938487}" Jul 16 00:49:52.678453 containerd[2784]: time="2025-07-16T00:49:52.678403952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"3b64bf7365b798f8cd51f760caefae98e6dbe23ed552d0419e365083cb9c91fd\" pid:9063 exited_at:{seconds:1752626992 nanos:678225473}" Jul 16 00:49:55.338241 containerd[2784]: time="2025-07-16T00:49:55.338201795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"821972f081acb4be077a4eafe912966ae6f12221eb39c34056fd7cdb755dc180\" pid:9084 exited_at:{seconds:1752626995 nanos:337971317}" Jul 16 00:50:05.516194 containerd[2784]: time="2025-07-16T00:50:05.516159136Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"429e2034f0286868a9f6c2be6e9e47871ad7ac894de7ffcff139a0c83abd5936\" pid:9107 exited_at:{seconds:1752627005 nanos:515995134}" Jul 16 00:50:09.364039 containerd[2784]: time="2025-07-16T00:50:09.363992602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"07824eec3ac88523120bf786cff5cb30de0c76367157121aa82038ca84476a84\" pid:9146 exited_at:{seconds:1752627009 nanos:363799080}" Jul 16 00:50:12.063137 containerd[2784]: time="2025-07-16T00:50:12.063103214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"566f3de4602191e78c72ea60e6220d25ff7caafbf39a461c3137f3a1d718768f\" pid:9188 exited_at:{seconds:1752627012 nanos:62863332}" Jul 16 00:50:25.334165 containerd[2784]: time="2025-07-16T00:50:25.334114989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"794361336b34717bbf1523bc97389f82934f06dc427bb01911c0d007f07c0f4f\" pid:9250 exited_at:{seconds:1752627025 nanos:333931667}" Jul 16 00:50:39.363694 containerd[2784]: time="2025-07-16T00:50:39.363583132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"7da8bc5e04843c3a1b14a9b582d03683f24c54103bf9d6a8a8d6d5bdbb8bfb3e\" pid:9279 exited_at:{seconds:1752627039 nanos:363370531}" Jul 16 00:50:42.055216 containerd[2784]: time="2025-07-16T00:50:42.055177017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"70da2fe157d23a3c62af498605d23ee6b622f89aefc6558dcc54b8ed94a51cae\" pid:9316 exited_at:{seconds:1752627042 nanos:54968576}" Jul 16 00:50:52.679252 containerd[2784]: 
time="2025-07-16T00:50:52.679209484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"74e5927515b27484200597a0a5e12d016f6b490c66e73fce921a4daeeabb586c\" pid:9359 exited_at:{seconds:1752627052 nanos:679014883}" Jul 16 00:50:55.337532 containerd[2784]: time="2025-07-16T00:50:55.337483610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"9e6241f83c68e96ffbccc96cdfc4b2e97f8abc99bf57acf7436b51b9f3d04e74\" pid:9381 exited_at:{seconds:1752627055 nanos:337278009}" Jul 16 00:51:05.517185 containerd[2784]: time="2025-07-16T00:51:05.517133834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"d0c92ba7aab000ae862a885d1f49d1c9b3c09a01af2de4651c98ab1fce82e53a\" pid:9422 exited_at:{seconds:1752627065 nanos:516905994}" Jul 16 00:51:09.365713 containerd[2784]: time="2025-07-16T00:51:09.365670181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"7f8e207dd244b467006b10c7686e2f1f2977e15cb508e2455ae5afe27b8a4d6a\" pid:9462 exited_at:{seconds:1752627069 nanos:365395341}" Jul 16 00:51:12.061669 containerd[2784]: time="2025-07-16T00:51:12.061625119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"ee45e6fba87260f558620c9a3fcd24d16abb41aa18bde81a01604922c01b7d2b\" pid:9503 exited_at:{seconds:1752627072 nanos:61358519}" Jul 16 00:51:25.331319 containerd[2784]: time="2025-07-16T00:51:25.331277881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"82ad09a96cd07b0edbc97b45b3d230eed642cfdeaa4e751a2612b51f68856050\" pid:9543 exited_at:{seconds:1752627085 
nanos:331091321}" Jul 16 00:51:28.872136 containerd[2784]: time="2025-07-16T00:51:28.872061039Z" level=warning msg="container event discarded" container=d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5 type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.883315 containerd[2784]: time="2025-07-16T00:51:28.883217278Z" level=warning msg="container event discarded" container=d0f0a7209c768c0ba7ffe93e83bf69fe09cf85bfee59b33d2b63f7c96775f0c5 type=CONTAINER_STARTED_EVENT Jul 16 00:51:28.883315 containerd[2784]: time="2025-07-16T00:51:28.883257918Z" level=warning msg="container event discarded" container=e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.883315 containerd[2784]: time="2025-07-16T00:51:28.883292438Z" level=warning msg="container event discarded" container=e29fad99e86c6afc916fbad49e4dd7c87703e346becb0b363d22d53d6fa1216c type=CONTAINER_STARTED_EVENT Jul 16 00:51:28.883315 containerd[2784]: time="2025-07-16T00:51:28.883308598Z" level=warning msg="container event discarded" container=362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.883589 containerd[2784]: time="2025-07-16T00:51:28.883327358Z" level=warning msg="container event discarded" container=362ba423b3a3e747330c60a0ef6761a3a39458f4290ae30dde869b1bebc976ed type=CONTAINER_STARTED_EVENT Jul 16 00:51:28.883589 containerd[2784]: time="2025-07-16T00:51:28.883350918Z" level=warning msg="container event discarded" container=63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7 type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.883589 containerd[2784]: time="2025-07-16T00:51:28.883362478Z" level=warning msg="container event discarded" container=e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64 type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.894604 containerd[2784]: time="2025-07-16T00:51:28.894586557Z" level=warning msg="container event discarded" 
container=e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255 type=CONTAINER_CREATED_EVENT Jul 16 00:51:28.949932 containerd[2784]: time="2025-07-16T00:51:28.949874312Z" level=warning msg="container event discarded" container=63829ced74ba21310e061f86fae3066db43f942ae82bd2091b1231a92bc992a7 type=CONTAINER_STARTED_EVENT Jul 16 00:51:28.949932 containerd[2784]: time="2025-07-16T00:51:28.949904512Z" level=warning msg="container event discarded" container=e1fcc85c1a57df98920c4f5ad682867f67663a5ca08f1df28b23664d47b8ac64 type=CONTAINER_STARTED_EVENT Jul 16 00:51:28.949932 containerd[2784]: time="2025-07-16T00:51:28.949912312Z" level=warning msg="container event discarded" container=e3612436eec148bb903f15317462592c43634aa422c51773f9762359551e0255 type=CONTAINER_STARTED_EVENT Jul 16 00:51:39.366998 containerd[2784]: time="2025-07-16T00:51:39.366949463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"8718ef75d642a78347337d5de58d0cb54f1681cf4f30602c8c7c5eebfe733a93\" pid:9569 exited_at:{seconds:1752627099 nanos:366739503}" Jul 16 00:51:40.787986 containerd[2784]: time="2025-07-16T00:51:40.787917787Z" level=warning msg="container event discarded" container=a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711 type=CONTAINER_CREATED_EVENT Jul 16 00:51:40.787986 containerd[2784]: time="2025-07-16T00:51:40.787965187Z" level=warning msg="container event discarded" container=a628fd2e32137dd36ec5c9bc41ab18abcd6d4f6e56c3f320b511f002ba2ef711 type=CONTAINER_STARTED_EVENT Jul 16 00:51:40.799171 containerd[2784]: time="2025-07-16T00:51:40.799133896Z" level=warning msg="container event discarded" container=68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f type=CONTAINER_CREATED_EVENT Jul 16 00:51:40.850427 containerd[2784]: time="2025-07-16T00:51:40.850390649Z" level=warning msg="container event discarded" 
container=68112bbae45de5389be2a2cea3e802492899a2c794d01d0f85529a74b852410f type=CONTAINER_STARTED_EVENT Jul 16 00:51:40.905883 containerd[2784]: time="2025-07-16T00:51:40.905852397Z" level=warning msg="container event discarded" container=2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5 type=CONTAINER_CREATED_EVENT Jul 16 00:51:40.905883 containerd[2784]: time="2025-07-16T00:51:40.905872957Z" level=warning msg="container event discarded" container=2e0f5c1c553bb4bcb91258eba5686aded42588245719cb65edc4c23dc179e3d5 type=CONTAINER_STARTED_EVENT Jul 16 00:51:42.057832 containerd[2784]: time="2025-07-16T00:51:42.057755461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"b9a980a79f74d35d5b8aae98fc68d44b5911236fbc192645a496eeae6ba095f2\" pid:9607 exited_at:{seconds:1752627102 nanos:57557621}" Jul 16 00:51:43.829025 containerd[2784]: time="2025-07-16T00:51:43.828978111Z" level=warning msg="container event discarded" container=8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941 type=CONTAINER_CREATED_EVENT Jul 16 00:51:43.880058 containerd[2784]: time="2025-07-16T00:51:43.880017815Z" level=warning msg="container event discarded" container=8d7ae87429fa2c176727635ae01956622cea174b6228aca7420fb8da885e5941 type=CONTAINER_STARTED_EVENT Jul 16 00:51:52.679545 containerd[2784]: time="2025-07-16T00:51:52.679497565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"edd6e1d43aa58c967f64440eeaeffe7100597d22366efb8c86e05a719880af89\" pid:9668 exited_at:{seconds:1752627112 nanos:679343565}" Jul 16 00:51:53.249546 containerd[2784]: time="2025-07-16T00:51:53.249461553Z" level=warning msg="container event discarded" container=336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875 type=CONTAINER_CREATED_EVENT Jul 16 00:51:53.249546 containerd[2784]: 
time="2025-07-16T00:51:53.249526073Z" level=warning msg="container event discarded" container=336b847cd446c6672946c4c000f7d826d53bf9e4243967094b5f61ad6bada875 type=CONTAINER_STARTED_EVENT Jul 16 00:51:53.497625 containerd[2784]: time="2025-07-16T00:51:53.497583420Z" level=warning msg="container event discarded" container=9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3 type=CONTAINER_CREATED_EVENT Jul 16 00:51:53.497625 containerd[2784]: time="2025-07-16T00:51:53.497607420Z" level=warning msg="container event discarded" container=9decf577f86eebf96c30348ce8b48e7aa8765017d6e0a35e8ed5b358850ae8e3 type=CONTAINER_STARTED_EVENT Jul 16 00:51:53.997534 containerd[2784]: time="2025-07-16T00:51:53.997482629Z" level=warning msg="container event discarded" container=5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861 type=CONTAINER_CREATED_EVENT Jul 16 00:51:54.050727 containerd[2784]: time="2025-07-16T00:51:54.050687658Z" level=warning msg="container event discarded" container=5e155de97647b574d3dca0837eba39ed901c91672261bb0770af6d8d37a16861 type=CONTAINER_STARTED_EVENT Jul 16 00:51:54.343225 containerd[2784]: time="2025-07-16T00:51:54.343183556Z" level=warning msg="container event discarded" container=a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27 type=CONTAINER_CREATED_EVENT Jul 16 00:51:54.391517 containerd[2784]: time="2025-07-16T00:51:54.391483554Z" level=warning msg="container event discarded" container=a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27 type=CONTAINER_STARTED_EVENT Jul 16 00:51:54.716352 containerd[2784]: time="2025-07-16T00:51:54.716279317Z" level=warning msg="container event discarded" container=a2450454d41237b842d75bfb73423faa2e49485c48717c0e38c199e6dede9f27 type=CONTAINER_STOPPED_EVENT Jul 16 00:51:55.336377 containerd[2784]: time="2025-07-16T00:51:55.336348197Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"74e61660aa9a0eb5557efc190b09ca76701ec8ad0805a3fb6a07df7ddd0f49e6\" pid:9690 exited_at:{seconds:1752627115 nanos:336169077}" Jul 16 00:51:56.140052 containerd[2784]: time="2025-07-16T00:51:56.140010933Z" level=warning msg="container event discarded" container=c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead type=CONTAINER_CREATED_EVENT Jul 16 00:51:56.200241 containerd[2784]: time="2025-07-16T00:51:56.200202223Z" level=warning msg="container event discarded" container=c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead type=CONTAINER_STARTED_EVENT Jul 16 00:51:56.725017 containerd[2784]: time="2025-07-16T00:51:56.724985352Z" level=warning msg="container event discarded" container=c48a71e04cb8294218245448a4eafda1b844a42065838d1e5f46020d8f8c8ead type=CONTAINER_STOPPED_EVENT Jul 16 00:51:59.620402 containerd[2784]: time="2025-07-16T00:51:59.620353549Z" level=warning msg="container event discarded" container=7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4 type=CONTAINER_CREATED_EVENT Jul 16 00:51:59.684031 containerd[2784]: time="2025-07-16T00:51:59.683989184Z" level=warning msg="container event discarded" container=7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4 type=CONTAINER_STARTED_EVENT Jul 16 00:52:01.466525 containerd[2784]: time="2025-07-16T00:52:01.466466609Z" level=warning msg="container event discarded" container=373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516 type=CONTAINER_CREATED_EVENT Jul 16 00:52:01.466525 containerd[2784]: time="2025-07-16T00:52:01.466504849Z" level=warning msg="container event discarded" container=373ce02ea26ddf014c1d9258c4fa473a622759ff87fd844872c6a8fba71f5516 type=CONTAINER_STARTED_EVENT Jul 16 00:52:01.851381 containerd[2784]: time="2025-07-16T00:52:01.851340501Z" level=warning msg="container event discarded" 
container=f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6 type=CONTAINER_CREATED_EVENT Jul 16 00:52:01.909577 containerd[2784]: time="2025-07-16T00:52:01.909535902Z" level=warning msg="container event discarded" container=f61f4b08fb399d55752b10f98134498e6ab886eecbd093f738293d70bb126df6 type=CONTAINER_STARTED_EVENT Jul 16 00:52:02.569152 containerd[2784]: time="2025-07-16T00:52:02.569091366Z" level=warning msg="container event discarded" container=6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef type=CONTAINER_CREATED_EVENT Jul 16 00:52:02.626302 containerd[2784]: time="2025-07-16T00:52:02.626278006Z" level=warning msg="container event discarded" container=6762e44b623747269b317e7d2839e3f6d3a23804062fcdd637079344bbc9f5ef type=CONTAINER_STARTED_EVENT Jul 16 00:52:05.519091 containerd[2784]: time="2025-07-16T00:52:05.519053795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"fc32c0eff50c5334096e59a61bc26afc8cde54fb8d9b55251f5f078f7b634683\" pid:9714 exited_at:{seconds:1752627125 nanos:518874515}" Jul 16 00:52:08.057813 containerd[2784]: time="2025-07-16T00:52:08.057756461Z" level=warning msg="container event discarded" container=ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d type=CONTAINER_CREATED_EVENT Jul 16 00:52:08.057813 containerd[2784]: time="2025-07-16T00:52:08.057789741Z" level=warning msg="container event discarded" container=ea02999a8d393bfe6335d811b5f13e151bd466d10fff3610aeb9fcc4b0f1a49d type=CONTAINER_STARTED_EVENT Jul 16 00:52:08.145079 containerd[2784]: time="2025-07-16T00:52:08.145046536Z" level=warning msg="container event discarded" container=1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73 type=CONTAINER_CREATED_EVENT Jul 16 00:52:08.145079 containerd[2784]: time="2025-07-16T00:52:08.145077456Z" level=warning msg="container event discarded" 
container=1dbc15ddc9da55239efcb369c6579250ef6fd30e62569e8758888adf58db7c73 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:08.145160 containerd[2784]: time="2025-07-16T00:52:08.145085456Z" level=warning msg="container event discarded" container=5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:08.212285 containerd[2784]: time="2025-07-16T00:52:08.212259418Z" level=warning msg="container event discarded" container=5ee338c66f283d73486febea5e4d3dbaab4d6b99288dc2d1569ad4274f0c15e5 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:08.263473 containerd[2784]: time="2025-07-16T00:52:08.263443858Z" level=warning msg="container event discarded" container=71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f type=CONTAINER_CREATED_EVENT
Jul 16 00:52:08.263473 containerd[2784]: time="2025-07-16T00:52:08.263469178Z" level=warning msg="container event discarded" container=71fc064d100411156cafaa7c7cb5ea5aafce0bca6f5cf7c11ca6f5672847c43f type=CONTAINER_STARTED_EVENT
Jul 16 00:52:08.434838 containerd[2784]: time="2025-07-16T00:52:08.434782176Z" level=warning msg="container event discarded" container=c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:08.486882 containerd[2784]: time="2025-07-16T00:52:08.486857334Z" level=warning msg="container event discarded" container=c11d1be2d531edaad3b655273585e1595e2863286c5680f345ae1e5630da74c0 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:09.072314 containerd[2784]: time="2025-07-16T00:52:09.072283118Z" level=warning msg="container event discarded" container=dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:09.072314 containerd[2784]: time="2025-07-16T00:52:09.072310398Z" level=warning msg="container event discarded" container=dcd29e7fa66d7c717096d382a0ca8106e7835331f530fe98e7f832323aad1996 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:09.338097 containerd[2784]: time="2025-07-16T00:52:09.338018924Z" level=warning msg="container event discarded" container=51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:09.373309 containerd[2784]: time="2025-07-16T00:52:09.373284679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"2ef0562006e7324d1909cf41b806363d66d566e8b04ef2956a26ce8654c71f48\" pid:9754 exited_at:{seconds:1752627129 nanos:373102840}"
Jul 16 00:52:09.377310 containerd[2784]: time="2025-07-16T00:52:09.377284710Z" level=warning msg="container event discarded" container=51a2720f4ac821849d09ef6abc6b631499686451f0b96ccd8e17ea3ed49295e0 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:09.722405 containerd[2784]: time="2025-07-16T00:52:09.722325967Z" level=warning msg="container event discarded" container=d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:09.784554 containerd[2784]: time="2025-07-16T00:52:09.784525498Z" level=warning msg="container event discarded" container=d616e22159cc3d9042eba63b10f0aff386acd724f284328bbc739f51ff878b12 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:10.064820 containerd[2784]: time="2025-07-16T00:52:10.064788147Z" level=warning msg="container event discarded" container=c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:10.064820 containerd[2784]: time="2025-07-16T00:52:10.064810107Z" level=warning msg="container event discarded" container=c5ee88928eb041164d7ecde8f739c679892e804019d5a31bd704770afa85a065 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:10.573081 containerd[2784]: time="2025-07-16T00:52:10.573048954Z" level=warning msg="container event discarded" container=b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:10.631322 containerd[2784]: time="2025-07-16T00:52:10.631282213Z" level=warning msg="container event discarded" container=b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:11.051327 containerd[2784]: time="2025-07-16T00:52:11.051285712Z" level=warning msg="container event discarded" container=5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:11.051327 containerd[2784]: time="2025-07-16T00:52:11.051317792Z" level=warning msg="container event discarded" container=5061dc66b424def4b41f697f759c9271b70f75ef4342a738c2f7b5bea3d55741 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:11.051327 containerd[2784]: time="2025-07-16T00:52:11.051325832Z" level=warning msg="container event discarded" container=1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:11.103603 containerd[2784]: time="2025-07-16T00:52:11.103566783Z" level=warning msg="container event discarded" container=1f2ff1873062ecf09c3e1770af46ee380976b19205ebbacbef91567b72795733 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:11.538912 containerd[2784]: time="2025-07-16T00:52:11.538880310Z" level=warning msg="container event discarded" container=b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd type=CONTAINER_CREATED_EVENT
Jul 16 00:52:11.600300 containerd[2784]: time="2025-07-16T00:52:11.600275679Z" level=warning msg="container event discarded" container=b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd type=CONTAINER_STARTED_EVENT
Jul 16 00:52:12.054356 containerd[2784]: time="2025-07-16T00:52:12.054319078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"8aff510d7172961a2821e4d840f81c912eb21cb22f4cbe82a291b465b9e65143\" pid:9790 exited_at:{seconds:1752627132 nanos:54157358}"
Jul 16 00:52:12.059428 containerd[2784]: time="2025-07-16T00:52:12.059403465Z" level=warning msg="container event discarded" container=2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339 type=CONTAINER_CREATED_EVENT
Jul 16 00:52:12.059428 containerd[2784]: time="2025-07-16T00:52:12.059424585Z" level=warning msg="container event discarded" container=2a1ac88e40d37e91e5889625a91a66f2364943aeeaa45d6f72fac3e51876c339 type=CONTAINER_STARTED_EVENT
Jul 16 00:52:12.086713 containerd[2784]: time="2025-07-16T00:52:12.086673957Z" level=warning msg="container event discarded" container=c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d type=CONTAINER_CREATED_EVENT
Jul 16 00:52:12.145903 containerd[2784]: time="2025-07-16T00:52:12.145868609Z" level=warning msg="container event discarded" container=c3b6f6bedbda089302504883d33671028fba9237ceb503e5b186d69a4af5e24d type=CONTAINER_STARTED_EVENT
Jul 16 00:52:25.335153 containerd[2784]: time="2025-07-16T00:52:25.335115002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"7121b1d494d46ec763780dfc83c798b5af3c976fa542e3c4ee99cfb843a42beb\" pid:9833 exited_at:{seconds:1752627145 nanos:334895923}"
Jul 16 00:52:39.371614 containerd[2784]: time="2025-07-16T00:52:39.371576220Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"caab3d8d6f1c67d7888de2729573cd229b34ba45801dee177639b4ae4250fa62\" pid:9861 exited_at:{seconds:1752627159 nanos:371330380}"
Jul 16 00:52:42.053040 containerd[2784]: time="2025-07-16T00:52:42.052992678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"b610560fbddf740a1e6782c9083744fb6f63c62ab9d47664bc7d3d38944e3c44\" pid:9899 exited_at:{seconds:1752627162 nanos:52806158}"
Jul 16 00:52:52.678132 containerd[2784]: time="2025-07-16T00:52:52.678079864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"20605c06d49a4a15aadd1d0084eb916a0d981ea4bb2f6a69d756f0c00054d2d5\" pid:9935 exited_at:{seconds:1752627172 nanos:677869345}"
Jul 16 00:52:55.337302 containerd[2784]: time="2025-07-16T00:52:55.337267060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"663e97ab1263e86a5dea1bb3ea143a43308191138a88f1ce5656a64eaff2bce1\" pid:9958 exited_at:{seconds:1752627175 nanos:337089380}"
Jul 16 00:53:05.522116 containerd[2784]: time="2025-07-16T00:53:05.522066881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"4b367c0600c194988875b5cf18d38f9b02cdd9237313a1809671ae2fce9e8dca\" pid:9983 exited_at:{seconds:1752627185 nanos:521894722}"
Jul 16 00:53:09.363026 containerd[2784]: time="2025-07-16T00:53:09.362978240Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"f9e27e71ac4333328b13cafb00377cb6c43368f0a5886de192245a7294f438b3\" pid:10022 exited_at:{seconds:1752627189 nanos:362755561}"
Jul 16 00:53:12.047296 containerd[2784]: time="2025-07-16T00:53:12.047242569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"73cd983de4968fb528c1920423db1824a051214e461732e7fc35db70701d6ba7\" pid:10063 exited_at:{seconds:1752627192 nanos:47026330}"
Jul 16 00:53:25.334129 containerd[2784]: time="2025-07-16T00:53:25.334086165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"83af83df3ce6785299ddd4dc0c1c907395ceb750ee035f448c596ffb22f68df4\" pid:10128 exited_at:{seconds:1752627205 nanos:333917726}"
Jul 16 00:53:39.368211 containerd[2784]: time="2025-07-16T00:53:39.368113504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"2d2d9b4385acb6086634d91acaf594ec4e86975c080ae3a5485d1548f1753d9d\" pid:10152 exited_at:{seconds:1752627219 nanos:367904105}"
Jul 16 00:53:42.058496 containerd[2784]: time="2025-07-16T00:53:42.058439994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"5191a6f6e7dac6befc80cecee8622efeae7bf8c06655d8435d77ad2307dd0d35\" pid:10189 exited_at:{seconds:1752627222 nanos:58233835}"
Jul 16 00:53:52.674936 containerd[2784]: time="2025-07-16T00:53:52.674894226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"112dd2544b656502a3de8b24251308ba238e41fdd2118198e055f0e531878b08\" pid:10227 exited_at:{seconds:1752627232 nanos:674707067}"
Jul 16 00:53:55.338301 containerd[2784]: time="2025-07-16T00:53:55.338256969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"57bf11cce24e1e32b5a8bd83958b95baa2d7259cd03172c6dc71f890bc16943e\" pid:10248 exited_at:{seconds:1752627235 nanos:338080170}"
Jul 16 00:54:05.515687 containerd[2784]: time="2025-07-16T00:54:05.515642097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"45a73d300d0cd4a41a8ea4a9796806d657d19e7b7baee6f73f2570f788b4cbea\" pid:10271 exited_at:{seconds:1752627245 nanos:515462497}"
Jul 16 00:54:09.369552 containerd[2784]: time="2025-07-16T00:54:09.369499139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"7ae578139a2ae419bbd2e88c20a58f4ec71b5da4daaef80412876069edafb510\" pid:10309 exited_at:{seconds:1752627249 nanos:369250140}"
Jul 16 00:54:12.047924 containerd[2784]: time="2025-07-16T00:54:12.047891235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"912e8a900496634b93538d9c568bf035d5ed22d811a62dd4991eb6e4d9c3e9eb\" pid:10347 exited_at:{seconds:1752627252 nanos:47699956}"
Jul 16 00:54:25.335238 containerd[2784]: time="2025-07-16T00:54:25.335206668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"695e0e2aeb1a4570acedf58d7215c40bd26c995c3e8c0d252f32aba986fa64a9\" pid:10385 exited_at:{seconds:1752627265 nanos:335050187}"
Jul 16 00:54:39.375828 containerd[2784]: time="2025-07-16T00:54:39.375789963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"5ecbc965aeaadc9bca31619e7d09d4bfbe060488a4a3de494c427b6522aded8b\" pid:10413 exited_at:{seconds:1752627279 nanos:375560202}"
Jul 16 00:54:42.056924 containerd[2784]: time="2025-07-16T00:54:42.056886759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"848bd8f500adb17e7b3f80c480bcc6fccc4d41558a842dfd84d118e7b67ba542\" pid:10448 exited_at:{seconds:1752627282 nanos:56717758}"
Jul 16 00:54:52.677079 containerd[2784]: time="2025-07-16T00:54:52.677039974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"785270f83396bd92a25c7e6d9910782cbc93a76791e65e12c17373f77d36195d\" pid:10507 exited_at:{seconds:1752627292 nanos:676845734}"
Jul 16 00:54:53.752275 systemd[1]: Started sshd@7-147.28.162.205:22-139.178.89.65:59168.service - OpenSSH per-connection server daemon (139.178.89.65:59168).
Jul 16 00:54:54.150278 sshd[10528]: Accepted publickey for core from 139.178.89.65 port 59168 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:54:54.151568 sshd-session[10528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:54:54.155441 systemd-logind[2768]: New session 10 of user core.
Jul 16 00:54:54.177380 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 16 00:54:54.501886 sshd[10530]: Connection closed by 139.178.89.65 port 59168
Jul 16 00:54:54.502300 sshd-session[10528]: pam_unix(sshd:session): session closed for user core
Jul 16 00:54:54.505282 systemd[1]: sshd@7-147.28.162.205:22-139.178.89.65:59168.service: Deactivated successfully.
Jul 16 00:54:54.507484 systemd[1]: session-10.scope: Deactivated successfully.
Jul 16 00:54:54.508100 systemd-logind[2768]: Session 10 logged out. Waiting for processes to exit.
Jul 16 00:54:54.509015 systemd-logind[2768]: Removed session 10.
Jul 16 00:54:55.336506 containerd[2784]: time="2025-07-16T00:54:55.336475945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"246394c8bbc81267daf158fb7c28a77d8060d1b2658751363ff6b0c6d4b0bbcf\" pid:10584 exited_at:{seconds:1752627295 nanos:336286905}"
Jul 16 00:54:59.579023 systemd[1]: Started sshd@8-147.28.162.205:22-139.178.89.65:60522.service - OpenSSH per-connection server daemon (139.178.89.65:60522).
Jul 16 00:54:59.979634 sshd[10599]: Accepted publickey for core from 139.178.89.65 port 60522 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:54:59.980981 sshd-session[10599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:54:59.984266 systemd-logind[2768]: New session 11 of user core.
Jul 16 00:55:00.006377 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 16 00:55:00.324137 sshd[10601]: Connection closed by 139.178.89.65 port 60522
Jul 16 00:55:00.324501 sshd-session[10599]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:00.327513 systemd[1]: sshd@8-147.28.162.205:22-139.178.89.65:60522.service: Deactivated successfully.
Jul 16 00:55:00.329716 systemd[1]: session-11.scope: Deactivated successfully.
Jul 16 00:55:00.330895 systemd-logind[2768]: Session 11 logged out. Waiting for processes to exit.
Jul 16 00:55:00.331680 systemd-logind[2768]: Removed session 11.
Jul 16 00:55:00.396157 systemd[1]: Started sshd@9-147.28.162.205:22-139.178.89.65:60532.service - OpenSSH per-connection server daemon (139.178.89.65:60532).
Jul 16 00:55:00.796479 sshd[10637]: Accepted publickey for core from 139.178.89.65 port 60532 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:00.797722 sshd-session[10637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:00.801035 systemd-logind[2768]: New session 12 of user core.
Jul 16 00:55:00.818381 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 16 00:55:01.166175 sshd[10639]: Connection closed by 139.178.89.65 port 60532
Jul 16 00:55:01.166429 sshd-session[10637]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:01.169399 systemd[1]: sshd@9-147.28.162.205:22-139.178.89.65:60532.service: Deactivated successfully.
Jul 16 00:55:01.171899 systemd[1]: session-12.scope: Deactivated successfully.
Jul 16 00:55:01.172759 systemd-logind[2768]: Session 12 logged out. Waiting for processes to exit.
Jul 16 00:55:01.173505 systemd-logind[2768]: Removed session 12.
Jul 16 00:55:01.240038 systemd[1]: Started sshd@10-147.28.162.205:22-139.178.89.65:60540.service - OpenSSH per-connection server daemon (139.178.89.65:60540).
Jul 16 00:55:01.643210 sshd[10678]: Accepted publickey for core from 139.178.89.65 port 60540 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:01.644544 sshd-session[10678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:01.647858 systemd-logind[2768]: New session 13 of user core.
Jul 16 00:55:01.671389 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 16 00:55:01.992067 sshd[10680]: Connection closed by 139.178.89.65 port 60540
Jul 16 00:55:01.992421 sshd-session[10678]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:01.995350 systemd[1]: sshd@10-147.28.162.205:22-139.178.89.65:60540.service: Deactivated successfully.
Jul 16 00:55:01.997556 systemd[1]: session-13.scope: Deactivated successfully.
Jul 16 00:55:01.998159 systemd-logind[2768]: Session 13 logged out. Waiting for processes to exit.
Jul 16 00:55:01.998960 systemd-logind[2768]: Removed session 13.
Jul 16 00:55:05.521040 containerd[2784]: time="2025-07-16T00:55:05.520995942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"090b1da3ee4f51d9fed6e6dc64c58ba73f1a7cad1323e1b1f5570ddd8278f18e\" pid:10729 exited_at:{seconds:1752627305 nanos:520686661}"
Jul 16 00:55:07.069182 systemd[1]: Started sshd@11-147.28.162.205:22-139.178.89.65:60556.service - OpenSSH per-connection server daemon (139.178.89.65:60556).
Jul 16 00:55:07.474011 sshd[10758]: Accepted publickey for core from 139.178.89.65 port 60556 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:07.475129 sshd-session[10758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:07.478137 systemd-logind[2768]: New session 14 of user core.
Jul 16 00:55:07.501370 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 16 00:55:07.821904 sshd[10760]: Connection closed by 139.178.89.65 port 60556
Jul 16 00:55:07.822290 sshd-session[10758]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:07.825304 systemd[1]: sshd@11-147.28.162.205:22-139.178.89.65:60556.service: Deactivated successfully.
Jul 16 00:55:07.827498 systemd[1]: session-14.scope: Deactivated successfully.
Jul 16 00:55:07.828085 systemd-logind[2768]: Session 14 logged out. Waiting for processes to exit.
Jul 16 00:55:07.828873 systemd-logind[2768]: Removed session 14.
Jul 16 00:55:09.364392 containerd[2784]: time="2025-07-16T00:55:09.364354628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"00e25877fe0916d98ead1d2b162455a348a06cf3d0ed6b089cc5e8998f0ed26f\" pid:10808 exited_at:{seconds:1752627309 nanos:364144307}"
Jul 16 00:55:12.052168 containerd[2784]: time="2025-07-16T00:55:12.052126064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"dcc6216469726e253946099b76c07a344bfeeeb87457060bffa4bcaf078915e6\" pid:10844 exited_at:{seconds:1752627312 nanos:51919863}"
Jul 16 00:55:12.895194 systemd[1]: Started sshd@12-147.28.162.205:22-139.178.89.65:36254.service - OpenSSH per-connection server daemon (139.178.89.65:36254).
Jul 16 00:55:13.315041 sshd[10873]: Accepted publickey for core from 139.178.89.65 port 36254 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:13.316331 sshd-session[10873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:13.319701 systemd-logind[2768]: New session 15 of user core.
Jul 16 00:55:13.341390 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 16 00:55:13.661523 sshd[10875]: Connection closed by 139.178.89.65 port 36254
Jul 16 00:55:13.664404 sshd-session[10873]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:13.667353 systemd[1]: sshd@12-147.28.162.205:22-139.178.89.65:36254.service: Deactivated successfully.
Jul 16 00:55:13.669101 systemd[1]: session-15.scope: Deactivated successfully.
Jul 16 00:55:13.669694 systemd-logind[2768]: Session 15 logged out. Waiting for processes to exit.
Jul 16 00:55:13.670480 systemd-logind[2768]: Removed session 15.
Jul 16 00:55:18.738114 systemd[1]: Started sshd@13-147.28.162.205:22-139.178.89.65:36270.service - OpenSSH per-connection server daemon (139.178.89.65:36270).
Jul 16 00:55:19.138228 sshd[10917]: Accepted publickey for core from 139.178.89.65 port 36270 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:19.139417 sshd-session[10917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:19.142592 systemd-logind[2768]: New session 16 of user core.
Jul 16 00:55:19.165418 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 16 00:55:19.483058 sshd[10919]: Connection closed by 139.178.89.65 port 36270
Jul 16 00:55:19.483376 sshd-session[10917]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:19.486323 systemd[1]: sshd@13-147.28.162.205:22-139.178.89.65:36270.service: Deactivated successfully.
Jul 16 00:55:19.487917 systemd[1]: session-16.scope: Deactivated successfully.
Jul 16 00:55:19.488518 systemd-logind[2768]: Session 16 logged out. Waiting for processes to exit.
Jul 16 00:55:19.489285 systemd-logind[2768]: Removed session 16.
Jul 16 00:55:19.559852 systemd[1]: Started sshd@14-147.28.162.205:22-139.178.89.65:42442.service - OpenSSH per-connection server daemon (139.178.89.65:42442).
Jul 16 00:55:19.963005 sshd[10954]: Accepted publickey for core from 139.178.89.65 port 42442 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:19.964190 sshd-session[10954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:19.967247 systemd-logind[2768]: New session 17 of user core.
Jul 16 00:55:19.989427 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 16 00:55:20.330457 sshd[10956]: Connection closed by 139.178.89.65 port 42442
Jul 16 00:55:20.330820 sshd-session[10954]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:20.333679 systemd[1]: sshd@14-147.28.162.205:22-139.178.89.65:42442.service: Deactivated successfully.
Jul 16 00:55:20.335242 systemd[1]: session-17.scope: Deactivated successfully.
Jul 16 00:55:20.335821 systemd-logind[2768]: Session 17 logged out. Waiting for processes to exit.
Jul 16 00:55:20.336590 systemd-logind[2768]: Removed session 17.
Jul 16 00:55:20.405905 systemd[1]: Started sshd@15-147.28.162.205:22-139.178.89.65:42452.service - OpenSSH per-connection server daemon (139.178.89.65:42452).
Jul 16 00:55:20.806740 sshd[10985]: Accepted publickey for core from 139.178.89.65 port 42452 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:20.807911 sshd-session[10985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:20.810872 systemd-logind[2768]: New session 18 of user core.
Jul 16 00:55:20.833413 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 16 00:55:21.486577 sshd[10987]: Connection closed by 139.178.89.65 port 42452
Jul 16 00:55:21.486909 sshd-session[10985]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:21.489908 systemd[1]: sshd@15-147.28.162.205:22-139.178.89.65:42452.service: Deactivated successfully.
Jul 16 00:55:21.491487 systemd[1]: session-18.scope: Deactivated successfully.
Jul 16 00:55:21.492053 systemd-logind[2768]: Session 18 logged out. Waiting for processes to exit.
Jul 16 00:55:21.492818 systemd-logind[2768]: Removed session 18.
Jul 16 00:55:21.564798 systemd[1]: Started sshd@16-147.28.162.205:22-139.178.89.65:42460.service - OpenSSH per-connection server daemon (139.178.89.65:42460).
Jul 16 00:55:21.983394 sshd[11043]: Accepted publickey for core from 139.178.89.65 port 42460 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:21.984716 sshd-session[11043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:21.988232 systemd-logind[2768]: New session 19 of user core.
Jul 16 00:55:22.009373 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 16 00:55:22.418274 sshd[11045]: Connection closed by 139.178.89.65 port 42460
Jul 16 00:55:22.426154 sshd-session[11043]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:22.434901 systemd[1]: sshd@16-147.28.162.205:22-139.178.89.65:42460.service: Deactivated successfully.
Jul 16 00:55:22.437163 systemd[1]: session-19.scope: Deactivated successfully.
Jul 16 00:55:22.437829 systemd-logind[2768]: Session 19 logged out. Waiting for processes to exit.
Jul 16 00:55:22.438578 systemd-logind[2768]: Removed session 19.
Jul 16 00:55:22.494091 systemd[1]: Started sshd@17-147.28.162.205:22-139.178.89.65:42464.service - OpenSSH per-connection server daemon (139.178.89.65:42464).
Jul 16 00:55:22.896864 sshd[11097]: Accepted publickey for core from 139.178.89.65 port 42464 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:22.898182 sshd-session[11097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:22.901423 systemd-logind[2768]: New session 20 of user core.
Jul 16 00:55:22.925380 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 16 00:55:23.246491 sshd[11099]: Connection closed by 139.178.89.65 port 42464
Jul 16 00:55:23.246819 sshd-session[11097]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:23.249798 systemd[1]: sshd@17-147.28.162.205:22-139.178.89.65:42464.service: Deactivated successfully.
Jul 16 00:55:23.251346 systemd[1]: session-20.scope: Deactivated successfully.
Jul 16 00:55:23.251913 systemd-logind[2768]: Session 20 logged out. Waiting for processes to exit.
Jul 16 00:55:23.252682 systemd-logind[2768]: Removed session 20.
Jul 16 00:55:25.338311 containerd[2784]: time="2025-07-16T00:55:25.338270359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b356d5a290fbb13ab989e406da8a382c6bf4a349be6259db3f9a280c0cce20fd\" id:\"7882d5bfe8311fd6909f511481a34ba3370c2dedd89b23c79ff46249995214e9\" pid:11152 exited_at:{seconds:1752627325 nanos:338111319}"
Jul 16 00:55:28.319121 systemd[1]: Started sshd@18-147.28.162.205:22-139.178.89.65:42478.service - OpenSSH per-connection server daemon (139.178.89.65:42478).
Jul 16 00:55:28.730473 sshd[11164]: Accepted publickey for core from 139.178.89.65 port 42478 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:28.731726 sshd-session[11164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:28.734912 systemd-logind[2768]: New session 21 of user core.
Jul 16 00:55:28.760371 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 16 00:55:29.075157 sshd[11166]: Connection closed by 139.178.89.65 port 42478
Jul 16 00:55:29.075561 sshd-session[11164]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:29.078556 systemd[1]: sshd@18-147.28.162.205:22-139.178.89.65:42478.service: Deactivated successfully.
Jul 16 00:55:29.080764 systemd[1]: session-21.scope: Deactivated successfully.
Jul 16 00:55:29.081413 systemd-logind[2768]: Session 21 logged out. Waiting for processes to exit.
Jul 16 00:55:29.082362 systemd-logind[2768]: Removed session 21.
Jul 16 00:55:34.151976 systemd[1]: Started sshd@19-147.28.162.205:22-139.178.89.65:34368.service - OpenSSH per-connection server daemon (139.178.89.65:34368).
Jul 16 00:55:34.554911 sshd[11214]: Accepted publickey for core from 139.178.89.65 port 34368 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:34.556236 sshd-session[11214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:34.559463 systemd-logind[2768]: New session 22 of user core.
Jul 16 00:55:34.583367 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 16 00:55:34.898755 sshd[11216]: Connection closed by 139.178.89.65 port 34368
Jul 16 00:55:34.899066 sshd-session[11214]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:34.902098 systemd[1]: sshd@19-147.28.162.205:22-139.178.89.65:34368.service: Deactivated successfully.
Jul 16 00:55:34.904823 systemd[1]: session-22.scope: Deactivated successfully.
Jul 16 00:55:34.905416 systemd-logind[2768]: Session 22 logged out. Waiting for processes to exit.
Jul 16 00:55:34.906156 systemd-logind[2768]: Removed session 22.
Jul 16 00:55:39.365809 containerd[2784]: time="2025-07-16T00:55:39.365772869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b44334799692c3e77cf2a2643f5e2c006bfd6c7d80f69d508024ff8f39b85e4\" id:\"9eb2ef310ae3510a66125515885d1446232f8b21480b046249fe0d5bce59c38c\" pid:11264 exited_at:{seconds:1752627339 nanos:365511748}"
Jul 16 00:55:39.975141 systemd[1]: Started sshd@20-147.28.162.205:22-139.178.89.65:44194.service - OpenSSH per-connection server daemon (139.178.89.65:44194).
Jul 16 00:55:40.377780 sshd[11288]: Accepted publickey for core from 139.178.89.65 port 44194 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY
Jul 16 00:55:40.378877 sshd-session[11288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:55:40.381982 systemd-logind[2768]: New session 23 of user core.
Jul 16 00:55:40.405366 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:55:40.722963 sshd[11290]: Connection closed by 139.178.89.65 port 44194
Jul 16 00:55:40.723296 sshd-session[11288]: pam_unix(sshd:session): session closed for user core
Jul 16 00:55:40.726158 systemd[1]: sshd@20-147.28.162.205:22-139.178.89.65:44194.service: Deactivated successfully.
Jul 16 00:55:40.728305 systemd[1]: session-23.scope: Deactivated successfully.
Jul 16 00:55:40.728997 systemd-logind[2768]: Session 23 logged out. Waiting for processes to exit.
Jul 16 00:55:40.729870 systemd-logind[2768]: Removed session 23.
Jul 16 00:55:42.063135 containerd[2784]: time="2025-07-16T00:55:42.063099503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b610e05dd12e419314cfe87647c7c4bf67e9b9cb783ecef662b79b7e70d87681\" id:\"5946b38c4f23d91da16c8f8b4bd14a51f1d4968365d3aa951413b1f519795c28\" pid:11342 exited_at:{seconds:1752627342 nanos:62822663}"