Jul 16 00:54:46.307298 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
Jul 16 00:54:46.307320 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 16 00:54:46.307328 kernel: KASLR enabled
Jul 16 00:54:46.307333 kernel: efi: EFI v2.7 by American Megatrends
Jul 16 00:54:46.307339 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf10018 MEMRESERVE=0xe4661f98
Jul 16 00:54:46.307344 kernel: random: crng init done
Jul 16 00:54:46.307351 kernel: secureboot: Secure boot disabled
Jul 16 00:54:46.307357 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878.
Jul 16 00:54:46.307364 kernel: ACPI: Early table checksum verification disabled
Jul 16 00:54:46.307370 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
Jul 16 00:54:46.307376 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
Jul 16 00:54:46.307381 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:54:46.307387 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
Jul 16 00:54:46.307393 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:54:46.307401 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
Jul 16 00:54:46.307407 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
Jul 16 00:54:46.307413 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:54:46.307419 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
Jul 16 00:54:46.307425 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:54:46.307431 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
Jul 16 00:54:46.307437 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:54:46.307443 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:54:46.307449 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:54:46.307455 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
Jul 16 00:54:46.307462 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
Jul 16 00:54:46.307468 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
Jul 16 00:54:46.307474 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
Jul 16 00:54:46.307480 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
Jul 16 00:54:46.307486 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
Jul 16 00:54:46.307492 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 16 00:54:46.307498 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
Jul 16 00:54:46.307504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
Jul 16 00:54:46.307510 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
Jul 16 00:54:46.307516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
Jul 16 00:54:46.307522 kernel: NUMA: Initialized distance table, cnt=1
Jul 16 00:54:46.307530 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff]
Jul 16 00:54:46.307536 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff]
Jul 16 00:54:46.307542 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff]
Jul 16 00:54:46.307548 kernel: NODE_DATA(0) allocated [mem 0x83fdffd8a00-0x83fdffdffff]
Jul 16 00:54:46.307555 kernel: Zone ranges:
Jul 16 00:54:46.307563 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
Jul 16 00:54:46.307571 kernel: DMA32 empty
Jul 16 00:54:46.307577 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
Jul 16 00:54:46.307584 kernel: Device empty
Jul 16 00:54:46.307590 kernel: Movable zone start for each node
Jul 16 00:54:46.307596 kernel: Early memory node ranges
Jul 16 00:54:46.307603 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
Jul 16 00:54:46.307609 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
Jul 16 00:54:46.307616 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
Jul 16 00:54:46.307622 kernel: node 0: [mem 0x0000000094000000-0x00000000eba2dfff]
Jul 16 00:54:46.307628 kernel: node 0: [mem 0x00000000eba2e000-0x00000000ebeaffff]
Jul 16 00:54:46.307636 kernel: node 0: [mem 0x00000000ebeb0000-0x00000000ebeb9fff]
Jul 16 00:54:46.307642 kernel: node 0: [mem 0x00000000ebeba000-0x00000000ebeccfff]
Jul 16 00:54:46.307649 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
Jul 16 00:54:46.307655 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
Jul 16 00:54:46.307661 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
Jul 16 00:54:46.307668 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
Jul 16 00:54:46.307674 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff]
Jul 16 00:54:46.307680 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff]
Jul 16 00:54:46.307687 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
Jul 16 00:54:46.307693 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
Jul 16 00:54:46.307699 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
Jul 16 00:54:46.307705 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
Jul 16 00:54:46.307713 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
Jul 16 00:54:46.307719 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
Jul 16 00:54:46.307726 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
Jul 16 00:54:46.307732 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
Jul 16 00:54:46.307739 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
Jul 16 00:54:46.307745 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
Jul 16 00:54:46.307751 kernel: cma: Reserved 16 MiB at 0x00000000fec00000 on node -1
Jul 16 00:54:46.307758 kernel: psci: probing for conduit method from ACPI.
Jul 16 00:54:46.307765 kernel: psci: PSCIv1.1 detected in firmware.
Jul 16 00:54:46.307771 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 16 00:54:46.307777 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 16 00:54:46.307785 kernel: psci: SMC Calling Convention v1.2
Jul 16 00:54:46.307791 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 16 00:54:46.307798 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
Jul 16 00:54:46.307804 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
Jul 16 00:54:46.307810 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
Jul 16 00:54:46.307817 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
Jul 16 00:54:46.307823 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
Jul 16 00:54:46.307829 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
Jul 16 00:54:46.307836 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
Jul 16 00:54:46.307842 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
Jul 16 00:54:46.307848 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
Jul 16 00:54:46.307855 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
Jul 16 00:54:46.307862 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
Jul 16 00:54:46.307868 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
Jul 16 00:54:46.307875 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
Jul 16 00:54:46.307881 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
Jul 16 00:54:46.307888 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
Jul 16 00:54:46.307894 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
Jul 16 00:54:46.307900 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
Jul 16 00:54:46.307907 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
Jul 16 00:54:46.307913 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
Jul 16 00:54:46.307919 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
Jul 16 00:54:46.307925 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
Jul 16 00:54:46.307932 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
Jul 16 00:54:46.307939 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
Jul 16 00:54:46.307950 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
Jul 16 00:54:46.307957 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
Jul 16 00:54:46.307963 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
Jul 16 00:54:46.307970 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
Jul 16 00:54:46.307976 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
Jul 16 00:54:46.307982 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
Jul 16 00:54:46.307989 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
Jul 16 00:54:46.307995 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
Jul 16 00:54:46.308001 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
Jul 16 00:54:46.308008 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
Jul 16 00:54:46.308015 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
Jul 16 00:54:46.308022 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
Jul 16 00:54:46.308028 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
Jul 16 00:54:46.308034 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
Jul 16 00:54:46.308041 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
Jul 16 00:54:46.308047 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
Jul 16 00:54:46.308053 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
Jul 16 00:54:46.308060 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
Jul 16 00:54:46.308066 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
Jul 16 00:54:46.308078 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
Jul 16 00:54:46.308087 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
Jul 16 00:54:46.308093 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
Jul 16 00:54:46.308100 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
Jul 16 00:54:46.308107 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
Jul 16 00:54:46.308114 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
Jul 16 00:54:46.308120 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
Jul 16 00:54:46.308128 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
Jul 16 00:54:46.308135 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
Jul 16 00:54:46.308142 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
Jul 16 00:54:46.308149 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
Jul 16 00:54:46.308156 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
Jul 16 00:54:46.308162 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
Jul 16 00:54:46.308169 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
Jul 16 00:54:46.308176 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
Jul 16 00:54:46.308183 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
Jul 16 00:54:46.308189 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
Jul 16 00:54:46.308196 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
Jul 16 00:54:46.308203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
Jul 16 00:54:46.308211 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
Jul 16 00:54:46.308218 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
Jul 16 00:54:46.308224 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
Jul 16 00:54:46.308231 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
Jul 16 00:54:46.308238 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
Jul 16 00:54:46.308244 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
Jul 16 00:54:46.308251 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
Jul 16 00:54:46.308258 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
Jul 16 00:54:46.308264 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
Jul 16 00:54:46.308271 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
Jul 16 00:54:46.308278 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
Jul 16 00:54:46.308284 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
Jul 16 00:54:46.308292 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
Jul 16 00:54:46.308299 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
Jul 16 00:54:46.308306 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
Jul 16 00:54:46.308312 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
Jul 16 00:54:46.308319 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
Jul 16 00:54:46.308326 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
Jul 16 00:54:46.308333 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 16 00:54:46.308339 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 16 00:54:46.308346 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
Jul 16 00:54:46.308353 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
Jul 16 00:54:46.308360 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
Jul 16 00:54:46.308368 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
Jul 16 00:54:46.308375 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
Jul 16 00:54:46.308382 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
Jul 16 00:54:46.308388 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
Jul 16 00:54:46.308395 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
Jul 16 00:54:46.308401 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
Jul 16 00:54:46.308408 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
Jul 16 00:54:46.308415 kernel: Detected PIPT I-cache on CPU0
Jul 16 00:54:46.308421 kernel: CPU features: detected: GIC system register CPU interface
Jul 16 00:54:46.308428 kernel: CPU features: detected: Virtualization Host Extensions
Jul 16 00:54:46.308435 kernel: CPU features: detected: Spectre-v4
Jul 16 00:54:46.308443 kernel: CPU features: detected: Spectre-BHB
Jul 16 00:54:46.308450 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 16 00:54:46.308457 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 16 00:54:46.308463 kernel: CPU features: detected: ARM erratum 1418040
Jul 16 00:54:46.308470 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 16 00:54:46.308477 kernel: alternatives: applying boot alternatives
Jul 16 00:54:46.308485 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 16 00:54:46.308492 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 16 00:54:46.308499 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 16 00:54:46.308506 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
Jul 16 00:54:46.308512 kernel: printk: log_buf_len min size: 262144 bytes
Jul 16 00:54:46.308520 kernel: printk: log_buf_len: 1048576 bytes
Jul 16 00:54:46.308527 kernel: printk: early log buf free: 249376(95%)
Jul 16 00:54:46.308534 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
Jul 16 00:54:46.308541 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
Jul 16 00:54:46.308548 kernel: Fallback order for Node 0: 0
Jul 16 00:54:46.308554 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584
Jul 16 00:54:46.308561 kernel: Policy zone: Normal
Jul 16 00:54:46.308568 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 16 00:54:46.308575 kernel: software IO TLB: area num 128.
Jul 16 00:54:46.308581 kernel: software IO TLB: mapped [mem 0x00000000fac00000-0x00000000fec00000] (64MB)
Jul 16 00:54:46.308588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
Jul 16 00:54:46.308596 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 16 00:54:46.308604 kernel: rcu: RCU event tracing is enabled.
Jul 16 00:54:46.308611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
Jul 16 00:54:46.308618 kernel: Trampoline variant of Tasks RCU enabled.
Jul 16 00:54:46.308624 kernel: Tracing variant of Tasks RCU enabled.
Jul 16 00:54:46.308632 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 16 00:54:46.308638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
Jul 16 00:54:46.308645 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 16 00:54:46.308652 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80.
Jul 16 00:54:46.308659 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 16 00:54:46.308666 kernel: GICv3: GIC: Using split EOI/Deactivate mode
Jul 16 00:54:46.308673 kernel: GICv3: 672 SPIs implemented
Jul 16 00:54:46.308681 kernel: GICv3: 0 Extended SPIs implemented
Jul 16 00:54:46.308688 kernel: Root IRQ handler: gic_handle_irq
Jul 16 00:54:46.308694 kernel: GICv3: GICv3 features: 16 PPIs
Jul 16 00:54:46.308701 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1
Jul 16 00:54:46.308708 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
Jul 16 00:54:46.308715 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
Jul 16 00:54:46.308721 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
Jul 16 00:54:46.308728 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
Jul 16 00:54:46.308734 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
Jul 16 00:54:46.308741 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
Jul 16 00:54:46.308748 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
Jul 16 00:54:46.308754 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
Jul 16 00:54:46.308762 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
Jul 16 00:54:46.308769 kernel: ITS [mem 0x100100040000-0x10010005ffff]
Jul 16 00:54:46.308776 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308783 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308790 kernel: ITS [mem 0x100100060000-0x10010007ffff]
Jul 16 00:54:46.308796 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308803 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308810 kernel: ITS [mem 0x100100080000-0x10010009ffff]
Jul 16 00:54:46.308817 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308824 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308830 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
Jul 16 00:54:46.308838 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308845 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308852 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
Jul 16 00:54:46.308859 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308866 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308873 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
Jul 16 00:54:46.308880 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308887 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308893 kernel: ITS [mem 0x100100100000-0x10010011ffff]
Jul 16 00:54:46.308900 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308907 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308915 kernel: ITS [mem 0x100100120000-0x10010013ffff]
Jul 16 00:54:46.308922 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000890000 (indirect, esz 8, psz 64K, shr 1)
Jul 16 00:54:46.308929 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800008a0000 (flat, esz 2, psz 64K, shr 1)
Jul 16 00:54:46.308936 kernel: GICv3: using LPI property table @0x00000800008b0000
Jul 16 00:54:46.308943 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800008c0000
Jul 16 00:54:46.308952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 16 00:54:46.308959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.308966 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
Jul 16 00:54:46.308973 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
Jul 16 00:54:46.308980 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 16 00:54:46.308987 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 16 00:54:46.308995 kernel: Console: colour dummy device 80x25
Jul 16 00:54:46.309002 kernel: printk: legacy console [tty0] enabled
Jul 16 00:54:46.309009 kernel: ACPI: Core revision 20240827
Jul 16 00:54:46.309016 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 16 00:54:46.309023 kernel: pid_max: default: 81920 minimum: 640
Jul 16 00:54:46.309030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 16 00:54:46.309037 kernel: landlock: Up and running.
Jul 16 00:54:46.309044 kernel: SELinux: Initializing.
Jul 16 00:54:46.309051 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 16 00:54:46.309058 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 16 00:54:46.309066 kernel: rcu: Hierarchical SRCU implementation.
Jul 16 00:54:46.309073 kernel: rcu: Max phase no-delay instances is 400.
Jul 16 00:54:46.309080 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Jul 16 00:54:46.309087 kernel: Remapping and enabling EFI services.
Jul 16 00:54:46.309094 kernel: smp: Bringing up secondary CPUs ...
Jul 16 00:54:46.309100 kernel: Detected PIPT I-cache on CPU1
Jul 16 00:54:46.309107 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
Jul 16 00:54:46.309114 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008d0000
Jul 16 00:54:46.309121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309129 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
Jul 16 00:54:46.309136 kernel: Detected PIPT I-cache on CPU2
Jul 16 00:54:46.309143 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
Jul 16 00:54:46.309150 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008e0000
Jul 16 00:54:46.309157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309164 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
Jul 16 00:54:46.309171 kernel: Detected PIPT I-cache on CPU3
Jul 16 00:54:46.309178 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
Jul 16 00:54:46.309185 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008f0000
Jul 16 00:54:46.309193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309200 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
Jul 16 00:54:46.309206 kernel: Detected PIPT I-cache on CPU4
Jul 16 00:54:46.309213 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
Jul 16 00:54:46.309220 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000900000
Jul 16 00:54:46.309227 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309234 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
Jul 16 00:54:46.309241 kernel: Detected PIPT I-cache on CPU5
Jul 16 00:54:46.309247 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
Jul 16 00:54:46.309254 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000910000
Jul 16 00:54:46.309263 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309269 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
Jul 16 00:54:46.309276 kernel: Detected PIPT I-cache on CPU6
Jul 16 00:54:46.309283 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
Jul 16 00:54:46.309290 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000920000
Jul 16 00:54:46.309297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309303 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
Jul 16 00:54:46.309310 kernel: Detected PIPT I-cache on CPU7
Jul 16 00:54:46.309317 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
Jul 16 00:54:46.309325 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000930000
Jul 16 00:54:46.309332 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309339 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
Jul 16 00:54:46.309346 kernel: Detected PIPT I-cache on CPU8
Jul 16 00:54:46.309353 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
Jul 16 00:54:46.309360 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000940000
Jul 16 00:54:46.309366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309373 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
Jul 16 00:54:46.309380 kernel: Detected PIPT I-cache on CPU9
Jul 16 00:54:46.309387 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
Jul 16 00:54:46.309395 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000950000
Jul 16 00:54:46.309402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309409 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
Jul 16 00:54:46.309416 kernel: Detected PIPT I-cache on CPU10
Jul 16 00:54:46.309422 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
Jul 16 00:54:46.309429 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000960000
Jul 16 00:54:46.309436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309443 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
Jul 16 00:54:46.309450 kernel: Detected PIPT I-cache on CPU11
Jul 16 00:54:46.309457 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
Jul 16 00:54:46.309465 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000970000
Jul 16 00:54:46.309472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309479 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
Jul 16 00:54:46.309486 kernel: Detected PIPT I-cache on CPU12
Jul 16 00:54:46.309492 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
Jul 16 00:54:46.309499 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000980000
Jul 16 00:54:46.309506 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309513 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
Jul 16 00:54:46.309520 kernel: Detected PIPT I-cache on CPU13
Jul 16 00:54:46.309528 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
Jul 16 00:54:46.309535 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000990000
Jul 16 00:54:46.309542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309548 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
Jul 16 00:54:46.309555 kernel: Detected PIPT I-cache on CPU14
Jul 16 00:54:46.309562 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
Jul 16 00:54:46.309569 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800009a0000
Jul 16 00:54:46.309576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309582 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
Jul 16 00:54:46.309591 kernel: Detected PIPT I-cache on CPU15
Jul 16 00:54:46.309598 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
Jul 16 00:54:46.309605 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800009b0000
Jul 16 00:54:46.309612 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309618 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
Jul 16 00:54:46.309625 kernel: Detected PIPT I-cache on CPU16
Jul 16 00:54:46.309632 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
Jul 16 00:54:46.309639 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800009c0000
Jul 16 00:54:46.309646 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309653 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
Jul 16 00:54:46.309661 kernel: Detected PIPT I-cache on CPU17
Jul 16 00:54:46.309668 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
Jul 16 00:54:46.309675 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009d0000
Jul 16 00:54:46.309681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309688 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
Jul 16 00:54:46.309695 kernel: Detected PIPT I-cache on CPU18
Jul 16 00:54:46.309702 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
Jul 16 00:54:46.309718 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009e0000
Jul 16 00:54:46.309726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309734 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
Jul 16 00:54:46.309741 kernel: Detected PIPT I-cache on CPU19
Jul 16 00:54:46.309749 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
Jul 16 00:54:46.309756 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009f0000
Jul 16 00:54:46.309763 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309770 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
Jul 16 00:54:46.309777 kernel: Detected PIPT I-cache on CPU20
Jul 16 00:54:46.309784 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
Jul 16 00:54:46.309792 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000a00000
Jul 16 00:54:46.309800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309807 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
Jul 16 00:54:46.309814 kernel: Detected PIPT I-cache on CPU21
Jul 16 00:54:46.309821 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
Jul 16 00:54:46.309828 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000a10000
Jul 16 00:54:46.309836 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309844 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
Jul 16 00:54:46.309853 kernel: Detected PIPT I-cache on CPU22
Jul 16 00:54:46.309860 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
Jul 16 00:54:46.309867 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000a20000
Jul 16 00:54:46.309875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309882 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
Jul 16 00:54:46.309889 kernel: Detected PIPT I-cache on CPU23
Jul 16 00:54:46.309896 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
Jul 16 00:54:46.309905 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a30000
Jul 16 00:54:46.309912 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309920 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
Jul 16 00:54:46.309928 kernel: Detected PIPT I-cache on CPU24
Jul 16 00:54:46.309935 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
Jul 16 00:54:46.309942 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a40000
Jul 16 00:54:46.309952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309959 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
Jul 16 00:54:46.309967 kernel: Detected PIPT I-cache on CPU25
Jul 16 00:54:46.309974 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
Jul 16 00:54:46.309981 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a50000
Jul 16 00:54:46.309990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.309998 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
Jul 16 00:54:46.310005 kernel: Detected PIPT I-cache on CPU26
Jul 16 00:54:46.310012 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
Jul 16 00:54:46.310019 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a60000
Jul 16 00:54:46.310027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310034 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
Jul 16 00:54:46.310041 kernel: Detected PIPT I-cache on CPU27
Jul 16 00:54:46.310048 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
Jul 16 00:54:46.310056 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a70000
Jul 16 00:54:46.310064 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310071 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
Jul 16 00:54:46.310078 kernel: Detected PIPT I-cache on CPU28
Jul 16 00:54:46.310086 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
Jul 16 00:54:46.310093 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a80000
Jul 16 00:54:46.310100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310107 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
Jul 16 00:54:46.310115 kernel: Detected PIPT I-cache on CPU29
Jul 16 00:54:46.310122 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
Jul 16 00:54:46.310131 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a90000
Jul 16 00:54:46.310138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310145 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
Jul 16 00:54:46.310153 kernel: Detected PIPT I-cache on CPU30
Jul 16 00:54:46.310160 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
Jul 16 00:54:46.310167 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000aa0000
Jul 16 00:54:46.310175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310182 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
Jul 16 00:54:46.310189 kernel: Detected PIPT I-cache on CPU31
Jul 16 00:54:46.310196 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
Jul 16 00:54:46.310205 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000ab0000
Jul 16 00:54:46.310212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310219 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
Jul 16 00:54:46.310226 kernel: Detected PIPT I-cache on CPU32
Jul 16 00:54:46.310233 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
Jul 16 00:54:46.310241 kernel: GICv3: CPU32: using allocated LPI pending table @0x0000080000ac0000
Jul 16 00:54:46.310248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310255 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1]
Jul 16 00:54:46.310262 kernel: Detected PIPT I-cache on CPU33
Jul 16 00:54:46.310271 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000
Jul 16 00:54:46.310278 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000ad0000
Jul 16 00:54:46.310286 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310293 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1]
Jul 16 00:54:46.310300 kernel: Detected PIPT I-cache on CPU34
Jul 16 00:54:46.310307 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000
Jul 16 00:54:46.310314 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ae0000
Jul 16 00:54:46.310322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310329 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1]
Jul 16 00:54:46.310336 kernel: Detected PIPT I-cache on CPU35
Jul 16 00:54:46.310344 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000
Jul 16 00:54:46.310352 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000af0000
Jul 16 00:54:46.310359 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310366 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1]
Jul 16 00:54:46.310373 kernel: Detected PIPT I-cache on CPU36
Jul 16 00:54:46.310380 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000
Jul 16 00:54:46.310388 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000b00000
Jul 16 00:54:46.310395 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310402 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1]
Jul 16 00:54:46.310410 kernel: Detected PIPT I-cache on CPU37
Jul 16 00:54:46.310418 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000
Jul 16 00:54:46.310425 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000b10000
Jul 16 00:54:46.310432 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310439 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1]
Jul 16 00:54:46.310446 kernel: Detected PIPT I-cache on CPU38
Jul 16 00:54:46.310453 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000
Jul 16 00:54:46.310461 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000b20000
Jul 16 00:54:46.310468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310475 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1]
Jul 16 00:54:46.310483 kernel: Detected PIPT I-cache on CPU39
Jul 16 00:54:46.310491 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000
Jul 16 00:54:46.310498 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b30000
Jul 16 00:54:46.310505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310512 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1]
Jul 16 00:54:46.310520 kernel: Detected PIPT I-cache on CPU40
Jul 16 00:54:46.310527 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000
Jul 16 00:54:46.310536 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b40000
Jul 16 00:54:46.310543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310550 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1]
Jul 16 00:54:46.310557 kernel: Detected PIPT I-cache on CPU41
Jul 16 00:54:46.310565 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000
Jul 16 00:54:46.310572 kernel: GICv3: CPU41: using allocated LPI pending table @0x0000080000b50000
Jul 16 00:54:46.310579 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310586 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1]
Jul 16 00:54:46.310593 kernel: Detected PIPT I-cache on CPU42
Jul 16 00:54:46.310601 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000
Jul 16 00:54:46.310609 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b60000
Jul 16 00:54:46.310617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310624 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1]
Jul 16 00:54:46.310631 kernel: Detected PIPT I-cache on CPU43
Jul 16 00:54:46.310638 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000
Jul 16 00:54:46.310645 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b70000
Jul 16 00:54:46.310653 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310660 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1]
Jul 16 00:54:46.310667 kernel: Detected PIPT I-cache on CPU44
Jul 16 00:54:46.310675 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000
Jul 16 00:54:46.310683 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b80000
Jul 16 00:54:46.310690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310697 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1]
Jul 16 00:54:46.310704 kernel: Detected PIPT I-cache on CPU45
Jul 16 00:54:46.310712 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000
Jul 16 00:54:46.310719 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b90000
Jul 16 00:54:46.310726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310733 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1]
Jul 16 00:54:46.310740 kernel: Detected PIPT I-cache on CPU46
Jul 16 00:54:46.310749 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000
Jul 16 00:54:46.310756 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ba0000
Jul 16 00:54:46.310764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310771 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1]
Jul 16 00:54:46.310778 kernel: Detected PIPT I-cache on CPU47
Jul 16 00:54:46.310785 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000
Jul 16 00:54:46.310792 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000bb0000
Jul 16 00:54:46.310800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310807 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1]
Jul 16 00:54:46.310815 kernel: Detected PIPT I-cache on CPU48
Jul 16 00:54:46.310824 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000
Jul 16 00:54:46.310831 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000bc0000
Jul 16 00:54:46.310840 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310847 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1]
Jul 16 00:54:46.310854 kernel: Detected PIPT I-cache on CPU49
Jul 16 00:54:46.310861 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000
Jul 16 00:54:46.310868 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000bd0000
Jul 16 00:54:46.310876 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310883 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1]
Jul 16 00:54:46.310891 kernel: Detected PIPT I-cache on CPU50
Jul 16 00:54:46.310899 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000
Jul 16 00:54:46.310906 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000be0000
Jul 16 00:54:46.310913 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310920 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1]
Jul 16 00:54:46.310927 kernel: Detected PIPT I-cache on CPU51
Jul 16 00:54:46.310935 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000
Jul 16 00:54:46.310942 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bf0000
Jul 16 00:54:46.310952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310960 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1]
Jul 16 00:54:46.310968 kernel: Detected PIPT I-cache on CPU52
Jul 16 00:54:46.310975 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000
Jul 16 00:54:46.310982 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000c00000
Jul 16 00:54:46.310989 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.310996 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1]
Jul 16 00:54:46.311004 kernel: Detected PIPT I-cache on CPU53
Jul 16 00:54:46.311011 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000
Jul 16 00:54:46.311018 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000c10000
Jul 16 00:54:46.311027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311034 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1]
Jul 16 00:54:46.311041 kernel: Detected PIPT I-cache on CPU54
Jul 16 00:54:46.311048 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000
Jul 16 00:54:46.311055 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000c20000
Jul 16 00:54:46.311063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311070 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1]
Jul 16 00:54:46.311077 kernel: Detected PIPT I-cache on CPU55
Jul 16 00:54:46.311084 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000
Jul 16 00:54:46.311091 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000c30000
Jul 16 00:54:46.311100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311107 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1]
Jul 16 00:54:46.311115 kernel: Detected PIPT I-cache on CPU56
Jul 16 00:54:46.311122 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000
Jul 16 00:54:46.311129 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c40000
Jul 16 00:54:46.311137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311144 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1]
Jul 16 00:54:46.311151 kernel: Detected PIPT I-cache on CPU57
Jul 16 00:54:46.311158 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000
Jul 16 00:54:46.311167 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c50000
Jul 16 00:54:46.311174 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311181 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1]
Jul 16 00:54:46.311188 kernel: Detected PIPT I-cache on CPU58
Jul 16 00:54:46.311195 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000
Jul 16 00:54:46.311203 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c60000
Jul 16 00:54:46.311210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311217 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1]
Jul 16 00:54:46.311224 kernel: Detected PIPT I-cache on CPU59
Jul 16 00:54:46.311231 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000
Jul 16 00:54:46.311240 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000c70000
Jul 16 00:54:46.311247 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311254 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1]
Jul 16 00:54:46.311262 kernel: Detected PIPT I-cache on CPU60
Jul 16 00:54:46.311269 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000
Jul 16 00:54:46.311276 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c80000
Jul 16 00:54:46.311283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311290 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1]
Jul 16 00:54:46.311298 kernel: Detected PIPT I-cache on CPU61
Jul 16 00:54:46.311306 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000
Jul 16 00:54:46.311314 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c90000
Jul 16 00:54:46.311321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311328 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1]
Jul 16 00:54:46.311335 kernel: Detected PIPT I-cache on CPU62
Jul 16 00:54:46.311342 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000
Jul 16 00:54:46.311349 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000ca0000
Jul 16 00:54:46.311357 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311364 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1]
Jul 16 00:54:46.311371 kernel: Detected PIPT I-cache on CPU63
Jul 16 00:54:46.311380 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000
Jul 16 00:54:46.311387 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000cb0000
Jul 16 00:54:46.311394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311401 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1]
Jul 16 00:54:46.311408 kernel: Detected PIPT I-cache on CPU64
Jul 16 00:54:46.311416 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000
Jul 16 00:54:46.311423 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000cc0000
Jul 16 00:54:46.311430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311437 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1]
Jul 16 00:54:46.311446 kernel: Detected PIPT I-cache on CPU65
Jul 16 00:54:46.311453 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000
Jul 16 00:54:46.311461 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000cd0000
Jul 16 00:54:46.311468 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311475 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1]
Jul 16 00:54:46.311482 kernel: Detected PIPT I-cache on CPU66
Jul 16 00:54:46.311489 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000
Jul 16 00:54:46.311497 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ce0000
Jul 16 00:54:46.311504 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311511 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1]
Jul 16 00:54:46.311520 kernel: Detected PIPT I-cache on CPU67
Jul 16 00:54:46.311527 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000
Jul 16 00:54:46.311534 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cf0000
Jul 16 00:54:46.311542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311549 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1]
Jul 16 00:54:46.311556 kernel: Detected PIPT I-cache on CPU68
Jul 16 00:54:46.311563 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000
Jul 16 00:54:46.311570 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000d00000
Jul 16 00:54:46.311578 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311586 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1]
Jul 16 00:54:46.311593 kernel: Detected PIPT I-cache on CPU69
Jul 16 00:54:46.311601 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000
Jul 16 00:54:46.311608 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000d10000
Jul 16 00:54:46.311615 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311622 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1]
Jul 16 00:54:46.311629 kernel: Detected PIPT I-cache on CPU70
Jul 16 00:54:46.311636 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000
Jul 16 00:54:46.311644 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000d20000
Jul 16 00:54:46.311652 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311659 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1]
Jul 16 00:54:46.311666 kernel: Detected PIPT I-cache on CPU71
Jul 16 00:54:46.311674 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000
Jul 16 00:54:46.311681 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d30000
Jul 16 00:54:46.311688 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311695 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1]
Jul 16 00:54:46.311702 kernel: Detected PIPT I-cache on CPU72
Jul 16 00:54:46.311710 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000
Jul 16 00:54:46.311717 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d40000
Jul 16 00:54:46.311725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311733 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1]
Jul 16 00:54:46.311740 kernel: Detected PIPT I-cache on CPU73
Jul 16 00:54:46.311747 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000
Jul 16 00:54:46.311754 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d50000
Jul 16 00:54:46.311761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311768 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1]
Jul 16 00:54:46.311776 kernel: Detected PIPT I-cache on CPU74
Jul 16 00:54:46.311783 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000
Jul 16 00:54:46.311791 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d60000
Jul 16 00:54:46.311799 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311806 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1]
Jul 16 00:54:46.311813 kernel: Detected PIPT I-cache on CPU75
Jul 16 00:54:46.311820 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000
Jul 16 00:54:46.311827 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d70000
Jul 16 00:54:46.311834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311842 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1]
Jul 16 00:54:46.311849 kernel: Detected PIPT I-cache on CPU76
Jul 16 00:54:46.311856 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000
Jul 16 00:54:46.311865 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d80000
Jul 16 00:54:46.311872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311879 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1]
Jul 16 00:54:46.311886 kernel: Detected PIPT I-cache on CPU77
Jul 16 00:54:46.311893 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000
Jul 16 00:54:46.311901 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000d90000
Jul 16 00:54:46.311908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311915 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1]
Jul 16 00:54:46.311922 kernel: Detected PIPT I-cache on CPU78
Jul 16 00:54:46.311931 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000
Jul 16 00:54:46.311938 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000da0000
Jul 16 00:54:46.311948 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311955 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1]
Jul 16 00:54:46.311962 kernel: Detected PIPT I-cache on CPU79
Jul 16 00:54:46.311970 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000
Jul 16 00:54:46.311977 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000db0000
Jul 16 00:54:46.311984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 16 00:54:46.311992 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1]
Jul 16 00:54:46.311999 kernel: smp: Brought up 1 node, 80 CPUs
Jul 16 00:54:46.312008 kernel: SMP: Total of 80 processors activated.
Jul 16 00:54:46.312015 kernel: CPU: All CPU(s) started at EL2
Jul 16 00:54:46.312022 kernel: CPU features: detected: 32-bit EL0 Support
Jul 16 00:54:46.312030 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 16 00:54:46.312037 kernel: CPU features: detected: Common not Private translations
Jul 16 00:54:46.312044 kernel: CPU features: detected: CRC32 instructions
Jul 16 00:54:46.312051 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 16 00:54:46.312059 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 16 00:54:46.312066 kernel: CPU features: detected: LSE atomic instructions
Jul 16 00:54:46.312074 kernel: CPU features: detected: Privileged Access Never
Jul 16 00:54:46.312082 kernel: CPU features: detected: RAS Extension Support
Jul 16 00:54:46.312089 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 16 00:54:46.312096 kernel: alternatives: applying system-wide alternatives
Jul 16 00:54:46.312104 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79
Jul 16 00:54:46.312111 kernel: Memory: 262843548K/268174336K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 5254856K reserved, 16384K cma-reserved)
Jul 16 00:54:46.312118 kernel: devtmpfs: initialized
Jul 16 00:54:46.312126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 16 00:54:46.312133 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 16 00:54:46.312142 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 16 00:54:46.312149 kernel: 0 pages in range for non-PLT usage
Jul 16 00:54:46.312156 kernel: 508432 pages in range for PLT usage
Jul 16 00:54:46.312163 kernel: pinctrl core: initialized pinctrl subsystem
Jul 16 00:54:46.312171 kernel: SMBIOS 3.4.0 present.
Jul 16 00:54:46.312178 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021
Jul 16 00:54:46.312185 kernel: DMI: Memory slots populated: 8/16
Jul 16 00:54:46.312192 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 16 00:54:46.312199 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations
Jul 16 00:54:46.312208 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 16 00:54:46.312215 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 16 00:54:46.312223 kernel: audit: initializing netlink subsys (disabled)
Jul 16 00:54:46.312230 kernel: audit: type=2000 audit(0.065:1): state=initialized audit_enabled=0 res=1
Jul 16 00:54:46.312237 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 16 00:54:46.312244 kernel: cpuidle: using governor menu
Jul 16 00:54:46.312252 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 16 00:54:46.312259 kernel: ASID allocator initialised with 32768 entries
Jul 16 00:54:46.312266 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 16 00:54:46.312275 kernel: Serial: AMBA PL011 UART driver
Jul 16 00:54:46.312282 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 16 00:54:46.312290 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 16 00:54:46.312297 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 16 00:54:46.312304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 16 00:54:46.312311 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 16 00:54:46.312319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 16 00:54:46.312326 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 16 00:54:46.312333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 16 00:54:46.312341 kernel: ACPI: Added _OSI(Module Device)
Jul 16 00:54:46.312349 kernel: ACPI: Added _OSI(Processor Device)
Jul 16 00:54:46.312356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 16 00:54:46.312363 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded
Jul 16 00:54:46.312370 kernel: ACPI: Interpreter enabled
Jul 16 00:54:46.312378 kernel: ACPI: Using GIC for interrupt routing
Jul 16 00:54:46.312385 kernel: ACPI: MCFG table detected, 8 entries
Jul 16 00:54:46.312392 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312399 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312408 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312415 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312422 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312429 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312437 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312444 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0
Jul 16 00:54:46.312451 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA
Jul 16 00:54:46.312459 kernel: printk: legacy console [ttyAMA0] enabled
Jul 16 00:54:46.312466 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA
Jul 16 00:54:46.312475 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff])
Jul 16 00:54:46.312603 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 16 00:54:46.312668 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR]
Jul 16 00:54:46.312725 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability]
Jul 16 00:54:46.312780 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops
Jul 16 00:54:46.312835 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00
Jul 16 00:54:46.312893 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff]
Jul 16 00:54:46.312903 kernel: PCI host bridge to bus 000d:00
Jul 16 00:54:46.312974 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window]
Jul 16 00:54:46.313028 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window]
Jul 16 00:54:46.313079 kernel:
pci_bus 000d:00: root bus resource [bus 00-ff] Jul 16 00:54:46.313156 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.313226 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.313288 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.313347 kernel: pci 000d:00:01.0: enabling Extended Tags Jul 16 00:54:46.313407 kernel: pci 000d:00:01.0: supports D1 D2 Jul 16 00:54:46.313466 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.313533 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.313591 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.313650 kernel: pci 000d:00:02.0: supports D1 D2 Jul 16 00:54:46.313707 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.313773 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.313831 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.313887 kernel: pci 000d:00:03.0: supports D1 D2 Jul 16 00:54:46.313947 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.314015 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.314076 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.314133 kernel: pci 000d:00:04.0: supports D1 D2 Jul 16 00:54:46.314190 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.314199 kernel: acpiphp: Slot [1] registered Jul 16 00:54:46.314207 kernel: acpiphp: Slot [2] registered Jul 16 00:54:46.314214 kernel: acpiphp: Slot [3] registered Jul 16 00:54:46.314221 kernel: acpiphp: Slot [4] registered Jul 16 00:54:46.314271 kernel: pci_bus 000d:00: on NUMA node 0 Jul 16 00:54:46.314330 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.314390 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.314448 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.314505 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.314562 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.314620 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.314678 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.314739 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.314797 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.314855 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.314912 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.314973 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.315031 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned Jul 16 00:54:46.315088 kernel: pci 000d:00:01.0: bridge window [mem 
0x340000000000-0x3400001fffff 64bit pref]: assigned Jul 16 00:54:46.315147 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff]: assigned Jul 16 00:54:46.315205 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned Jul 16 00:54:46.315262 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned Jul 16 00:54:46.315318 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned Jul 16 00:54:46.315375 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned Jul 16 00:54:46.315432 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned Jul 16 00:54:46.315489 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.315546 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.315605 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.315663 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.315720 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.315776 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.315833 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.315890 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.315950 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.316010 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.316066 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.316123 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.316181 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.316238 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.316295 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.316351 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.316408 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.316467 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jul 16 00:54:46.316524 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 16 00:54:46.316581 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.316638 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jul 16 00:54:46.316696 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 16 00:54:46.316752 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.316809 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Jul 16 00:54:46.316868 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 16 00:54:46.316925 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.316985 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jul 16 00:54:46.317042 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 16 00:54:46.317094 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jul 16 00:54:46.317146 kernel: 
pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jul 16 00:54:46.317211 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] Jul 16 00:54:46.317265 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 16 00:54:46.317327 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jul 16 00:54:46.317380 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 16 00:54:46.317449 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Jul 16 00:54:46.317502 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 16 00:54:46.317561 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jul 16 00:54:46.317618 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 16 00:54:46.317628 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jul 16 00:54:46.317691 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.317748 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.317802 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.317858 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.317918 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.318026 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jul 16 00:54:46.318037 kernel: PCI host bridge to bus 0000:00 Jul 16 00:54:46.318100 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jul 16 00:54:46.318153 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jul 16 00:54:46.318205 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 16 00:54:46.318272 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.318341 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.318400 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.318460 kernel: pci 0000:00:01.0: enabling Extended Tags Jul 16 00:54:46.318522 kernel: pci 0000:00:01.0: supports D1 D2 Jul 16 00:54:46.318583 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.318648 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.318710 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.318771 kernel: pci 0000:00:02.0: supports D1 D2 Jul 16 00:54:46.318829 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.318894 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.318957 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.319015 kernel: pci 0000:00:03.0: supports D1 D2 Jul 16 00:54:46.319073 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.319137 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.319197 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.319254 kernel: pci 0000:00:04.0: supports D1 D2 Jul 16 00:54:46.319310 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.319320 kernel: acpiphp: Slot [1-1] registered Jul 16 00:54:46.319327 kernel: acpiphp: Slot [2-1] registered Jul 16 00:54:46.319334 
kernel: acpiphp: Slot [3-1] registered Jul 16 00:54:46.319341 kernel: acpiphp: Slot [4-1] registered Jul 16 00:54:46.319391 kernel: pci_bus 0000:00: on NUMA node 0 Jul 16 00:54:46.319451 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.319509 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.319567 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.319624 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.319683 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.319741 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.319798 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.319857 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.319915 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.319976 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.320034 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.320091 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.320148 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned Jul 16 00:54:46.320206 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned Jul 16 00:54:46.320265 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned Jul 16 00:54:46.320322 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned Jul 16 00:54:46.320378 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned Jul 16 00:54:46.320435 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned Jul 16 00:54:46.320492 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned Jul 16 00:54:46.320549 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned Jul 16 00:54:46.320606 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.320665 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.320722 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.320780 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.320836 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.320893 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.320954 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.321011 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.321068 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.321127 kernel: 
pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.321185 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.321242 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.321299 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.321355 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.321414 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.321471 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.321528 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.321585 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jul 16 00:54:46.321642 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 16 00:54:46.321699 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.321756 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jul 16 00:54:46.321815 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 16 00:54:46.321872 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.321928 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jul 16 00:54:46.321988 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 16 00:54:46.322046 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.322105 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jul 16 00:54:46.322163 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 16 00:54:46.322217 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jul 16 00:54:46.322268 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jul 16 00:54:46.322330 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jul 16 00:54:46.322383 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 16 00:54:46.322444 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jul 16 00:54:46.322497 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 16 00:54:46.322566 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jul 16 00:54:46.322620 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 16 00:54:46.322681 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jul 16 00:54:46.322734 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 16 00:54:46.322743 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jul 16 00:54:46.322807 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.322863 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.322920 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.322979 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.323034 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.323088 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jul 16 00:54:46.323097 kernel: PCI host bridge to bus 
0005:00 Jul 16 00:54:46.323156 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jul 16 00:54:46.323210 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jul 16 00:54:46.323260 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jul 16 00:54:46.323327 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.323392 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.323449 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.323507 kernel: pci 0005:00:01.0: supports D1 D2 Jul 16 00:54:46.323564 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.323629 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.323687 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.323744 kernel: pci 0005:00:03.0: supports D1 D2 Jul 16 00:54:46.323800 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.323863 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.323921 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 16 00:54:46.323982 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff] Jul 16 00:54:46.324042 kernel: pci 0005:00:05.0: supports D1 D2 Jul 16 00:54:46.324099 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.324164 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.324223 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 16 00:54:46.324280 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff] Jul 16 00:54:46.324336 kernel: pci 0005:00:07.0: supports D1 D2 Jul 16 00:54:46.324393 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.324402 kernel: acpiphp: Slot [1-2] registered Jul 16 00:54:46.324411 kernel: acpiphp: Slot [2-2] registered Jul 16 00:54:46.324476 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 16 00:54:46.324538 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit] Jul 16 00:54:46.324597 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref] Jul 16 00:54:46.324662 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 16 00:54:46.324722 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit] Jul 16 00:54:46.324780 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref] Jul 16 00:54:46.324834 kernel: pci_bus 0005:00: on NUMA node 0 Jul 16 00:54:46.324892 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.324952 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.325011 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.325069 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.325129 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.325187 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.325246 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.325304 kernel: pci 0005:00:05.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.325362 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 16 00:54:46.325420 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.325478 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.325535 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jul 16 00:54:46.325593 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned Jul 16 00:54:46.325652 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned Jul 16 00:54:46.325708 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned Jul 16 00:54:46.325765 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned Jul 16 00:54:46.325822 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned Jul 16 00:54:46.325879 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned Jul 16 00:54:46.325936 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned Jul 16 00:54:46.325996 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned Jul 16 00:54:46.326056 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326114 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326172 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326229 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326286 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326342 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326399 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326456 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326515 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326571 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326628 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326685 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326742 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326799 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326856 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.326915 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.326976 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.327033 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jul 16 00:54:46.327090 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 16 00:54:46.327147 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.327205 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jul 16 
00:54:46.327261 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 16 00:54:46.327323 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned Jul 16 00:54:46.327382 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned Jul 16 00:54:46.327439 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 16 00:54:46.327497 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jul 16 00:54:46.327555 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 16 00:54:46.327614 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned Jul 16 00:54:46.327672 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned Jul 16 00:54:46.327729 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 16 00:54:46.327787 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jul 16 00:54:46.327845 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 16 00:54:46.327896 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jul 16 00:54:46.327950 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jul 16 00:54:46.328013 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jul 16 00:54:46.328066 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 16 00:54:46.328136 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jul 16 00:54:46.328192 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 16 00:54:46.328252 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jul 16 00:54:46.328305 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 16 00:54:46.328365 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Jul 16 00:54:46.328418 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 16 00:54:46.328429 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jul 16 00:54:46.328492 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.328548 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.328603 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.328658 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.328713 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.328767 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jul 16 00:54:46.328778 kernel: PCI host bridge to bus 0003:00 Jul 16 00:54:46.328837 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jul 16 00:54:46.328889 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jul 16 00:54:46.328939 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jul 16 00:54:46.329010 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.329076 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.329134 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.329194 kernel: pci 0003:00:01.0: supports D1 D2 Jul 16 00:54:46.329251 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jul 16 
00:54:46.329315 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.329372 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.329430 kernel: pci 0003:00:03.0: supports D1 D2 Jul 16 00:54:46.329487 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.329551 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.329611 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 16 00:54:46.329668 kernel: pci 0003:00:05.0: bridge window [io 0x0000-0x0fff] Jul 16 00:54:46.329725 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff] Jul 16 00:54:46.329782 kernel: pci 0003:00:05.0: bridge window [mem 0x240000000000-0x2400000fffff 64bit pref] Jul 16 00:54:46.329839 kernel: pci 0003:00:05.0: supports D1 D2 Jul 16 00:54:46.329897 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.329906 kernel: acpiphp: Slot [1-3] registered Jul 16 00:54:46.329913 kernel: acpiphp: Slot [2-3] registered Jul 16 00:54:46.329984 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 16 00:54:46.330046 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff] Jul 16 00:54:46.330105 kernel: pci 0003:03:00.0: BAR 2 [io 0x0020-0x003f] Jul 16 00:54:46.330163 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff] Jul 16 00:54:46.330221 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:54:46.330280 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x240000063fff 64bit pref] Jul 16 00:54:46.330338 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x24000007ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:54:46.330399 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x240000043fff 64bit pref] Jul 16 00:54:46.330457 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x24000005ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 16 00:54:46.330516 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jul 16 00:54:46.330583 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 16 00:54:46.330642 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff] Jul 16 00:54:46.330700 kernel: pci 0003:03:00.1: BAR 2 [io 0x0000-0x001f] Jul 16 00:54:46.330758 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff] Jul 16 00:54:46.330818 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jul 16 00:54:46.330877 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x240000023fff 64bit pref] Jul 16 00:54:46.330950 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x24000003ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:54:46.331010 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x240000003fff 64bit pref] Jul 16 00:54:46.331069 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x24000001ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 16 00:54:46.331121 kernel: pci_bus 0003:00: on NUMA node 0 Jul 16 00:54:46.331180 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.331238 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.331298 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.331356 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to 
[bus 02] add_size 1000 Jul 16 00:54:46.331413 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.331471 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.331528 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Jul 16 00:54:46.331585 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Jul 16 00:54:46.331642 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jul 16 00:54:46.331701 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned Jul 16 00:54:46.331758 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned Jul 16 00:54:46.331815 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned Jul 16 00:54:46.331872 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned Jul 16 00:54:46.331929 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]: assigned Jul 16 00:54:46.331989 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332047 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332105 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332164 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332222 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332279 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332336 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332393 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332451 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332511 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332568 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.332625 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.332682 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.332739 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 16 00:54:46.332798 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 16 00:54:46.332855 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.332914 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 16 00:54:46.332975 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 16 00:54:46.333035 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned Jul 16 00:54:46.333094 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned Jul 16 00:54:46.333153 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned Jul 16 00:54:46.333211 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned Jul 16 00:54:46.333270 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned Jul 16 00:54:46.333331 
kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned Jul 16 00:54:46.333390 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned Jul 16 00:54:46.333449 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned Jul 16 00:54:46.333508 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 16 00:54:46.333567 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 16 00:54:46.333626 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 16 00:54:46.333684 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 16 00:54:46.333745 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 16 00:54:46.333803 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 16 00:54:46.333862 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 16 00:54:46.333920 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 16 00:54:46.333981 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 16 00:54:46.334038 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 16 00:54:46.334096 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 16 00:54:46.334148 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 16 00:54:46.334201 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jul 16 00:54:46.334252 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jul 16 00:54:46.334314 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jul 16 00:54:46.334367 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 16 00:54:46.334436 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jul 16 00:54:46.334489 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 16 00:54:46.334550 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jul 16 00:54:46.334606 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 16 00:54:46.334616 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jul 16 00:54:46.334678 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.334734 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.334789 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.334844 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.334899 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.334962 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jul 16 00:54:46.334973 kernel: PCI host bridge to bus 000c:00 Jul 16 00:54:46.335030 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jul 16 00:54:46.335082 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jul 16 00:54:46.335132 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Jul 16 00:54:46.335197 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.335266 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 16 
00:54:46.335325 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.335384 kernel: pci 000c:00:01.0: enabling Extended Tags Jul 16 00:54:46.335440 kernel: pci 000c:00:01.0: supports D1 D2 Jul 16 00:54:46.335498 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.335563 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.335622 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.335681 kernel: pci 000c:00:02.0: supports D1 D2 Jul 16 00:54:46.335738 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.335803 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.335861 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.335918 kernel: pci 000c:00:03.0: supports D1 D2 Jul 16 00:54:46.335979 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.336043 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.336101 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.336161 kernel: pci 000c:00:04.0: supports D1 D2 Jul 16 00:54:46.336218 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.336228 kernel: acpiphp: Slot [1-4] registered Jul 16 00:54:46.336235 kernel: acpiphp: Slot [2-4] registered Jul 16 00:54:46.336243 kernel: acpiphp: Slot [3-2] registered Jul 16 00:54:46.336250 kernel: acpiphp: Slot [4-2] registered Jul 16 00:54:46.336300 kernel: pci_bus 000c:00: on NUMA node 0 Jul 16 00:54:46.336358 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.336418 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.336476 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.336533 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.336593 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.336652 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.336709 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.336766 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.336825 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.336882 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.336939 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.337000 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.337057 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jul 16 00:54:46.337115 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jul 16 00:54:46.337172 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jul 16 00:54:46.337233 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jul 16 
00:54:46.337291 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jul 16 00:54:46.337348 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jul 16 00:54:46.337405 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]: assigned Jul 16 00:54:46.337464 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jul 16 00:54:46.337522 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.337580 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.337638 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.337697 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.337754 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.337811 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.337868 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.337926 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.337986 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.338044 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.338101 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.338160 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.338217 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.338274 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.338332 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.338389 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.338446 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.338503 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 16 00:54:46.338560 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 16 00:54:46.338619 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.338678 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 16 00:54:46.338736 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 16 00:54:46.338793 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.338852 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jul 16 00:54:46.338909 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 16 00:54:46.338970 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.339027 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 16 00:54:46.339085 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 16 00:54:46.339137 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 16 00:54:46.339187 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 16 00:54:46.339251 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 16 00:54:46.339305 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 16 00:54:46.339367 
kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 16 00:54:46.339421 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 16 00:54:46.339489 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 16 00:54:46.339542 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 16 00:54:46.339605 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 16 00:54:46.339659 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 16 00:54:46.339668 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 16 00:54:46.339731 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.339786 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.339841 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.339899 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.339958 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.340014 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 16 00:54:46.340025 kernel: PCI host bridge to bus 0002:00 Jul 16 00:54:46.340084 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 16 00:54:46.340136 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 16 00:54:46.340187 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 16 00:54:46.340253 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.340320 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.340379 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.340437 kernel: pci 0002:00:01.0: supports D1 D2 Jul 16 00:54:46.340494 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.340558 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.340616 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.340675 kernel: pci 0002:00:03.0: supports D1 D2 Jul 16 00:54:46.340733 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.340798 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.340856 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 16 00:54:46.340913 kernel: pci 0002:00:05.0: supports D1 D2 Jul 16 00:54:46.340975 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.341038 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.341098 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 16 00:54:46.341156 kernel: pci 0002:00:07.0: supports D1 D2 Jul 16 00:54:46.341212 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.341222 kernel: acpiphp: Slot [1-5] registered Jul 16 00:54:46.341229 kernel: acpiphp: Slot [2-5] registered Jul 16 00:54:46.341237 kernel: acpiphp: Slot [3-3] registered Jul 16 00:54:46.341245 kernel: acpiphp: Slot [4-3] registered Jul 16 00:54:46.341295 kernel: pci_bus 0002:00: on NUMA node 0 Jul 16 00:54:46.341353 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.341413 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.341471 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 16 00:54:46.341528 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.341586 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.341644 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.341702 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.341759 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.341818 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.341876 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.341933 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.341995 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.342054 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jul 16 00:54:46.342112 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jul 16 00:54:46.342171 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jul 16 00:54:46.342229 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jul 16 00:54:46.342286 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jul 16 00:54:46.342344 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jul 16 00:54:46.342404 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jul 16 00:54:46.342461 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jul 16 00:54:46.342519 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.342577 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.342636 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.342693 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.342751 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.342808 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.342865 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.342922 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.342983 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.343041 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.343100 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.343157 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.343214 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't 
assign; no space Jul 16 00:54:46.343271 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.343328 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.343386 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.343443 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.343500 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 16 00:54:46.343557 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 16 00:54:46.343617 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 16 00:54:46.343674 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 16 00:54:46.343732 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 16 00:54:46.343789 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 16 00:54:46.343847 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 16 00:54:46.343904 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 16 00:54:46.343973 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 16 00:54:46.344032 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 16 00:54:46.344090 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 16 00:54:46.344142 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 16 00:54:46.344194 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 16 00:54:46.344256 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 16 00:54:46.344313 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 16 00:54:46.344375 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 16 00:54:46.344429 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 16 00:54:46.344496 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jul 16 00:54:46.344550 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 16 00:54:46.344609 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 16 00:54:46.344663 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 16 00:54:46.344675 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 16 00:54:46.344737 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.344793 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.344850 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.344905 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.344975 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.345054 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 16 00:54:46.345065 kernel: PCI host bridge to bus 0001:00 Jul 16 00:54:46.345124 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 16 00:54:46.345176 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 16 00:54:46.345228 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 16 00:54:46.345294 kernel: pci 0001:00:00.0: [1def:e100] 
type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.345363 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.345421 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.345479 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 16 00:54:46.345536 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 16 00:54:46.345593 kernel: pci 0001:00:01.0: enabling Extended Tags Jul 16 00:54:46.345650 kernel: pci 0001:00:01.0: supports D1 D2 Jul 16 00:54:46.345709 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.345775 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.345833 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.345890 kernel: pci 0001:00:02.0: supports D1 D2 Jul 16 00:54:46.345950 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.346017 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.346074 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.346132 kernel: pci 0001:00:03.0: supports D1 D2 Jul 16 00:54:46.346191 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.346256 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.346315 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.346372 kernel: pci 0001:00:04.0: supports D1 D2 Jul 16 00:54:46.346429 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.346439 kernel: acpiphp: Slot [1-6] registered Jul 16 00:54:46.346505 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:54:46.346564 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 16 00:54:46.346626 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jul 16 00:54:46.346685 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 16 00:54:46.346745 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:54:46.346812 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:54:46.346873 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 16 00:54:46.346932 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff pref] Jul 16 00:54:46.346996 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 16 00:54:46.347008 kernel: acpiphp: Slot [2-6] registered Jul 16 00:54:46.347015 kernel: acpiphp: Slot [3-4] registered Jul 16 00:54:46.347023 kernel: acpiphp: Slot [4-4] registered Jul 16 00:54:46.347074 kernel: pci_bus 0001:00: on NUMA node 0 Jul 16 00:54:46.347132 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 16 00:54:46.347190 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 16 00:54:46.347247 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.347305 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 16 00:54:46.347365 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.347422 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 
add_align 100000 Jul 16 00:54:46.347480 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.347539 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.347597 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.347655 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.347712 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref]: assigned Jul 16 00:54:46.347772 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jul 16 00:54:46.347829 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jul 16 00:54:46.347886 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref]: assigned Jul 16 00:54:46.347943 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jul 16 00:54:46.348005 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref]: assigned Jul 16 00:54:46.348062 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jul 16 00:54:46.348120 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref]: assigned Jul 16 00:54:46.348180 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348237 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348294 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348351 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348409 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348466 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348523 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348582 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348641 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348699 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348756 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348813 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348870 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.348927 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.348988 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.349046 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.349106 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jul 16 00:54:46.349168 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jul 16 00:54:46.349227 kernel: pci 0001:01:00.0: ROM [mem 0x60000000-0x600fffff pref]: assigned Jul 16 00:54:46.349286 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jul 16 00:54:46.349343 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 16 00:54:46.349400 
kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 16 00:54:46.349458 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 16 00:54:46.349515 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 16 00:54:46.349574 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 16 00:54:46.349631 kernel: pci 0001:00:02.0: bridge window [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 16 00:54:46.349689 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.349746 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 16 00:54:46.349803 kernel: pci 0001:00:03.0: bridge window [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 16 00:54:46.349860 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 16 00:54:46.349921 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 16 00:54:46.349983 kernel: pci 0001:00:04.0: bridge window [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 16 00:54:46.350035 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 16 00:54:46.350086 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 16 00:54:46.350147 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 16 00:54:46.350201 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380003ffffff 64bit pref] Jul 16 00:54:46.350269 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 16 00:54:46.350325 kernel: pci_bus 0001:02: resource 2 [mem 0x380004000000-0x3800041fffff 64bit pref] Jul 16 00:54:46.350387 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 16 00:54:46.350440 kernel: pci_bus 0001:03: resource 2 [mem 0x380004200000-0x3800043fffff 64bit pref] Jul 16 00:54:46.350500 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 16 00:54:46.350553 kernel: pci_bus 0001:04: resource 2 [mem 0x380004400000-0x3800045fffff 64bit pref] Jul 16 00:54:46.350563 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 16 00:54:46.350628 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:54:46.350684 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 16 00:54:46.350739 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 16 00:54:46.350794 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 16 00:54:46.350849 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 16 00:54:46.350904 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 16 00:54:46.350914 kernel: PCI host bridge to bus 0004:00 Jul 16 00:54:46.350977 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 16 00:54:46.351030 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 16 00:54:46.351081 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 16 00:54:46.351145 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:54:46.351210 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.351269 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 16 00:54:46.351328 kernel: pci 0004:00:01.0: bridge window [io 0x0000-0x0fff] Jul 16 00:54:46.351388 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jul 16 
00:54:46.351445 kernel: pci 0004:00:01.0: supports D1 D2 Jul 16 00:54:46.351502 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.351566 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.351624 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.351681 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jul 16 00:54:46.351738 kernel: pci 0004:00:03.0: supports D1 D2 Jul 16 00:54:46.351797 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.351860 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 16 00:54:46.351918 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 16 00:54:46.351980 kernel: pci 0004:00:05.0: supports D1 D2 Jul 16 00:54:46.352038 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 16 00:54:46.352105 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 16 00:54:46.352165 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 16 00:54:46.352226 kernel: pci 0004:01:00.0: bridge window [io 0x0000-0x0fff] Jul 16 00:54:46.352285 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jul 16 00:54:46.352344 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 16 00:54:46.352404 kernel: pci 0004:01:00.0: supports D1 D2 Jul 16 00:54:46.352463 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:54:46.352527 kernel: pci_bus 0004:02: extended config space not accessible Jul 16 00:54:46.352597 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 16 00:54:46.352663 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jul 16 00:54:46.352724 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jul 16 00:54:46.352785 kernel: pci 0004:02:00.0: BAR 2 [io 0x0000-0x007f] Jul 16 00:54:46.352846 kernel: pci 0004:02:00.0: supports D1 D2 Jul 16 00:54:46.352907 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:54:46.352986 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jul 16 00:54:46.353048 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jul 16 00:54:46.353110 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:54:46.353162 kernel: pci_bus 0004:00: on NUMA node 0 Jul 16 00:54:46.353221 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 16 00:54:46.353279 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 16 00:54:46.353338 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 16 00:54:46.353396 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 16 00:54:46.353454 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 16 00:54:46.353514 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.353571 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 16 00:54:46.353629 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 16 00:54:46.353686 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: assigned Jul 16 00:54:46.353744 kernel: pci 
0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jul 16 00:54:46.353801 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jul 16 00:54:46.353859 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]: assigned Jul 16 00:54:46.353916 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jul 16 00:54:46.353980 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354039 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354097 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354154 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354212 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354269 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354327 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354384 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354444 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354501 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354559 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354616 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354675 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 16 00:54:46.354735 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jul 16 00:54:46.354795 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jul 16 00:54:46.354859 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jul 16 00:54:46.354922 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff]: assigned Jul 16 00:54:46.354987 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jul 16 00:54:46.355048 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: failed to assign Jul 16 00:54:46.355107 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 16 00:54:46.355165 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 16 00:54:46.355225 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 16 00:54:46.355283 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 16 00:54:46.355342 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 16 00:54:46.355402 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jul 16 00:54:46.355459 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 16 00:54:46.355517 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 16 00:54:46.355574 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 16 00:54:46.355632 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 16 00:54:46.355690 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 16 00:54:46.355750 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 16 00:54:46.355802 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 16 00:54:46.355853 kernel: pci_bus 0004:00: resource 
4 [mem 0x20000000-0x2fffffff window] Jul 16 00:54:46.355904 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 16 00:54:46.355978 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 16 00:54:46.356034 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 16 00:54:46.356091 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 16 00:54:46.356155 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 16 00:54:46.356210 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 16 00:54:46.356270 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jul 16 00:54:46.356323 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 16 00:54:46.356333 kernel: ACPI: CPU18 has been hot-added Jul 16 00:54:46.356341 kernel: ACPI: CPU58 has been hot-added Jul 16 00:54:46.356348 kernel: ACPI: CPU38 has been hot-added Jul 16 00:54:46.356358 kernel: ACPI: CPU78 has been hot-added Jul 16 00:54:46.356367 kernel: ACPI: CPU16 has been hot-added Jul 16 00:54:46.356374 kernel: ACPI: CPU56 has been hot-added Jul 16 00:54:46.356382 kernel: ACPI: CPU36 has been hot-added Jul 16 00:54:46.356389 kernel: ACPI: CPU76 has been hot-added Jul 16 00:54:46.356397 kernel: ACPI: CPU17 has been hot-added Jul 16 00:54:46.356404 kernel: ACPI: CPU57 has been hot-added Jul 16 00:54:46.356412 kernel: ACPI: CPU37 has been hot-added Jul 16 00:54:46.356419 kernel: ACPI: CPU77 has been hot-added Jul 16 00:54:46.356427 kernel: ACPI: CPU19 has been hot-added Jul 16 00:54:46.356436 kernel: ACPI: CPU59 has been hot-added Jul 16 00:54:46.356443 kernel: ACPI: CPU39 has been hot-added Jul 16 00:54:46.356451 kernel: ACPI: CPU79 has been hot-added Jul 16 00:54:46.356458 kernel: ACPI: CPU12 has been hot-added Jul 16 00:54:46.356465 kernel: ACPI: CPU52 has been hot-added Jul 16 00:54:46.356473 kernel: ACPI: CPU32 has been hot-added Jul 16 00:54:46.356480 kernel: ACPI: CPU72 has been hot-added Jul 16 00:54:46.356488 kernel: ACPI: CPU8 has been hot-added Jul 16 00:54:46.356495 kernel: ACPI: CPU48 has been hot-added Jul 16 00:54:46.356504 kernel: ACPI: CPU28 has been hot-added Jul 16 00:54:46.356512 kernel: ACPI: CPU68 has been hot-added Jul 16 00:54:46.356519 kernel: ACPI: CPU10 has been hot-added Jul 16 00:54:46.356527 kernel: ACPI: CPU50 has been hot-added Jul 16 00:54:46.356534 kernel: ACPI: CPU30 has been hot-added Jul 16 00:54:46.356542 kernel: ACPI: CPU70 has been hot-added Jul 16 00:54:46.356549 kernel: ACPI: CPU14 has been hot-added Jul 16 00:54:46.356556 kernel: ACPI: CPU54 has been hot-added Jul 16 00:54:46.356564 kernel: ACPI: CPU34 has been hot-added Jul 16 00:54:46.356571 kernel: ACPI: CPU74 has been hot-added Jul 16 00:54:46.356580 kernel: ACPI: CPU4 has been hot-added Jul 16 00:54:46.356587 kernel: ACPI: CPU44 has been hot-added Jul 16 00:54:46.356595 kernel: ACPI: CPU24 has been hot-added Jul 16 00:54:46.356602 kernel: ACPI: CPU64 has been hot-added Jul 16 00:54:46.356610 kernel: ACPI: CPU0 has been hot-added Jul 16 00:54:46.356617 kernel: ACPI: CPU40 has been hot-added Jul 16 00:54:46.356625 kernel: ACPI: CPU20 has been hot-added Jul 16 00:54:46.356632 kernel: ACPI: CPU60 has been hot-added Jul 16 00:54:46.356640 kernel: ACPI: CPU2 has been hot-added Jul 16 00:54:46.356650 kernel: ACPI: CPU42 has been hot-added Jul 16 00:54:46.356658 kernel: ACPI: CPU22 has been hot-added Jul 16 00:54:46.356665 kernel: ACPI: CPU62 has been hot-added Jul 16 00:54:46.356673 
kernel: ACPI: CPU6 has been hot-added Jul 16 00:54:46.356680 kernel: ACPI: CPU46 has been hot-added Jul 16 00:54:46.356688 kernel: ACPI: CPU26 has been hot-added Jul 16 00:54:46.356695 kernel: ACPI: CPU66 has been hot-added Jul 16 00:54:46.356703 kernel: ACPI: CPU5 has been hot-added Jul 16 00:54:46.356710 kernel: ACPI: CPU45 has been hot-added Jul 16 00:54:46.356719 kernel: ACPI: CPU25 has been hot-added Jul 16 00:54:46.356726 kernel: ACPI: CPU65 has been hot-added Jul 16 00:54:46.356734 kernel: ACPI: CPU1 has been hot-added Jul 16 00:54:46.356741 kernel: ACPI: CPU41 has been hot-added Jul 16 00:54:46.356749 kernel: ACPI: CPU21 has been hot-added Jul 16 00:54:46.356756 kernel: ACPI: CPU61 has been hot-added Jul 16 00:54:46.356764 kernel: ACPI: CPU3 has been hot-added Jul 16 00:54:46.356771 kernel: ACPI: CPU43 has been hot-added Jul 16 00:54:46.356779 kernel: ACPI: CPU23 has been hot-added Jul 16 00:54:46.356787 kernel: ACPI: CPU63 has been hot-added Jul 16 00:54:46.356795 kernel: ACPI: CPU7 has been hot-added Jul 16 00:54:46.356803 kernel: ACPI: CPU47 has been hot-added Jul 16 00:54:46.356810 kernel: ACPI: CPU27 has been hot-added Jul 16 00:54:46.356818 kernel: ACPI: CPU67 has been hot-added Jul 16 00:54:46.356825 kernel: ACPI: CPU13 has been hot-added Jul 16 00:54:46.356833 kernel: ACPI: CPU53 has been hot-added Jul 16 00:54:46.356840 kernel: ACPI: CPU33 has been hot-added Jul 16 00:54:46.356848 kernel: ACPI: CPU73 has been hot-added Jul 16 00:54:46.356855 kernel: ACPI: CPU9 has been hot-added Jul 16 00:54:46.356864 kernel: ACPI: CPU49 has been hot-added Jul 16 00:54:46.356872 kernel: ACPI: CPU29 has been hot-added Jul 16 00:54:46.356879 kernel: ACPI: CPU69 has been hot-added Jul 16 00:54:46.356887 kernel: ACPI: CPU11 has been hot-added Jul 16 00:54:46.356894 kernel: ACPI: CPU51 has been hot-added Jul 16 00:54:46.356901 kernel: ACPI: CPU31 has been hot-added Jul 16 00:54:46.356909 kernel: ACPI: CPU71 has been hot-added Jul 16 00:54:46.356916 kernel: ACPI: CPU15 has been hot-added Jul 16 00:54:46.356924 kernel: ACPI: CPU55 has been hot-added Jul 16 00:54:46.356931 kernel: ACPI: CPU35 has been hot-added Jul 16 00:54:46.356940 kernel: ACPI: CPU75 has been hot-added Jul 16 00:54:46.356952 kernel: iommu: Default domain type: Translated Jul 16 00:54:46.356960 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 16 00:54:46.356967 kernel: efivars: Registered efivars operations Jul 16 00:54:46.357034 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 16 00:54:46.357096 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 16 00:54:46.357158 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 16 00:54:46.357168 kernel: vgaarb: loaded Jul 16 00:54:46.357176 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 16 00:54:46.357185 kernel: VFS: Disk quotas dquot_6.6.0 Jul 16 00:54:46.357193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 16 00:54:46.357201 kernel: pnp: PnP ACPI init Jul 16 00:54:46.357264 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jul 16 00:54:46.357318 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 16 00:54:46.357371 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 16 00:54:46.357422 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 16 00:54:46.357477 kernel: system 00:00: [mem 
0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 16 00:54:46.357529 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 16 00:54:46.357582 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 16 00:54:46.357634 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 16 00:54:46.357644 kernel: pnp: PnP ACPI: found 1 devices Jul 16 00:54:46.357651 kernel: NET: Registered PF_INET protocol family Jul 16 00:54:46.357659 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:54:46.357668 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 16 00:54:46.357678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 16 00:54:46.357685 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 16 00:54:46.357693 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:54:46.357701 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 16 00:54:46.357708 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:54:46.357716 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:54:46.357724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 16 00:54:46.357785 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 16 00:54:46.357795 kernel: kvm [1]: nv: 554 coarse grained trap handlers Jul 16 00:54:46.357804 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 16 00:54:46.357812 kernel: kvm [1]: GICv3: no GICV resource entry Jul 16 00:54:46.357819 kernel: kvm [1]: disabling GICv2 emulation Jul 16 00:54:46.357827 kernel: kvm [1]: GIC system register CPU interface enabled Jul 16 00:54:46.357835 kernel: kvm [1]: vgic interrupt IRQ9 Jul 16 00:54:46.357843 kernel: kvm [1]: VHE mode initialized successfully Jul 16 00:54:46.357850 kernel: Initialise system trusted keyrings Jul 16 00:54:46.357859 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 16 00:54:46.357866 kernel: Key type asymmetric registered Jul 16 00:54:46.357875 kernel: Asymmetric key parser 'x509' registered Jul 16 00:54:46.357882 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 16 00:54:46.357890 kernel: io scheduler mq-deadline registered Jul 16 00:54:46.357897 kernel: io scheduler kyber registered Jul 16 00:54:46.357905 kernel: io scheduler bfq registered Jul 16 00:54:46.357913 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 16 00:54:46.357920 kernel: ACPI: button: Power Button [PWRB] Jul 16 00:54:46.357928 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 16 00:54:46.357935 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 00:54:46.358013 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 16 00:54:46.358069 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.358124 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.358177 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.358230 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 32768 entries for evtq Jul 16 00:54:46.358284 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 65536 entries for priq Jul 16 00:54:46.358348 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 16 00:54:46.358401 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.358454 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.358464 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.358472 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:54:46.358523 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.358533 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.358540 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:54:46.358593 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 32768 entries for evtq Jul 16 00:54:46.358603 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.358610 kernel: cma: number of available pages: 128@3968=> 128 free of 4096 total pages Jul 16 00:54:46.358661 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 65536 entries for priq Jul 16 00:54:46.358737 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 16 00:54:46.358792 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.358845 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.358857 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.358864 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.358916 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.358926 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.358933 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.358991 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 32768 entries for evtq Jul 16 00:54:46.359001 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.359009 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359060 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 65536 entries for priq Jul 16 00:54:46.359072 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 128 pages, ret: -12 Jul 16 00:54:46.359080 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359139 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 16 00:54:46.359196 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.359249 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 
0x001c1eff) Jul 16 00:54:46.359259 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.359266 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359317 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.359327 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.359336 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359387 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 32768 entries for evtq Jul 16 00:54:46.359397 kernel: cma: __cma_alloc: reserved: alloc failed, req-size: 256 pages, ret: -12 Jul 16 00:54:46.359405 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359455 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 65536 entries for priq Jul 16 00:54:46.359465 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359523 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 16 00:54:46.359577 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.359632 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.359641 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359692 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.359702 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359753 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 32768 entries for evtq Jul 16 00:54:46.359762 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359813 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 65536 entries for priq Jul 16 00:54:46.359823 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.359885 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 16 00:54:46.359941 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.359999 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.360009 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360060 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.360069 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360121 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 32768 entries for evtq Jul 16 00:54:46.360130 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360181 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 65536 entries for priq Jul 16 00:54:46.360193 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360260 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 16 00:54:46.360314 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.360367 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.360377 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360428 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.360437 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360494 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: 
allocated 32768 entries for evtq Jul 16 00:54:46.360504 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360555 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 65536 entries for priq Jul 16 00:54:46.360565 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360622 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 16 00:54:46.360676 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 16 00:54:46.360730 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 16 00:54:46.360741 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360792 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for cmdq Jul 16 00:54:46.360802 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360853 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 32768 entries for evtq Jul 16 00:54:46.360863 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360915 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 65536 entries for priq Jul 16 00:54:46.360925 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.360933 kernel: thunder_xcv, ver 1.0 Jul 16 00:54:46.360941 kernel: thunder_bgx, ver 1.0 Jul 16 00:54:46.360952 kernel: nicpf, ver 1.0 Jul 16 00:54:46.360961 kernel: nicvf, ver 1.0 Jul 16 00:54:46.361022 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 16 00:54:46.361077 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-16T00:54:44 UTC (1752627284) Jul 16 00:54:46.361087 kernel: efifb: probing for efifb Jul 16 00:54:46.361094 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 16 00:54:46.361102 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 16 00:54:46.361111 kernel: efifb: scrolling: redraw Jul 16 00:54:46.361119 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 16 00:54:46.361128 kernel: Console: switching to colour frame buffer device 100x37 Jul 16 00:54:46.361136 kernel: fb0: EFI VGA frame buffer device Jul 16 00:54:46.361143 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 16 00:54:46.361151 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 16 00:54:46.361159 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 16 00:54:46.361166 kernel: watchdog: NMI not fully supported Jul 16 00:54:46.361174 kernel: NET: Registered PF_INET6 protocol family Jul 16 00:54:46.361181 kernel: watchdog: Hard watchdog permanently disabled Jul 16 00:54:46.361189 kernel: Segment Routing with IPv6 Jul 16 00:54:46.361198 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 00:54:46.361206 kernel: NET: Registered PF_PACKET protocol family Jul 16 00:54:46.361213 kernel: Key type dns_resolver registered Jul 16 00:54:46.361221 kernel: registered taskstats version 1 Jul 16 00:54:46.361229 kernel: Loading compiled-in X.509 certificates Jul 16 00:54:46.361236 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd' Jul 16 00:54:46.361244 kernel: Demotion targets for Node 0: null Jul 16 00:54:46.361252 kernel: Key type .fscrypt registered Jul 16 00:54:46.361259 kernel: Key type fscrypt-provisioning registered Jul 16 00:54:46.361268 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 16 00:54:46.361276 kernel: ima: Allocated hash algorithm: sha1 Jul 16 00:54:46.361283 kernel: ima: No architecture policies found Jul 16 00:54:46.361291 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 16 00:54:46.361299 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.361360 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 16 00:54:46.361419 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 16 00:54:46.361479 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 16 00:54:46.361537 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 16 00:54:46.361599 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 16 00:54:46.361657 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 16 00:54:46.361718 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 16 00:54:46.361776 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 16 00:54:46.361787 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.361845 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 16 00:54:46.361903 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 16 00:54:46.361967 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 16 00:54:46.362025 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 16 00:54:46.362088 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jul 16 00:54:46.362146 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 16 00:54:46.362206 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jul 16 00:54:46.362263 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 16 00:54:46.362273 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.362330 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 16 00:54:46.362389 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 16 00:54:46.362449 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 16 00:54:46.362507 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 16 00:54:46.362568 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 16 00:54:46.362626 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 16 00:54:46.362685 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 16 00:54:46.362743 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 16 00:54:46.362753 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.362810 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 16 00:54:46.362868 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 16 00:54:46.362927 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 16 00:54:46.362991 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 16 00:54:46.363051 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 16 00:54:46.363109 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 16 00:54:46.363118 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.363176 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 16 00:54:46.363234 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 16 00:54:46.363295 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 16 00:54:46.363353 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 16 00:54:46.363412 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jul 16 00:54:46.363472 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 16 00:54:46.363531 kernel: 
pcieport 000c:00:04.0: Adding to iommu group 18 Jul 16 00:54:46.363589 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 16 00:54:46.363598 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.363656 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 16 00:54:46.363715 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 16 00:54:46.363774 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 16 00:54:46.363832 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 16 00:54:46.363893 kernel: pcieport 0002:00:05.0: Adding to iommu group 21 Jul 16 00:54:46.363955 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 16 00:54:46.364015 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 16 00:54:46.364073 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 16 00:54:46.364083 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.364140 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 16 00:54:46.364198 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 16 00:54:46.364258 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 16 00:54:46.364317 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 16 00:54:46.364378 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 16 00:54:46.364436 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 16 00:54:46.364495 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 16 00:54:46.364552 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 16 00:54:46.364562 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.364619 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 16 00:54:46.364676 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 16 00:54:46.364735 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 16 00:54:46.364793 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 16 00:54:46.364853 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jul 16 00:54:46.364911 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 16 00:54:46.364921 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:46.364983 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 16 00:54:46.364994 kernel: clk: Disabling unused clocks Jul 16 00:54:46.365001 kernel: PM: genpd: Disabling unused power domains Jul 16 00:54:46.365009 kernel: Warning: unable to open an initial console. Jul 16 00:54:46.365017 kernel: Freeing unused kernel memory: 39488K Jul 16 00:54:46.365024 kernel: Run /init as init process Jul 16 00:54:46.365033 kernel: with arguments: Jul 16 00:54:46.365041 kernel: /init Jul 16 00:54:46.365049 kernel: with environment: Jul 16 00:54:46.365056 kernel: HOME=/ Jul 16 00:54:46.365063 kernel: TERM=linux Jul 16 00:54:46.365070 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 16 00:54:46.365079 systemd[1]: Successfully made /usr/ read-only. Jul 16 00:54:46.365090 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:54:46.365099 systemd[1]: Detected architecture arm64. Jul 16 00:54:46.365107 systemd[1]: Running in initrd. 
Jul 16 00:54:46.365115 systemd[1]: No hostname configured, using default hostname. Jul 16 00:54:46.365123 systemd[1]: Hostname set to . Jul 16 00:54:46.365131 systemd[1]: Initializing machine ID from random generator. Jul 16 00:54:46.365139 systemd[1]: Queued start job for default target initrd.target. Jul 16 00:54:46.365147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:54:46.365155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:54:46.365165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 16 00:54:46.365173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:54:46.365181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 16 00:54:46.365189 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 16 00:54:46.365198 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 16 00:54:46.365206 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 16 00:54:46.365216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:54:46.365225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:54:46.365233 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:54:46.365241 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:54:46.365249 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:54:46.365257 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:54:46.365265 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:54:46.365273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:54:46.365282 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 16 00:54:46.365291 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 16 00:54:46.365299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:54:46.365307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:54:46.365315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:54:46.365323 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:54:46.365331 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 16 00:54:46.365339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:54:46.365347 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 16 00:54:46.365357 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 16 00:54:46.365365 systemd[1]: Starting systemd-fsck-usr.service... Jul 16 00:54:46.365373 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:54:46.365381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:54:46.365389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:54:46.365418 systemd-journald[910]: Collecting audit messages is disabled. 
Jul 16 00:54:46.365440 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 16 00:54:46.365449 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 16 00:54:46.365456 kernel: Bridge firewalling registered Jul 16 00:54:46.365465 systemd-journald[910]: Journal started Jul 16 00:54:46.365485 systemd-journald[910]: Runtime Journal (/run/log/journal/09928925666642edb91a74802853b7f4) is 8M, max 4G, 3.9G free. Jul 16 00:54:46.301722 systemd-modules-load[912]: Inserted module 'overlay' Jul 16 00:54:46.388904 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:54:46.358882 systemd-modules-load[912]: Inserted module 'br_netfilter' Jul 16 00:54:46.394498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:54:46.406979 systemd[1]: Finished systemd-fsck-usr.service. Jul 16 00:54:46.416293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:54:46.426910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:54:46.441469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 16 00:54:46.449191 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:54:46.474434 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:54:46.481072 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:54:46.493629 systemd-tmpfiles[942]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 16 00:54:46.499198 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:54:46.514578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:54:46.532000 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:54:46.541982 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:54:46.561484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 16 00:54:46.593147 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:54:46.606429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:54:46.619322 dracut-cmdline[960]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578 Jul 16 00:54:46.627441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:54:46.631714 systemd-resolved[963]: Positive Trust Anchors: Jul 16 00:54:46.631723 systemd-resolved[963]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:54:46.631755 systemd-resolved[963]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:54:46.647152 systemd-resolved[963]: Defaulting to hostname 'linux'. Jul 16 00:54:46.664833 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:54:46.685203 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:54:46.789957 kernel: SCSI subsystem initialized Jul 16 00:54:46.804957 kernel: Loading iSCSI transport class v2.0-870. Jul 16 00:54:46.823956 kernel: iscsi: registered transport (tcp) Jul 16 00:54:46.851886 kernel: iscsi: registered transport (qla4xxx) Jul 16 00:54:46.851911 kernel: QLogic iSCSI HBA Driver Jul 16 00:54:46.871040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:54:46.901992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:54:46.918182 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:54:46.969807 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 16 00:54:46.981310 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 16 00:54:47.071957 kernel: raid6: neonx8 gen() 15848 MB/s Jul 16 00:54:47.097951 kernel: raid6: neonx4 gen() 15895 MB/s Jul 16 00:54:47.122954 kernel: raid6: neonx2 gen() 13277 MB/s Jul 16 00:54:47.147955 kernel: raid6: neonx1 gen() 10472 MB/s Jul 16 00:54:47.172954 kernel: raid6: int64x8 gen() 6927 MB/s Jul 16 00:54:47.197955 kernel: raid6: int64x4 gen() 7397 MB/s Jul 16 00:54:47.222951 kernel: raid6: int64x2 gen() 6130 MB/s Jul 16 00:54:47.251274 kernel: raid6: int64x1 gen() 5075 MB/s Jul 16 00:54:47.251295 kernel: raid6: using algorithm neonx4 gen() 15895 MB/s Jul 16 00:54:47.285718 kernel: raid6: .... xor() 12375 MB/s, rmw enabled Jul 16 00:54:47.285738 kernel: raid6: using neon recovery algorithm Jul 16 00:54:47.310411 kernel: xor: measuring software checksum speed Jul 16 00:54:47.310432 kernel: 8regs : 21630 MB/sec Jul 16 00:54:47.318740 kernel: 32regs : 21704 MB/sec Jul 16 00:54:47.326932 kernel: arm64_neon : 28254 MB/sec Jul 16 00:54:47.334961 kernel: xor: using function: arm64_neon (28254 MB/sec) Jul 16 00:54:47.400960 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 16 00:54:47.406534 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:54:47.413297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:54:47.452700 systemd-udevd[1179]: Using default interface naming scheme 'v255'. Jul 16 00:54:47.456663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:54:47.463033 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jul 16 00:54:47.503828 dracut-pre-trigger[1191]: rd.md=0: removing MD RAID activation Jul 16 00:54:47.527005 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:54:47.536530 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:54:47.828447 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:54:47.973848 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 16 00:54:47.973878 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 16 00:54:47.973903 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:47.973921 kernel: nvme 0005:03:00.0: Adding to iommu group 31 Jul 16 00:54:47.974107 kernel: ACPI: bus type USB registered Jul 16 00:54:47.974117 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:47.974126 kernel: nvme 0005:04:00.0: Adding to iommu group 32 Jul 16 00:54:47.974209 kernel: usbcore: registered new interface driver usbfs Jul 16 00:54:47.974219 kernel: usbcore: registered new interface driver hub Jul 16 00:54:47.974228 kernel: usbcore: registered new device driver usb Jul 16 00:54:47.974237 kernel: nvme nvme0: pci function 0005:03:00.0 Jul 16 00:54:47.974353 kernel: PTP clock support registered Jul 16 00:54:47.974364 kernel: nvme nvme1: pci function 0005:04:00.0 Jul 16 00:54:47.974444 kernel: nvme nvme1: D3 entry latency set to 8 seconds Jul 16 00:54:47.974515 kernel: nvme nvme0: D3 entry latency set to 8 seconds Jul 16 00:54:47.993032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 16 00:54:48.076419 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jul 16 00:54:48.076572 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jul 16 00:54:48.076682 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 16 00:54:48.076693 kernel: GPT:9289727 != 1875385007 Jul 16 00:54:48.076702 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 16 00:54:48.076713 kernel: GPT:9289727 != 1875385007 Jul 16 00:54:48.076722 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 16 00:54:48.076730 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:54:48.081602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:54:48.081757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:54:48.091700 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:54:48.264516 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 16 00:54:48.264533 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Jul 16 00:54:48.264543 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:48.264552 kernel: igb 0003:03:00.0: Adding to iommu group 33 Jul 16 00:54:48.264674 kernel: cma: number of available pages: => 0 free of 4096 total pages Jul 16 00:54:48.264684 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 34 Jul 16 00:54:48.264773 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 16 00:54:48.264846 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jul 16 00:54:48.264918 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jul 16 00:54:48.264999 kernel: igb 0003:03:00.0: added PHC on eth0 Jul 16 00:54:48.265071 kernel: cma: number of available pages: Jul 16 00:54:48.265081 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:54:48.265151 kernel: => 0 free of 4096 total pages Jul 16 00:54:48.265161 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Jul 16 00:54:48.265241 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:94 Jul 16 00:54:48.260080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:54:48.270368 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:54:48.291012 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 16 00:54:48.314023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:54:48.327176 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jul 16 00:54:48.331953 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 16 00:54:48.332126 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jul 16 00:54:48.371228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jul 16 00:54:48.391435 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jul 16 00:54:48.405834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 16 00:54:48.422246 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 16 00:54:48.531508 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010 Jul 16 00:54:48.531835 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 16 00:54:48.531915 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jul 16 00:54:48.532009 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jul 16 00:54:48.532084 kernel: hub 1-0:1.0: USB hub found Jul 16 00:54:48.532177 kernel: hub 1-0:1.0: 4 ports detected Jul 16 00:54:48.532250 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 16 00:54:48.532271 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe Jul 16 00:54:48.532351 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Jul 16 00:54:48.532423 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:54:48.433304 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 16 00:54:48.599530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jul 16 00:54:48.616077 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:54:48.726902 kernel: hub 2-0:1.0: USB hub found Jul 16 00:54:48.727085 kernel: igb 0003:03:00.1: added PHC on eth1 Jul 16 00:54:48.727176 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:54:48.727253 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0c:6f:95 Jul 16 00:54:48.727328 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jul 16 00:54:48.727398 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 16 00:54:48.727467 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jul 16 00:54:48.727535 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jul 16 00:54:48.727608 kernel: hub 2-0:1.0: 4 ports detected Jul 16 00:54:48.632894 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:54:48.732766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 16 00:54:48.761028 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:54:48.744535 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 16 00:54:48.771302 disk-uuid[1328]: Primary Header is updated. Jul 16 00:54:48.771302 disk-uuid[1328]: Secondary Entries is updated. Jul 16 00:54:48.771302 disk-uuid[1328]: Secondary Header is updated. Jul 16 00:54:48.800976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:54:48.951959 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jul 16 00:54:48.952019 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 16 00:54:48.988856 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jul 16 00:54:49.099386 kernel: hub 1-3:1.0: USB hub found Jul 16 00:54:49.099588 kernel: hub 1-3:1.0: 4 ports detected Jul 16 00:54:49.202956 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jul 16 00:54:49.235242 kernel: hub 2-3:1.0: USB hub found Jul 16 00:54:49.235416 kernel: hub 2-3:1.0: 4 ports detected Jul 16 00:54:49.297957 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:54:49.309958 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jul 16 00:54:49.325986 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe Jul 16 00:54:49.326143 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Jul 16 00:54:49.340454 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 16 00:54:49.681967 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 2, per vport: max uc(128) max mc(2048) Jul 16 00:54:49.699276 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged Jul 16 00:54:49.769559 disk-uuid[1330]: The operation has completed successfully. Jul 16 00:54:49.774463 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 16 00:54:50.031962 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:54:50.046966 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jul 16 00:54:50.047066 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jul 16 00:54:50.082866 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 16 00:54:50.082964 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 16 00:54:50.088789 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 16 00:54:50.108482 sh[1538]: Success Jul 16 00:54:50.146593 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 16 00:54:50.146626 kernel: device-mapper: uevent: version 1.0.3 Jul 16 00:54:50.155862 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 16 00:54:50.182957 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 16 00:54:50.214620 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 16 00:54:50.225727 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 16 00:54:50.243656 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 16 00:54:50.250950 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 16 00:54:50.250965 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (1552) Jul 16 00:54:50.252949 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 16 00:54:50.252966 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:54:50.252976 kernel: BTRFS info (device dm-0): using free-space-tree Jul 16 00:54:50.337028 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 16 00:54:50.342985 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:54:50.353161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 16 00:54:50.354319 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 16 00:54:50.382540 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 16 00:54:50.433736 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1578) Jul 16 00:54:50.433756 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:54:50.449574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:54:50.463950 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:54:50.497952 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:54:50.498324 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 16 00:54:50.509794 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:54:50.527903 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 16 00:54:50.551309 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:54:50.584917 systemd-networkd[1730]: lo: Link UP Jul 16 00:54:50.584922 systemd-networkd[1730]: lo: Gained carrier Jul 16 00:54:50.588463 systemd-networkd[1730]: Enumeration completed Jul 16 00:54:50.588614 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:54:50.589740 systemd-networkd[1730]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:54:50.596670 systemd[1]: Reached target network.target - Network. Jul 16 00:54:50.641254 systemd-networkd[1730]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 16 00:54:50.660396 ignition[1726]: Ignition 2.21.0 Jul 16 00:54:50.660403 ignition[1726]: Stage: fetch-offline Jul 16 00:54:50.660437 ignition[1726]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:54:50.668658 unknown[1726]: fetched base config from "system" Jul 16 00:54:50.660445 ignition[1726]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:54:50.668665 unknown[1726]: fetched user config from "system" Jul 16 00:54:50.660668 ignition[1726]: parsed url from cmdline: "" Jul 16 00:54:50.672713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:54:50.660671 ignition[1726]: no config URL provided Jul 16 00:54:50.683401 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 16 00:54:50.660675 ignition[1726]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 00:54:50.684673 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 16 00:54:50.660781 ignition[1726]: parsing config with SHA512: 8e9ab44818bfb4a09ed011b9eb2fd21bc76082be1393f46020bce7f4e79587752fdf8bdc463872f9be5c1ee295ba2b2fe95ff129f23140509b3bae5aeed0ec66 Jul 16 00:54:50.692644 systemd-networkd[1730]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:54:50.668990 ignition[1726]: fetch-offline: fetch-offline passed Jul 16 00:54:50.668995 ignition[1726]: POST message to Packet Timeline Jul 16 00:54:50.669001 ignition[1726]: POST Status error: resource requires networking Jul 16 00:54:50.669048 ignition[1726]: Ignition finished successfully Jul 16 00:54:50.736358 ignition[1782]: Ignition 2.21.0 Jul 16 00:54:50.736364 ignition[1782]: Stage: kargs Jul 16 00:54:50.736526 ignition[1782]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:54:50.736535 ignition[1782]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:54:50.737372 ignition[1782]: kargs: kargs passed Jul 16 00:54:50.737376 ignition[1782]: POST message to Packet Timeline Jul 16 00:54:50.737600 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:54:50.741605 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38146->[::1]:53: read: connection refused Jul 16 00:54:50.941729 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #2 Jul 16 00:54:50.942350 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:45982->[::1]:53: read: connection refused Jul 16 00:54:51.266956 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 16 00:54:51.269870 systemd-networkd[1730]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 16 00:54:51.342885 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #3 Jul 16 00:54:51.343372 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59083->[::1]:53: read: connection refused Jul 16 00:54:51.897962 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 16 00:54:51.900933 systemd-networkd[1730]: eno1: Link UP Jul 16 00:54:51.901082 systemd-networkd[1730]: eno2: Link UP Jul 16 00:54:51.901201 systemd-networkd[1730]: enP1p1s0f0np0: Link UP Jul 16 00:54:51.901339 systemd-networkd[1730]: enP1p1s0f0np0: Gained carrier Jul 16 00:54:51.911089 systemd-networkd[1730]: enP1p1s0f1np1: Link UP Jul 16 00:54:51.912339 systemd-networkd[1730]: enP1p1s0f1np1: Gained carrier Jul 16 00:54:51.953976 systemd-networkd[1730]: enP1p1s0f0np0: DHCPv4 address 147.28.150.207/31, gateway 147.28.150.206 acquired from 147.28.144.140 Jul 16 00:54:52.143708 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #4 Jul 16 00:54:52.144309 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58242->[::1]:53: read: connection refused Jul 16 00:54:52.914001 systemd-networkd[1730]: enP1p1s0f0np0: Gained IPv6LL Jul 16 00:54:53.745829 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #5 Jul 16 00:54:53.746472 ignition[1782]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53634->[::1]:53: read: connection refused Jul 16 00:54:53.809013 systemd-networkd[1730]: enP1p1s0f1np1: Gained IPv6LL Jul 16 00:54:56.949863 ignition[1782]: GET https://metadata.packet.net/metadata: attempt #6 Jul 16 00:54:57.854203 ignition[1782]: GET result: OK Jul 16 00:54:58.221193 ignition[1782]: Ignition finished successfully Jul 16 00:54:58.226035 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 16 00:54:58.229182 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 16 00:54:58.269300 ignition[1815]: Ignition 2.21.0 Jul 16 00:54:58.269308 ignition[1815]: Stage: disks Jul 16 00:54:58.269448 ignition[1815]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:54:58.269457 ignition[1815]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:54:58.270470 ignition[1815]: disks: disks passed Jul 16 00:54:58.270475 ignition[1815]: POST message to Packet Timeline Jul 16 00:54:58.270491 ignition[1815]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:54:58.826518 ignition[1815]: GET result: OK Jul 16 00:54:59.133717 ignition[1815]: Ignition finished successfully Jul 16 00:54:59.136444 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 16 00:54:59.142350 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 16 00:54:59.149974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 16 00:54:59.158075 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:54:59.166883 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:54:59.175875 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:54:59.186298 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 16 00:54:59.226531 systemd-fsck[1838]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 16 00:54:59.229610 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 16 00:54:59.237877 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 16 00:54:59.333952 kernel: EXT4-fs (nvme0n1p9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 16 00:54:59.334220 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 16 00:54:59.344465 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 16 00:54:59.355512 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:54:59.379443 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 16 00:54:59.387955 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:5) scanned by mount (1848) Jul 16 00:54:59.387977 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:54:59.387987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:54:59.387997 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:54:59.455526 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 16 00:54:59.481522 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 16 00:54:59.493000 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 16 00:54:59.493031 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:54:59.513468 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 16 00:54:59.521719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 16 00:54:59.535607 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 16 00:54:59.549685 coreos-metadata[1866]: Jul 16 00:54:59.528 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:54:59.560616 coreos-metadata[1865]: Jul 16 00:54:59.528 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:54:59.588276 initrd-setup-root[1894]: cut: /sysroot/etc/passwd: No such file or directory Jul 16 00:54:59.594653 initrd-setup-root[1901]: cut: /sysroot/etc/group: No such file or directory Jul 16 00:54:59.601042 initrd-setup-root[1909]: cut: /sysroot/etc/shadow: No such file or directory Jul 16 00:54:59.607358 initrd-setup-root[1917]: cut: /sysroot/etc/gshadow: No such file or directory Jul 16 00:54:59.677384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 16 00:54:59.689308 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 16 00:54:59.721468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 16 00:54:59.729951 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:54:59.754248 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 16 00:54:59.764263 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 16 00:54:59.778267 ignition[1992]: INFO : Ignition 2.21.0 Jul 16 00:54:59.784000 ignition[1992]: INFO : Stage: mount Jul 16 00:54:59.784000 ignition[1992]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:54:59.784000 ignition[1992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:54:59.784000 ignition[1992]: INFO : mount: mount passed Jul 16 00:54:59.784000 ignition[1992]: INFO : POST message to Packet Timeline Jul 16 00:54:59.784000 ignition[1992]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:55:00.054583 coreos-metadata[1865]: Jul 16 00:55:00.054 INFO Fetch successful Jul 16 00:55:00.104658 coreos-metadata[1865]: Jul 16 00:55:00.104 INFO wrote hostname ci-4372.0.1-n-4904b64135 to /sysroot/etc/hostname Jul 16 00:55:00.108016 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:55:00.287234 coreos-metadata[1866]: Jul 16 00:55:00.287 INFO Fetch successful Jul 16 00:55:00.310350 ignition[1992]: INFO : GET result: OK Jul 16 00:55:00.334110 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 16 00:55:00.335041 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 16 00:55:00.788141 ignition[1992]: INFO : Ignition finished successfully Jul 16 00:55:00.792089 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 16 00:55:00.799473 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 16 00:55:00.824197 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:55:00.865952 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:5) scanned by mount (2020) Jul 16 00:55:00.890528 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 16 00:55:00.890552 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 16 00:55:00.903764 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 16 00:55:00.912799 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 16 00:55:00.961830 ignition[2037]: INFO : Ignition 2.21.0 Jul 16 00:55:00.961830 ignition[2037]: INFO : Stage: files Jul 16 00:55:00.971804 ignition[2037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:55:00.971804 ignition[2037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:55:00.971804 ignition[2037]: DEBUG : files: compiled without relabeling support, skipping Jul 16 00:55:00.971804 ignition[2037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 16 00:55:00.971804 ignition[2037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 16 00:55:00.971804 ignition[2037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 16 00:55:00.971804 ignition[2037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 16 00:55:00.971804 ignition[2037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 16 00:55:00.971804 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 16 00:55:00.971804 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 16 00:55:00.966679 unknown[2037]: wrote ssh authorized keys file for user: core Jul 16 00:55:01.069621 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 16 00:55:01.269559 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 16 00:55:01.280269 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 16 00:55:01.526574 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 16 00:55:01.926522 ignition[2037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:55:01.938832 ignition[2037]: INFO : files: files passed Jul 16 00:55:01.938832 ignition[2037]: INFO : POST message to Packet Timeline Jul 16 00:55:01.938832 ignition[2037]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:55:02.486018 ignition[2037]: INFO : GET result: OK Jul 16 00:55:03.203609 ignition[2037]: INFO : Ignition finished successfully Jul 16 00:55:03.208025 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 16 00:55:03.211597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 16 00:55:03.239600 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 16 00:55:03.258098 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 16 00:55:03.259216 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 16 00:55:03.275910 initrd-setup-root-after-ignition[2082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:55:03.275910 initrd-setup-root-after-ignition[2082]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:55:03.270486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:55:03.326757 initrd-setup-root-after-ignition[2086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:55:03.283166 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 16 00:55:03.299628 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jul 16 00:55:03.346709 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 00:55:03.346922 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 16 00:55:03.362015 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 16 00:55:03.372270 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 16 00:55:03.388867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 16 00:55:03.389938 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 16 00:55:03.430394 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:55:03.442852 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 16 00:55:03.478193 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:55:03.489844 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:55:03.495585 systemd[1]: Stopped target timers.target - Timer Units. Jul 16 00:55:03.506889 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 00:55:03.507000 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:55:03.518362 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 16 00:55:03.529359 systemd[1]: Stopped target basic.target - Basic System. Jul 16 00:55:03.540587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 16 00:55:03.551699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:55:03.562627 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 16 00:55:03.573581 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:55:03.584554 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 16 00:55:03.595500 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:55:03.606514 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 16 00:55:03.617535 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 16 00:55:03.634089 systemd[1]: Stopped target swap.target - Swaps. Jul 16 00:55:03.645136 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 00:55:03.645237 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:55:03.656444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:55:03.667663 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:55:03.678718 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 16 00:55:03.681977 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:55:03.689694 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 00:55:03.689808 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 16 00:55:03.701011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 16 00:55:03.701125 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:55:03.712195 systemd[1]: Stopped target paths.target - Path Units. Jul 16 00:55:03.723232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 16 00:55:03.726991 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:55:03.740048 systemd[1]: Stopped target slices.target - Slice Units. Jul 16 00:55:03.751310 systemd[1]: Stopped target sockets.target - Socket Units. Jul 16 00:55:03.762759 systemd[1]: iscsid.socket: Deactivated successfully. Jul 16 00:55:03.762838 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:55:03.774214 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 00:55:03.774300 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:55:03.878131 ignition[2108]: INFO : Ignition 2.21.0 Jul 16 00:55:03.878131 ignition[2108]: INFO : Stage: umount Jul 16 00:55:03.878131 ignition[2108]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:55:03.878131 ignition[2108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:55:03.878131 ignition[2108]: INFO : umount: umount passed Jul 16 00:55:03.878131 ignition[2108]: INFO : POST message to Packet Timeline Jul 16 00:55:03.878131 ignition[2108]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:55:03.785722 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 16 00:55:03.785823 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:55:03.797150 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 00:55:03.797239 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 16 00:55:03.808636 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 16 00:55:03.808726 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:55:03.826474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 16 00:55:03.851541 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 16 00:55:03.860108 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 00:55:03.860218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:55:03.872393 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 16 00:55:03.872482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:55:03.886465 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 16 00:55:03.888407 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 00:55:03.888487 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 16 00:55:03.929132 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 00:55:03.929333 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 16 00:55:05.798314 ignition[2108]: INFO : GET result: OK Jul 16 00:55:06.272839 ignition[2108]: INFO : Ignition finished successfully Jul 16 00:55:06.275605 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 00:55:06.275848 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 16 00:55:06.278727 systemd[1]: Stopped target network.target - Network. Jul 16 00:55:06.287693 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 00:55:06.287765 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 16 00:55:06.297213 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 00:55:06.297260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jul 16 00:55:06.306684 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 00:55:06.306737 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 16 00:55:06.316193 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 16 00:55:06.316225 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 16 00:55:06.325883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 00:55:06.325934 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 16 00:55:06.335858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 16 00:55:06.345541 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 16 00:55:06.355530 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 16 00:55:06.355678 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 16 00:55:06.369181 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 16 00:55:06.370263 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 16 00:55:06.370333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:55:06.382287 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:55:06.382560 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 00:55:06.382678 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 16 00:55:06.391273 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 16 00:55:06.392124 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 16 00:55:06.400503 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 00:55:06.400626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:55:06.412449 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 16 00:55:06.420784 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 00:55:06.420842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:55:06.431195 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 00:55:06.431235 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:55:06.441845 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 16 00:55:06.441899 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 16 00:55:06.457525 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:55:06.469382 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 16 00:55:06.473238 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 00:55:06.474321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:55:06.486625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 00:55:06.486854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 16 00:55:06.502038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 00:55:06.502125 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:55:06.513056 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 16 00:55:06.513122 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:55:06.529668 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 00:55:06.529726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 16 00:55:06.540810 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 16 00:55:06.540848 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:55:06.553205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 16 00:55:06.563610 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 16 00:55:06.563664 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:55:06.575358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 16 00:55:06.575401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:55:06.587006 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 16 00:55:06.587051 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:55:06.598705 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 00:55:06.598744 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:55:06.615892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:55:06.615930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:55:06.630260 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 16 00:55:06.630342 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 16 00:55:06.630371 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 16 00:55:06.630398 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:55:06.630755 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 00:55:06.631448 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 16 00:55:07.149415 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 00:55:07.149553 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 16 00:55:07.160921 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 16 00:55:07.171914 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 16 00:55:07.204306 systemd[1]: Switching root. Jul 16 00:55:07.261528 systemd-journald[910]: Journal stopped Jul 16 00:55:09.450164 systemd-journald[910]: Received SIGTERM from PID 1 (systemd). 
Jul 16 00:55:09.450192 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 00:55:09.450203 kernel: SELinux: policy capability open_perms=1 Jul 16 00:55:09.450211 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 00:55:09.450219 kernel: SELinux: policy capability always_check_network=0 Jul 16 00:55:09.450226 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 00:55:09.450234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 00:55:09.450244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 00:55:09.450251 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 00:55:09.450259 kernel: SELinux: policy capability userspace_initial_context=0 Jul 16 00:55:09.450266 kernel: audit: type=1403 audit(1752627307.460:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 16 00:55:09.450276 systemd[1]: Successfully loaded SELinux policy in 133.383ms. Jul 16 00:55:09.450285 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.787ms. Jul 16 00:55:09.450297 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:55:09.450307 systemd[1]: Detected architecture arm64. Jul 16 00:55:09.450316 systemd[1]: Detected first boot. Jul 16 00:55:09.450324 systemd[1]: Hostname set to <ci-4372.0.1-n-4904b64135>. Jul 16 00:55:09.450333 systemd[1]: Initializing machine ID from random generator. Jul 16 00:55:09.450341 zram_generator::config[2178]: No configuration found. Jul 16 00:55:09.450352 systemd[1]: Populated /etc with preset unit settings. Jul 16 00:55:09.450361 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 16 00:55:09.450370 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 16 00:55:09.450378 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 16 00:55:09.450387 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 16 00:55:09.450396 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 16 00:55:09.450405 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 16 00:55:09.450415 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 16 00:55:09.450424 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 16 00:55:09.450433 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 16 00:55:09.450442 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 16 00:55:09.450451 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 16 00:55:09.450460 systemd[1]: Created slice user.slice - User and Session Slice. Jul 16 00:55:09.450469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:55:09.450477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:55:09.450488 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 16 00:55:09.450496 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jul 16 00:55:09.450505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 16 00:55:09.450514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:55:09.450523 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 16 00:55:09.450532 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:55:09.450543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:55:09.450552 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 16 00:55:09.450562 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 16 00:55:09.450572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 16 00:55:09.450581 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 16 00:55:09.450589 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:55:09.450598 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:55:09.450607 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:55:09.450616 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:55:09.450626 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 16 00:55:09.450635 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 16 00:55:09.450644 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 16 00:55:09.450654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:55:09.450663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:55:09.450674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:55:09.450684 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 16 00:55:09.450694 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 16 00:55:09.450703 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 16 00:55:09.450712 systemd[1]: Mounting media.mount - External Media Directory... Jul 16 00:55:09.450721 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 16 00:55:09.450730 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 16 00:55:09.450739 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 16 00:55:09.450750 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 00:55:09.450759 systemd[1]: Reached target machines.target - Containers. Jul 16 00:55:09.450768 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 16 00:55:09.450778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:55:09.450787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:55:09.450796 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 16 00:55:09.450805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:55:09.450814 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 16 00:55:09.450823 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:55:09.450833 kernel: ACPI: bus type drm_connector registered Jul 16 00:55:09.450841 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 16 00:55:09.450851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:55:09.450859 kernel: fuse: init (API version 7.41) Jul 16 00:55:09.450868 kernel: loop: module loaded Jul 16 00:55:09.450877 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 16 00:55:09.450886 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 16 00:55:09.450895 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 16 00:55:09.450905 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 16 00:55:09.450914 systemd[1]: Stopped systemd-fsck-usr.service. Jul 16 00:55:09.450924 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:55:09.450933 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:55:09.450942 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:55:09.450955 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:55:09.450964 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 16 00:55:09.450992 systemd-journald[2287]: Collecting audit messages is disabled. Jul 16 00:55:09.451013 systemd-journald[2287]: Journal started Jul 16 00:55:09.451032 systemd-journald[2287]: Runtime Journal (/run/log/journal/4640d6a0d5444191bcc3bb5d38e74b55) is 8M, max 4G, 3.9G free. Jul 16 00:55:08.008322 systemd[1]: Queued start job for default target multi-user.target. Jul 16 00:55:08.036601 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 16 00:55:08.036940 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 16 00:55:08.037251 systemd[1]: systemd-journald.service: Consumed 3.396s CPU time. Jul 16 00:55:09.473964 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 16 00:55:09.494960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:55:09.518059 systemd[1]: verity-setup.service: Deactivated successfully. Jul 16 00:55:09.518077 systemd[1]: Stopped verity-setup.service. Jul 16 00:55:09.543955 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:55:09.549028 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 16 00:55:09.554558 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 16 00:55:09.559975 systemd[1]: Mounted media.mount - External Media Directory. Jul 16 00:55:09.565395 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 16 00:55:09.570839 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 16 00:55:09.576198 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 16 00:55:09.581856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Jul 16 00:55:09.588975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:55:09.594494 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 16 00:55:09.594665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 16 00:55:09.600153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:55:09.600321 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:55:09.605592 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:55:09.605759 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 16 00:55:09.611147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:55:09.611324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:55:09.616667 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 16 00:55:09.616836 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 16 00:55:09.621967 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 00:55:09.623026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:55:09.628231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:55:09.633275 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:55:09.639694 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 16 00:55:09.644780 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 16 00:55:09.660526 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:55:09.666681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 16 00:55:09.688677 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 16 00:55:09.693589 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 16 00:55:09.693624 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:55:09.699257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 16 00:55:09.705043 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 16 00:55:09.710036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 16 00:55:09.711385 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 16 00:55:09.717139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 16 00:55:09.722015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:55:09.723127 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 16 00:55:09.727784 systemd-journald[2287]: Time spent on flushing to /var/log/journal/4640d6a0d5444191bcc3bb5d38e74b55 is 25.785ms for 2531 entries. Jul 16 00:55:09.727784 systemd-journald[2287]: System Journal (/var/log/journal/4640d6a0d5444191bcc3bb5d38e74b55) is 8M, max 195.6M, 187.6M free. Jul 16 00:55:09.760747 systemd-journald[2287]: Received client request to flush runtime journal. 
Jul 16 00:55:09.740115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:55:09.761403 kernel: loop0: detected capacity change from 0 to 107312 Jul 16 00:55:09.741233 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:55:09.746889 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 16 00:55:09.752612 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:55:09.758744 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 16 00:55:09.774439 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 16 00:55:09.777693 systemd-tmpfiles[2325]: ACLs are not supported, ignoring. Jul 16 00:55:09.777704 systemd-tmpfiles[2325]: ACLs are not supported, ignoring. Jul 16 00:55:09.785197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 00:55:09.790658 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 16 00:55:09.795386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:55:09.801969 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 16 00:55:09.806647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:55:09.811296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:55:09.819754 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 16 00:55:09.825710 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 16 00:55:09.844744 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 16 00:55:09.848953 kernel: loop1: detected capacity change from 0 to 138376 Jul 16 00:55:09.853753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 00:55:09.855109 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 16 00:55:09.870846 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 16 00:55:09.876989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:55:09.908952 kernel: loop2: detected capacity change from 0 to 203944 Jul 16 00:55:09.916185 systemd-tmpfiles[2350]: ACLs are not supported, ignoring. Jul 16 00:55:09.916198 systemd-tmpfiles[2350]: ACLs are not supported, ignoring. Jul 16 00:55:09.920050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:55:09.953956 kernel: loop3: detected capacity change from 0 to 8 Jul 16 00:55:09.971585 ldconfig[2318]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 16 00:55:09.973311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 16 00:55:10.009960 kernel: loop4: detected capacity change from 0 to 107312 Jul 16 00:55:10.026960 kernel: loop5: detected capacity change from 0 to 138376 Jul 16 00:55:10.044960 kernel: loop6: detected capacity change from 0 to 203944 Jul 16 00:55:10.062958 kernel: loop7: detected capacity change from 0 to 8 Jul 16 00:55:10.063534 (sd-merge)[2359]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 16 00:55:10.063952 (sd-merge)[2359]: Merged extensions into '/usr'. 
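The (sd-merge) lines above are systemd-sysext overlaying the Flatcar extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-packet) onto /usr before the corresponding services start. On a live host the merge can be inspected with the systemd-sysext tool; a minimal sketch, assuming a systemd new enough to provide these verbs:

    # show which extension images are currently merged into /usr
    systemd-sysext status
    # list the extension images found in the search paths
    systemd-sysext list
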
Jul 16 00:55:10.067303 systemd[1]: Reload requested from client PID 2323 ('systemd-sysext') (unit systemd-sysext.service)... Jul 16 00:55:10.067314 systemd[1]: Reloading... Jul 16 00:55:10.116952 zram_generator::config[2388]: No configuration found. Jul 16 00:55:10.195778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:55:10.269539 systemd[1]: Reloading finished in 201 ms. Jul 16 00:55:10.300256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 16 00:55:10.305647 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 16 00:55:10.330161 systemd[1]: Starting ensure-sysext.service... Jul 16 00:55:10.336182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:55:10.342921 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:55:10.354107 systemd[1]: Reload requested from client PID 2442 ('systemctl') (unit ensure-sysext.service)... Jul 16 00:55:10.354120 systemd[1]: Reloading... Jul 16 00:55:10.355602 systemd-tmpfiles[2443]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 16 00:55:10.355627 systemd-tmpfiles[2443]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 16 00:55:10.355821 systemd-tmpfiles[2443]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 16 00:55:10.356010 systemd-tmpfiles[2443]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 16 00:55:10.356602 systemd-tmpfiles[2443]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 16 00:55:10.356793 systemd-tmpfiles[2443]: ACLs are not supported, ignoring. Jul 16 00:55:10.356836 systemd-tmpfiles[2443]: ACLs are not supported, ignoring. Jul 16 00:55:10.359964 systemd-tmpfiles[2443]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:55:10.359971 systemd-tmpfiles[2443]: Skipping /boot Jul 16 00:55:10.368690 systemd-tmpfiles[2443]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:55:10.368698 systemd-tmpfiles[2443]: Skipping /boot Jul 16 00:55:10.371739 systemd-udevd[2444]: Using default interface naming scheme 'v255'. Jul 16 00:55:10.395954 zram_generator::config[2472]: No configuration found. Jul 16 00:55:10.446959 kernel: IPMI message handler: version 39.2 Jul 16 00:55:10.457967 kernel: ipmi device interface Jul 16 00:55:10.458006 kernel: MACsec IEEE 802.1AE Jul 16 00:55:10.475895 kernel: ipmi_si: IPMI System Interface driver Jul 16 00:55:10.475936 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 16 00:55:10.476005 kernel: ipmi_si: Unable to find any System Interface(s) Jul 16 00:55:10.482128 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:55:10.573774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 16 00:55:10.578399 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 16 00:55:10.578522 systemd[1]: Reloading finished in 224 ms. 
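Both reload passes above also log that docker.socket still names the legacy /var/run/docker.sock path, which systemd rewrites to /run/docker.sock at load time. If that warning is worth silencing, the usual approach is a drop-in rather than editing the shipped unit; a minimal sketch, where the drop-in file name is an arbitrary choice and not taken from this log:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf  (hypothetical drop-in)
    [Socket]
    # list-type settings must be cleared before being reassigned in a drop-in
    ListenStream=
    ListenStream=/run/docker.sock

A `systemctl daemon-reload` would then be needed for the override to take effect.
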
Jul 16 00:55:10.605137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:55:10.623552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:55:10.646341 systemd[1]: Finished ensure-sysext.service. Jul 16 00:55:10.668597 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:55:10.697956 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 16 00:55:10.702922 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:55:10.703828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:55:10.709680 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:55:10.715492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:55:10.721171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:55:10.725937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 16 00:55:10.726801 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 16 00:55:10.731498 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:55:10.732647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 16 00:55:10.737509 augenrules[2724]: No rules Jul 16 00:55:10.739203 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:55:10.745704 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:55:10.751874 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 16 00:55:10.757371 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 16 00:55:10.762766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:55:10.767895 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:55:10.768116 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:55:10.774744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 16 00:55:10.779303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:55:10.780017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:55:10.784461 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:55:10.784635 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 16 00:55:10.788981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:55:10.789732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:55:10.794272 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 00:55:10.794459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:55:10.799550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 16 00:55:10.804267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 16 00:55:10.811150 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:55:10.822967 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 16 00:55:10.827941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:55:10.828033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:55:10.829435 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 16 00:55:10.847252 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 16 00:55:10.851718 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 16 00:55:10.854932 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 16 00:55:10.884148 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 16 00:55:10.951050 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 16 00:55:10.955732 systemd[1]: Reached target time-set.target - System Time Set. Jul 16 00:55:10.957792 systemd-resolved[2731]: Positive Trust Anchors: Jul 16 00:55:10.957804 systemd-resolved[2731]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:55:10.957842 systemd-resolved[2731]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:55:10.961835 systemd-resolved[2731]: Using system hostname 'ci-4372.0.1-n-4904b64135'. Jul 16 00:55:10.963392 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:55:10.965153 systemd-networkd[2730]: lo: Link UP Jul 16 00:55:10.965159 systemd-networkd[2730]: lo: Gained carrier Jul 16 00:55:10.967673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:55:10.968590 systemd-networkd[2730]: bond0: netdev ready Jul 16 00:55:10.972027 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:55:10.976325 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 16 00:55:10.977329 systemd-networkd[2730]: Enumeration completed Jul 16 00:55:10.980637 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 16 00:55:10.985054 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 16 00:55:10.989356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 16 00:55:10.990913 systemd-networkd[2730]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network. Jul 16 00:55:10.993579 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
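The systemd-networkd lines above show this host's bonding layout: each physical port is matched by MAC address from a 10-<mac>.network file and enslaved to bond0, which is itself configured from 05-bond0.network. Only the file names appear in the log; a minimal sketch of what such files typically contain, with the bond mode assumed from the later 802.3ad warning:

    # /etc/systemd/network/10-0c:42:a1:49:ed:dc.network  (contents assumed; only the name is in the log)
    [Match]
    MACAddress=0c:42:a1:49:ed:dc

    [Network]
    Bond=bond0

    # a companion .netdev file would define the bond itself, for example:
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
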
Jul 16 00:55:10.997814 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 16 00:55:10.997835 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:55:11.001998 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:55:11.006991 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 16 00:55:11.012651 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 16 00:55:11.018895 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 16 00:55:11.031021 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 16 00:55:11.035726 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 16 00:55:11.040476 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:55:11.044980 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 16 00:55:11.049422 systemd[1]: Reached target network.target - Network. Jul 16 00:55:11.053736 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:55:11.058028 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:55:11.062259 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:55:11.062281 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:55:11.063355 systemd[1]: Starting containerd.service - containerd container runtime... Jul 16 00:55:11.081285 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 16 00:55:11.086932 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 16 00:55:11.092377 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 16 00:55:11.097839 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 16 00:55:11.102926 coreos-metadata[2778]: Jul 16 00:55:11.102 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:55:11.103343 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 16 00:55:11.105730 coreos-metadata[2778]: Jul 16 00:55:11.105 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:55:11.107765 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 16 00:55:11.108020 jq[2784]: false Jul 16 00:55:11.108874 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 16 00:55:11.114405 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 16 00:55:11.119462 extend-filesystems[2786]: Found /dev/nvme0n1p6 Jul 16 00:55:11.124235 extend-filesystems[2786]: Found /dev/nvme0n1p9 Jul 16 00:55:11.120007 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 16 00:55:11.133289 extend-filesystems[2786]: Checking size of /dev/nvme0n1p9 Jul 16 00:55:11.129843 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 16 00:55:11.142291 extend-filesystems[2786]: Resized partition /dev/nvme0n1p9 Jul 16 00:55:11.164051 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jul 16 00:55:11.142167 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 16 00:55:11.164276 extend-filesystems[2808]: resize2fs 1.47.2 (1-Jan-2025) Jul 16 00:55:11.160510 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 16 00:55:11.169967 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 16 00:55:11.178660 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 16 00:55:11.179230 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 16 00:55:11.179813 systemd[1]: Starting update-engine.service - Update Engine... Jul 16 00:55:11.185869 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 16 00:55:11.192212 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 16 00:55:11.193544 jq[2819]: true Jul 16 00:55:11.197434 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 16 00:55:11.197622 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 16 00:55:11.197861 systemd[1]: motdgen.service: Deactivated successfully. Jul 16 00:55:11.198585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 16 00:55:11.204131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 16 00:55:11.204320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 16 00:55:11.204827 systemd-logind[2810]: Watching system buttons on /dev/input/event0 (Power Button) Jul 16 00:55:11.209274 systemd-logind[2810]: New seat seat0. Jul 16 00:55:11.211444 systemd[1]: Started systemd-logind.service - User Login Management. Jul 16 00:55:11.215433 update_engine[2818]: I20250716 00:55:11.215088 2818 main.cc:92] Flatcar Update Engine starting Jul 16 00:55:11.217482 (ntainerd)[2825]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 16 00:55:11.220019 jq[2824]: true Jul 16 00:55:11.228177 tar[2823]: linux-arm64/helm Jul 16 00:55:11.236844 dbus-daemon[2779]: [system] SELinux support is enabled Jul 16 00:55:11.237012 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 16 00:55:11.240642 update_engine[2818]: I20250716 00:55:11.240607 2818 update_check_scheduler.cc:74] Next update check in 3m37s Jul 16 00:55:11.246363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 16 00:55:11.246393 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 16 00:55:11.246657 dbus-daemon[2779]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 16 00:55:11.250837 bash[2850]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:55:11.251321 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 16 00:55:11.251340 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 16 00:55:11.256391 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jul 16 00:55:11.261874 systemd[1]: Started update-engine.service - Update Engine. Jul 16 00:55:11.268572 systemd[1]: Starting sshkeys.service... Jul 16 00:55:11.288345 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 16 00:55:11.297819 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 16 00:55:11.303710 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 16 00:55:11.316950 locksmithd[2854]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 16 00:55:11.335272 coreos-metadata[2865]: Jul 16 00:55:11.335 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:55:11.336359 coreos-metadata[2865]: Jul 16 00:55:11.336 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:55:11.372020 containerd[2825]: time="2025-07-16T00:55:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 16 00:55:11.373260 containerd[2825]: time="2025-07-16T00:55:11.373229040Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 16 00:55:11.380551 containerd[2825]: time="2025-07-16T00:55:11.380524360Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.96µs" Jul 16 00:55:11.380573 containerd[2825]: time="2025-07-16T00:55:11.380550640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 16 00:55:11.380605 containerd[2825]: time="2025-07-16T00:55:11.380573880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 16 00:55:11.380731 containerd[2825]: time="2025-07-16T00:55:11.380718040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 16 00:55:11.380752 containerd[2825]: time="2025-07-16T00:55:11.380735120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 16 00:55:11.380769 containerd[2825]: time="2025-07-16T00:55:11.380756240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 16 00:55:11.380814 containerd[2825]: time="2025-07-16T00:55:11.380801120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 16 00:55:11.380837 containerd[2825]: time="2025-07-16T00:55:11.380813720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381028 containerd[2825]: time="2025-07-16T00:55:11.381009360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381053 containerd[2825]: time="2025-07-16T00:55:11.381028120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381053 containerd[2825]: time="2025-07-16T00:55:11.381042720Z" level=info msg="skip loading plugin" error="devmapper not configured: skip 
plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381053 containerd[2825]: time="2025-07-16T00:55:11.381050360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381135 containerd[2825]: time="2025-07-16T00:55:11.381124480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381326 containerd[2825]: time="2025-07-16T00:55:11.381312360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381351 containerd[2825]: time="2025-07-16T00:55:11.381340600Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 16 00:55:11.381373 containerd[2825]: time="2025-07-16T00:55:11.381350600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 16 00:55:11.381390 containerd[2825]: time="2025-07-16T00:55:11.381378400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 16 00:55:11.382263 containerd[2825]: time="2025-07-16T00:55:11.382235560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 16 00:55:11.382354 containerd[2825]: time="2025-07-16T00:55:11.382341640Z" level=info msg="metadata content store policy set" policy=shared Jul 16 00:55:11.387922 containerd[2825]: time="2025-07-16T00:55:11.387905000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 16 00:55:11.387957 containerd[2825]: time="2025-07-16T00:55:11.387942160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 16 00:55:11.387975 containerd[2825]: time="2025-07-16T00:55:11.387961480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 16 00:55:11.387992 containerd[2825]: time="2025-07-16T00:55:11.387974760Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 16 00:55:11.387992 containerd[2825]: time="2025-07-16T00:55:11.387986760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.387996880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.388009480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.388020880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.388032280Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.388042280Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 16 00:55:11.388057 containerd[2825]: time="2025-07-16T00:55:11.388051200Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 16 00:55:11.388228 containerd[2825]: time="2025-07-16T00:55:11.388063000Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 16 00:55:11.388228 containerd[2825]: time="2025-07-16T00:55:11.388183200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 16 00:55:11.388228 containerd[2825]: time="2025-07-16T00:55:11.388208880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 16 00:55:11.388228 containerd[2825]: time="2025-07-16T00:55:11.388222400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388231960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388241960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388252120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388262800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388272720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 16 00:55:11.388293 containerd[2825]: time="2025-07-16T00:55:11.388284600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 16 00:55:11.388389 containerd[2825]: time="2025-07-16T00:55:11.388294640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 16 00:55:11.388389 containerd[2825]: time="2025-07-16T00:55:11.388304760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 16 00:55:11.388490 containerd[2825]: time="2025-07-16T00:55:11.388479640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 16 00:55:11.388511 containerd[2825]: time="2025-07-16T00:55:11.388496280Z" level=info msg="Start snapshots syncer" Jul 16 00:55:11.388528 containerd[2825]: time="2025-07-16T00:55:11.388518680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 16 00:55:11.388730 containerd[2825]: time="2025-07-16T00:55:11.388702400Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 16 00:55:11.388810 containerd[2825]: time="2025-07-16T00:55:11.388747400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 16 00:55:11.388833 containerd[2825]: time="2025-07-16T00:55:11.388813800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 16 00:55:11.388926 containerd[2825]: time="2025-07-16T00:55:11.388913400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 16 00:55:11.388962 containerd[2825]: time="2025-07-16T00:55:11.388937520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 16 00:55:11.388962 containerd[2825]: time="2025-07-16T00:55:11.388955760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 16 00:55:11.388996 containerd[2825]: time="2025-07-16T00:55:11.388966840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 16 00:55:11.388996 containerd[2825]: time="2025-07-16T00:55:11.388979880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 16 00:55:11.388996 containerd[2825]: time="2025-07-16T00:55:11.388990280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 16 00:55:11.389051 containerd[2825]: time="2025-07-16T00:55:11.389000200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 16 00:55:11.389051 containerd[2825]: time="2025-07-16T00:55:11.389023760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 16 00:55:11.389051 containerd[2825]: 
time="2025-07-16T00:55:11.389040880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 16 00:55:11.389099 containerd[2825]: time="2025-07-16T00:55:11.389052360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 16 00:55:11.389099 containerd[2825]: time="2025-07-16T00:55:11.389084240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:55:11.389131 containerd[2825]: time="2025-07-16T00:55:11.389098280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:55:11.389131 containerd[2825]: time="2025-07-16T00:55:11.389107720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:55:11.389131 containerd[2825]: time="2025-07-16T00:55:11.389117600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:55:11.389131 containerd[2825]: time="2025-07-16T00:55:11.389125360Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 16 00:55:11.389197 containerd[2825]: time="2025-07-16T00:55:11.389136600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 16 00:55:11.389197 containerd[2825]: time="2025-07-16T00:55:11.389151880Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 16 00:55:11.389242 containerd[2825]: time="2025-07-16T00:55:11.389233440Z" level=info msg="runtime interface created" Jul 16 00:55:11.389242 containerd[2825]: time="2025-07-16T00:55:11.389239760Z" level=info msg="created NRI interface" Jul 16 00:55:11.389278 containerd[2825]: time="2025-07-16T00:55:11.389249800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 16 00:55:11.389278 containerd[2825]: time="2025-07-16T00:55:11.389261200Z" level=info msg="Connect containerd service" Jul 16 00:55:11.389310 containerd[2825]: time="2025-07-16T00:55:11.389288960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 16 00:55:11.389916 containerd[2825]: time="2025-07-16T00:55:11.389898840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:55:11.468561 containerd[2825]: time="2025-07-16T00:55:11.468520880Z" level=info msg="Start subscribing containerd event" Jul 16 00:55:11.468626 containerd[2825]: time="2025-07-16T00:55:11.468574360Z" level=info msg="Start recovering state" Jul 16 00:55:11.468665 containerd[2825]: time="2025-07-16T00:55:11.468658800Z" level=info msg="Start event monitor" Jul 16 00:55:11.468687 containerd[2825]: time="2025-07-16T00:55:11.468671160Z" level=info msg="Start cni network conf syncer for default" Jul 16 00:55:11.468687 containerd[2825]: time="2025-07-16T00:55:11.468680400Z" level=info msg="Start streaming server" Jul 16 00:55:11.468720 containerd[2825]: time="2025-07-16T00:55:11.468687920Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 16 00:55:11.468720 containerd[2825]: 
time="2025-07-16T00:55:11.468694080Z" level=info msg="runtime interface starting up..." Jul 16 00:55:11.468720 containerd[2825]: time="2025-07-16T00:55:11.468699040Z" level=info msg="starting plugins..." Jul 16 00:55:11.468720 containerd[2825]: time="2025-07-16T00:55:11.468710720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 16 00:55:11.468815 containerd[2825]: time="2025-07-16T00:55:11.468793440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 16 00:55:11.468854 containerd[2825]: time="2025-07-16T00:55:11.468845920Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 16 00:55:11.468904 containerd[2825]: time="2025-07-16T00:55:11.468895240Z" level=info msg="containerd successfully booted in 0.098862s" Jul 16 00:55:11.468966 systemd[1]: Started containerd.service - containerd container runtime. Jul 16 00:55:11.528246 tar[2823]: linux-arm64/LICENSE Jul 16 00:55:11.528766 tar[2823]: linux-arm64/README.md Jul 16 00:55:11.549989 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 16 00:55:11.726955 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jul 16 00:55:11.745421 extend-filesystems[2808]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 16 00:55:11.745421 extend-filesystems[2808]: old_desc_blocks = 1, new_desc_blocks = 112 Jul 16 00:55:11.745421 extend-filesystems[2808]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jul 16 00:55:11.776066 extend-filesystems[2786]: Resized filesystem in /dev/nvme0n1p9 Jul 16 00:55:11.746320 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 16 00:55:11.746669 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 16 00:55:11.760928 systemd[1]: extend-filesystems.service: Consumed 225ms CPU time, 69.2M memory peak. Jul 16 00:55:11.853735 sshd_keygen[2812]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 00:55:11.872645 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 16 00:55:11.879874 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 16 00:55:11.903887 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 00:55:11.904113 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 16 00:55:11.911219 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 16 00:55:11.944722 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 16 00:55:11.951538 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 16 00:55:11.958044 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 16 00:55:11.963625 systemd[1]: Reached target getty.target - Login Prompts. Jul 16 00:55:12.105912 coreos-metadata[2778]: Jul 16 00:55:12.105 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 16 00:55:12.106442 coreos-metadata[2778]: Jul 16 00:55:12.106 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:55:12.281962 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 16 00:55:12.299954 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 16 00:55:12.300808 systemd-networkd[2730]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:ed:dd.network. 
Jul 16 00:55:12.336484 coreos-metadata[2865]: Jul 16 00:55:12.336 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 16 00:55:12.336923 coreos-metadata[2865]: Jul 16 00:55:12.336 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:55:12.907956 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up Jul 16 00:55:12.924956 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link Jul 16 00:55:12.925372 systemd-networkd[2730]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 16 00:55:12.926575 systemd-networkd[2730]: enP1p1s0f0np0: Link UP Jul 16 00:55:12.926816 systemd-networkd[2730]: enP1p1s0f0np0: Gained carrier Jul 16 00:55:12.927247 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 16 00:55:12.944844 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 16 00:55:12.946298 systemd-networkd[2730]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:ed:dc.network. Jul 16 00:55:12.946588 systemd-networkd[2730]: enP1p1s0f1np1: Link UP Jul 16 00:55:12.946790 systemd-networkd[2730]: enP1p1s0f1np1: Gained carrier Jul 16 00:55:12.959217 systemd-networkd[2730]: bond0: Link UP Jul 16 00:55:12.959481 systemd-networkd[2730]: bond0: Gained carrier Jul 16 00:55:12.959664 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:12.960296 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:12.960533 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:12.960676 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:13.052137 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 16 00:55:13.052189 kernel: bond0: active interface up! Jul 16 00:55:13.176958 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 16 00:55:14.106548 coreos-metadata[2778]: Jul 16 00:55:14.106 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 16 00:55:14.337023 coreos-metadata[2865]: Jul 16 00:55:14.336 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 16 00:55:14.800993 systemd-networkd[2730]: bond0: Gained IPv6LL Jul 16 00:55:14.801550 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:14.994309 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:14.994406 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:14.996906 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 16 00:55:15.002673 systemd[1]: Reached target network-online.target - Network is Online. Jul 16 00:55:15.009667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:15.026369 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 16 00:55:15.048507 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 16 00:55:15.706151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 16 00:55:15.712167 (kubelet)[2946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:55:16.138062 kubelet[2946]: E0716 00:55:16.137996 2946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:55:16.140642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:55:16.140772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:55:16.141131 systemd[1]: kubelet.service: Consumed 759ms CPU time, 264.8M memory peak. Jul 16 00:55:16.344606 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 16 00:55:16.350812 systemd[1]: Started sshd@0-147.28.150.207:22-139.178.89.65:33852.service - OpenSSH per-connection server daemon (139.178.89.65:33852). Jul 16 00:55:16.443302 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jul 16 00:55:16.443599 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jul 16 00:55:16.669117 coreos-metadata[2778]: Jul 16 00:55:16.669 INFO Fetch successful Jul 16 00:55:16.739392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 16 00:55:16.746282 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 16 00:55:16.771656 sshd[2971]: Accepted publickey for core from 139.178.89.65 port 33852 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:16.773542 sshd-session[2971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:16.783322 systemd-logind[2810]: New session 1 of user core. Jul 16 00:55:16.784701 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 16 00:55:16.790981 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 16 00:55:16.815266 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 16 00:55:16.822305 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 16 00:55:16.847424 (systemd)[2984]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 00:55:16.849351 systemd-logind[2810]: New session c1 of user core. Jul 16 00:55:16.982555 systemd[2984]: Queued start job for default target default.target. Jul 16 00:55:17.002210 systemd[2984]: Created slice app.slice - User Application Slice. Jul 16 00:55:17.002237 systemd[2984]: Reached target paths.target - Paths. Jul 16 00:55:17.002271 systemd[2984]: Reached target timers.target - Timers. Jul 16 00:55:17.003554 systemd[2984]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 16 00:55:17.012151 systemd[2984]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 16 00:55:17.012203 systemd[2984]: Reached target sockets.target - Sockets. Jul 16 00:55:17.012244 systemd[2984]: Reached target basic.target - Basic System. Jul 16 00:55:17.012272 systemd[2984]: Reached target default.target - Main User Target. Jul 16 00:55:17.012296 systemd[2984]: Startup finished in 158ms. Jul 16 00:55:17.012572 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 16 00:55:17.014099 systemd[1]: Started session-1.scope - Session 1 of User core. 
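The kubelet failure above reflects the usual first-boot state rather than a packaging problem: /var/lib/kubelet/config.yaml is normally written later by kubeadm (or whatever provisions the node), not shipped in the image, so the service fails until that file exists. For reference, the file holds a KubeletConfiguration object; a minimal sketch, not taken from this host:

    # /var/lib/kubelet/config.yaml is usually generated (e.g. by kubeadm init/join);
    # the values below are illustrative assumptions only
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
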
Jul 16 00:55:17.014465 login[2921]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:17.015403 login[2922]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:17.017536 systemd-logind[2810]: New session 2 of user core. Jul 16 00:55:17.018862 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 16 00:55:17.020564 systemd-logind[2810]: New session 3 of user core. Jul 16 00:55:17.021790 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 16 00:55:17.073513 coreos-metadata[2865]: Jul 16 00:55:17.073 INFO Fetch successful Jul 16 00:55:17.106982 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 16 00:55:17.121041 unknown[2865]: wrote ssh authorized keys file for user: core Jul 16 00:55:17.146874 update-ssh-keys[3022]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:55:17.148796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 16 00:55:17.150358 systemd[1]: Finished sshkeys.service. Jul 16 00:55:17.151226 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 16 00:55:17.156019 systemd[1]: Startup finished in 5.258s (kernel) + 21.922s (initrd) + 9.828s (userspace) = 37.010s. Jul 16 00:55:17.327696 systemd[1]: Started sshd@1-147.28.150.207:22-139.178.89.65:33856.service - OpenSSH per-connection server daemon (139.178.89.65:33856). Jul 16 00:55:17.729394 sshd[3030]: Accepted publickey for core from 139.178.89.65 port 33856 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:17.730658 sshd-session[3030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:17.733989 systemd-logind[2810]: New session 4 of user core. Jul 16 00:55:17.757051 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 16 00:55:18.022013 sshd[3032]: Connection closed by 139.178.89.65 port 33856 Jul 16 00:55:18.022308 sshd-session[3030]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:18.025116 systemd[1]: sshd@1-147.28.150.207:22-139.178.89.65:33856.service: Deactivated successfully. Jul 16 00:55:18.027414 systemd[1]: session-4.scope: Deactivated successfully. Jul 16 00:55:18.027963 systemd-logind[2810]: Session 4 logged out. Waiting for processes to exit. Jul 16 00:55:18.028751 systemd-logind[2810]: Removed session 4. Jul 16 00:55:18.102442 systemd[1]: Started sshd@2-147.28.150.207:22-139.178.89.65:33866.service - OpenSSH per-connection server daemon (139.178.89.65:33866). Jul 16 00:55:18.504756 sshd[3038]: Accepted publickey for core from 139.178.89.65 port 33866 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:18.505913 sshd-session[3038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:18.508959 systemd-logind[2810]: New session 5 of user core. Jul 16 00:55:18.525104 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 16 00:55:18.792751 sshd[3040]: Connection closed by 139.178.89.65 port 33866 Jul 16 00:55:18.792995 sshd-session[3038]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:18.796180 systemd[1]: sshd@2-147.28.150.207:22-139.178.89.65:33866.service: Deactivated successfully. Jul 16 00:55:18.797666 systemd[1]: session-5.scope: Deactivated successfully. Jul 16 00:55:18.798250 systemd-logind[2810]: Session 5 logged out. Waiting for processes to exit. Jul 16 00:55:18.799009 systemd-logind[2810]: Removed session 5. 
Jul 16 00:55:18.873430 systemd[1]: Started sshd@3-147.28.150.207:22-139.178.89.65:33872.service - OpenSSH per-connection server daemon (139.178.89.65:33872). Jul 16 00:55:19.279134 sshd[3046]: Accepted publickey for core from 139.178.89.65 port 33872 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:19.280271 sshd-session[3046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:19.283462 systemd-logind[2810]: New session 6 of user core. Jul 16 00:55:19.305060 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 16 00:55:19.569884 sshd[3048]: Connection closed by 139.178.89.65 port 33872 Jul 16 00:55:19.570179 sshd-session[3046]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:19.572754 systemd[1]: sshd@3-147.28.150.207:22-139.178.89.65:33872.service: Deactivated successfully. Jul 16 00:55:19.575152 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 00:55:19.575653 systemd-logind[2810]: Session 6 logged out. Waiting for processes to exit. Jul 16 00:55:19.576445 systemd-logind[2810]: Removed session 6. Jul 16 00:55:19.650280 systemd[1]: Started sshd@4-147.28.150.207:22-139.178.89.65:33878.service - OpenSSH per-connection server daemon (139.178.89.65:33878). Jul 16 00:55:19.842385 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:20.056275 sshd[3054]: Accepted publickey for core from 139.178.89.65 port 33878 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:20.057509 sshd-session[3054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:20.060550 systemd-logind[2810]: New session 7 of user core. Jul 16 00:55:20.084047 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 16 00:55:20.293535 sudo[3057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 16 00:55:20.293795 sudo[3057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:55:20.313488 sudo[3057]: pam_unix(sudo:session): session closed for user root Jul 16 00:55:20.376159 sshd[3056]: Connection closed by 139.178.89.65 port 33878 Jul 16 00:55:20.376571 sshd-session[3054]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:20.379460 systemd[1]: sshd@4-147.28.150.207:22-139.178.89.65:33878.service: Deactivated successfully. Jul 16 00:55:20.380847 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 00:55:20.381413 systemd-logind[2810]: Session 7 logged out. Waiting for processes to exit. Jul 16 00:55:20.382180 systemd-logind[2810]: Removed session 7. Jul 16 00:55:20.463478 systemd[1]: Started sshd@5-147.28.150.207:22-139.178.89.65:55318.service - OpenSSH per-connection server daemon (139.178.89.65:55318). Jul 16 00:55:20.872079 sshd[3063]: Accepted publickey for core from 139.178.89.65 port 55318 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:20.873369 sshd-session[3063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:20.876501 systemd-logind[2810]: New session 8 of user core. Jul 16 00:55:20.898049 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 16 00:55:21.103290 sudo[3068]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 16 00:55:21.103531 sudo[3068]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:55:21.106273 sudo[3068]: pam_unix(sudo:session): session closed for user root Jul 16 00:55:21.110452 sudo[3067]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 16 00:55:21.110688 sudo[3067]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:55:21.117781 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:55:21.163042 augenrules[3090]: No rules Jul 16 00:55:21.164068 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:55:21.165088 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:55:21.165824 sudo[3067]: pam_unix(sudo:session): session closed for user root Jul 16 00:55:21.228191 sshd[3066]: Connection closed by 139.178.89.65 port 55318 Jul 16 00:55:21.228484 sshd-session[3063]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:21.231316 systemd[1]: sshd@5-147.28.150.207:22-139.178.89.65:55318.service: Deactivated successfully. Jul 16 00:55:21.234215 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:55:21.234763 systemd-logind[2810]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:55:21.235599 systemd-logind[2810]: Removed session 8. Jul 16 00:55:21.306471 systemd[1]: Started sshd@6-147.28.150.207:22-139.178.89.65:55326.service - OpenSSH per-connection server daemon (139.178.89.65:55326). Jul 16 00:55:21.711631 sshd[3100]: Accepted publickey for core from 139.178.89.65 port 55326 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 00:55:21.712843 sshd-session[3100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:55:21.715929 systemd-logind[2810]: New session 9 of user core. Jul 16 00:55:21.738108 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 16 00:55:21.941305 sudo[3103]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 00:55:21.941548 sudo[3103]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:55:22.238889 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 16 00:55:22.254243 (dockerd)[3134]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 16 00:55:22.469117 dockerd[3134]: time="2025-07-16T00:55:22.469058280Z" level=info msg="Starting up" Jul 16 00:55:22.469780 dockerd[3134]: time="2025-07-16T00:55:22.469762520Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 16 00:55:22.496221 dockerd[3134]: time="2025-07-16T00:55:22.496139440Z" level=info msg="Loading containers: start." Jul 16 00:55:22.507960 kernel: Initializing XFRM netlink socket Jul 16 00:55:22.665105 systemd-timesyncd[2732]: Network configuration changed, trying to establish connection. Jul 16 00:55:22.707710 systemd-networkd[2730]: docker0: Link UP Jul 16 00:55:22.708510 dockerd[3134]: time="2025-07-16T00:55:22.708476560Z" level=info msg="Loading containers: done." 
Jul 16 00:55:22.717638 dockerd[3134]: time="2025-07-16T00:55:22.717612400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 00:55:22.717705 dockerd[3134]: time="2025-07-16T00:55:22.717672840Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 16 00:55:22.717780 dockerd[3134]: time="2025-07-16T00:55:22.717766440Z" level=info msg="Initializing buildkit" Jul 16 00:55:22.732530 dockerd[3134]: time="2025-07-16T00:55:22.732504200Z" level=info msg="Completed buildkit initialization" Jul 16 00:55:22.737094 dockerd[3134]: time="2025-07-16T00:55:22.737064720Z" level=info msg="Daemon has completed initialization" Jul 16 00:55:22.737151 dockerd[3134]: time="2025-07-16T00:55:22.737115880Z" level=info msg="API listen on /run/docker.sock" Jul 16 00:55:22.737283 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 16 00:55:23.274194 containerd[2825]: time="2025-07-16T00:55:23.274159720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 16 00:55:23.485676 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck474442703-merged.mount: Deactivated successfully. Jul 16 00:55:23.734673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18586142.mount: Deactivated successfully. Jul 16 00:55:23.437193 systemd-resolved[2731]: Clock change detected. Flushing caches. Jul 16 00:55:23.445172 systemd-journald[2287]: Time jumped backwards, rotating. Jul 16 00:55:23.437402 systemd-timesyncd[2732]: Contacted time server [2606:65c0:20:17:d747:b17f:e95c:6374]:123 (2.flatcar.pool.ntp.org). Jul 16 00:55:23.437451 systemd-timesyncd[2732]: Initial clock synchronization to Wed 2025-07-16 00:55:23.437132 UTC. 
Jul 16 00:55:23.746978 containerd[2825]: time="2025-07-16T00:55:23.746877627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651813" Jul 16 00:55:23.746978 containerd[2825]: time="2025-07-16T00:55:23.746898827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:23.747843 containerd[2825]: time="2025-07-16T00:55:23.747816947Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:23.750292 containerd[2825]: time="2025-07-16T00:55:23.750271507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:23.751251 containerd[2825]: time="2025-07-16T00:55:23.751217987Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.52139752s" Jul 16 00:55:23.751275 containerd[2825]: time="2025-07-16T00:55:23.751264267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Jul 16 00:55:23.752304 containerd[2825]: time="2025-07-16T00:55:23.752290267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 16 00:55:25.068642 containerd[2825]: time="2025-07-16T00:55:25.068574587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460283" Jul 16 00:55:25.068642 containerd[2825]: time="2025-07-16T00:55:25.068575147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:25.069582 containerd[2825]: time="2025-07-16T00:55:25.069549827Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:25.071805 containerd[2825]: time="2025-07-16T00:55:25.071785787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:25.072681 containerd[2825]: time="2025-07-16T00:55:25.072664747Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.32035288s" Jul 16 00:55:25.072699 containerd[2825]: time="2025-07-16T00:55:25.072688307Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Jul 16 
00:55:25.072997 containerd[2825]: time="2025-07-16T00:55:25.072983467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 16 00:55:25.346787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 16 00:55:25.348296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:25.487324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:25.490593 (kubelet)[3462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:55:25.533901 kubelet[3462]: E0716 00:55:25.533861 3462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:55:25.537165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:55:25.537284 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:55:25.537666 systemd[1]: kubelet.service: Consumed 154ms CPU time, 115.4M memory peak. Jul 16 00:55:26.105538 containerd[2825]: time="2025-07-16T00:55:26.105466587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:26.105845 containerd[2825]: time="2025-07-16T00:55:26.105552067Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125089" Jul 16 00:55:26.106326 containerd[2825]: time="2025-07-16T00:55:26.106307067Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:26.108703 containerd[2825]: time="2025-07-16T00:55:26.108677747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:26.109707 containerd[2825]: time="2025-07-16T00:55:26.109673267Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.03666244s" Jul 16 00:55:26.109733 containerd[2825]: time="2025-07-16T00:55:26.109716227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Jul 16 00:55:26.110064 containerd[2825]: time="2025-07-16T00:55:26.110039707Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 16 00:55:26.643078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94361227.mount: Deactivated successfully. 
Jul 16 00:55:27.113928 containerd[2825]: time="2025-07-16T00:55:27.113889867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.114216 containerd[2825]: time="2025-07-16T00:55:27.113900187Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915993" Jul 16 00:55:27.114579 containerd[2825]: time="2025-07-16T00:55:27.114554867Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.116030 containerd[2825]: time="2025-07-16T00:55:27.116009907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.116647 containerd[2825]: time="2025-07-16T00:55:27.116618827Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.00654668s" Jul 16 00:55:27.116667 containerd[2825]: time="2025-07-16T00:55:27.116654747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Jul 16 00:55:27.116991 containerd[2825]: time="2025-07-16T00:55:27.116970627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 00:55:27.455185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998526413.mount: Deactivated successfully. 
Jul 16 00:55:27.966038 containerd[2825]: time="2025-07-16T00:55:27.965996667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.966142 containerd[2825]: time="2025-07-16T00:55:27.966041707Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 16 00:55:27.967010 containerd[2825]: time="2025-07-16T00:55:27.966984507Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.969719 containerd[2825]: time="2025-07-16T00:55:27.969690987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:27.970601 containerd[2825]: time="2025-07-16T00:55:27.970573987Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 853.57556ms" Jul 16 00:55:27.970704 containerd[2825]: time="2025-07-16T00:55:27.970600147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 16 00:55:27.971006 containerd[2825]: time="2025-07-16T00:55:27.970979707Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 16 00:55:28.214634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705140837.mount: Deactivated successfully. 
Jul 16 00:55:28.215003 containerd[2825]: time="2025-07-16T00:55:28.214977907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:55:28.215171 containerd[2825]: time="2025-07-16T00:55:28.215022947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 16 00:55:28.215592 containerd[2825]: time="2025-07-16T00:55:28.215572627Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:55:28.217169 containerd[2825]: time="2025-07-16T00:55:28.217113307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:55:28.217789 containerd[2825]: time="2025-07-16T00:55:28.217769667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 246.75924ms" Jul 16 00:55:28.217839 containerd[2825]: time="2025-07-16T00:55:28.217792867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 16 00:55:28.218142 containerd[2825]: time="2025-07-16T00:55:28.218113747Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 16 00:55:28.485098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296612390.mount: Deactivated successfully. 
Jul 16 00:55:30.704927 containerd[2825]: time="2025-07-16T00:55:30.704882067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:30.705228 containerd[2825]: time="2025-07-16T00:55:30.704922547Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 16 00:55:30.705988 containerd[2825]: time="2025-07-16T00:55:30.705958067Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:30.708406 containerd[2825]: time="2025-07-16T00:55:30.708383867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:30.709485 containerd[2825]: time="2025-07-16T00:55:30.709459947Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.49130708s" Jul 16 00:55:30.709505 containerd[2825]: time="2025-07-16T00:55:30.709492867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 16 00:55:35.582608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 00:55:35.584266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:35.598053 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 16 00:55:35.598233 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 16 00:55:35.598631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:35.602275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:35.622884 systemd[1]: Reload requested from client PID 3691 ('systemctl') (unit session-9.scope)... Jul 16 00:55:35.622895 systemd[1]: Reloading... Jul 16 00:55:35.696578 zram_generator::config[3741]: No configuration found. Jul 16 00:55:35.773001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:55:35.876596 systemd[1]: Reloading finished in 253 ms. Jul 16 00:55:35.930887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:35.934256 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:35.934534 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 00:55:35.936597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:35.936635 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.2M memory peak. Jul 16 00:55:35.939677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:36.055415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 16 00:55:36.058853 (kubelet)[3807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:55:36.088130 kubelet[3807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:55:36.088130 kubelet[3807]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 00:55:36.088130 kubelet[3807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:55:36.088384 kubelet[3807]: I0716 00:55:36.088170 3807 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:55:36.764282 kubelet[3807]: I0716 00:55:36.764248 3807 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:55:36.764282 kubelet[3807]: I0716 00:55:36.764275 3807 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:55:36.764526 kubelet[3807]: I0716 00:55:36.764512 3807 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:55:36.786354 kubelet[3807]: E0716 00:55:36.786324 3807 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.150.207:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.150.207:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:55:36.787517 kubelet[3807]: I0716 00:55:36.787496 3807 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:55:36.793813 kubelet[3807]: I0716 00:55:36.793799 3807 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:55:36.815238 kubelet[3807]: I0716 00:55:36.815214 3807 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:55:36.815988 kubelet[3807]: I0716 00:55:36.815972 3807 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:55:36.816128 kubelet[3807]: I0716 00:55:36.816098 3807 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:55:36.816278 kubelet[3807]: I0716 00:55:36.816128 3807 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-4904b64135","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:55:36.816347 kubelet[3807]: I0716 00:55:36.816285 3807 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:55:36.816347 kubelet[3807]: I0716 00:55:36.816294 3807 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:55:36.816541 kubelet[3807]: I0716 00:55:36.816530 3807 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:55:36.818564 kubelet[3807]: I0716 00:55:36.818535 3807 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:55:36.818564 kubelet[3807]: I0716 00:55:36.818558 3807 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:55:36.818609 kubelet[3807]: I0716 00:55:36.818584 3807 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:55:36.818664 kubelet[3807]: I0716 00:55:36.818657 3807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:55:36.822379 kubelet[3807]: I0716 00:55:36.822365 3807 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:55:36.823099 kubelet[3807]: W0716 00:55:36.823057 3807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.150.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-4904b64135&limit=500&resourceVersion=0": dial tcp 147.28.150.207:6443: connect: connection refused Jul 16 00:55:36.823129 kubelet[3807]: E0716 00:55:36.823116 3807 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.150.207:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-4904b64135&limit=500&resourceVersion=0\": dial tcp 147.28.150.207:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:55:36.823149 kubelet[3807]: W0716 00:55:36.823064 3807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.150.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.150.207:6443: connect: connection refused Jul 16 00:55:36.823167 kubelet[3807]: E0716 00:55:36.823144 3807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.150.207:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.150.207:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:55:36.823231 kubelet[3807]: I0716 00:55:36.823217 3807 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:55:36.823394 kubelet[3807]: W0716 00:55:36.823386 3807 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 00:55:36.824212 kubelet[3807]: I0716 00:55:36.824201 3807 server.go:1274] "Started kubelet" Jul 16 00:55:36.824454 kubelet[3807]: I0716 00:55:36.824402 3807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:55:36.824521 kubelet[3807]: I0716 00:55:36.824425 3807 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:55:36.824683 kubelet[3807]: I0716 00:55:36.824671 3807 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:55:36.828219 kubelet[3807]: I0716 00:55:36.828198 3807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:55:36.828243 kubelet[3807]: I0716 00:55:36.828208 3807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:55:36.828328 kubelet[3807]: I0716 00:55:36.828313 3807 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:55:36.828355 kubelet[3807]: I0716 00:55:36.828335 3807 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:55:36.828453 kubelet[3807]: E0716 00:55:36.828428 3807 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-4904b64135\" not found" Jul 16 00:55:36.828474 kubelet[3807]: I0716 00:55:36.828454 3807 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:55:36.829713 kubelet[3807]: I0716 00:55:36.829691 3807 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:55:36.829765 kubelet[3807]: E0716 00:55:36.829731 3807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-4904b64135?timeout=10s\": dial tcp 147.28.150.207:6443: connect: connection refused" interval="200ms" Jul 16 00:55:36.829797 kubelet[3807]: W0716 00:55:36.829760 3807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://147.28.150.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.150.207:6443: connect: connection refused Jul 16 00:55:36.829820 kubelet[3807]: I0716 00:55:36.829800 3807 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:55:36.829841 kubelet[3807]: E0716 00:55:36.829812 3807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.150.207:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.150.207:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:55:36.829963 kubelet[3807]: E0716 00:55:36.829947 3807 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:55:36.830759 kubelet[3807]: E0716 00:55:36.829696 3807 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.150.207:6443/api/v1/namespaces/default/events\": dial tcp 147.28.150.207:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-4904b64135.18529546eb0b4e7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-4904b64135,UID:ci-4372.0.1-n-4904b64135,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-4904b64135,},FirstTimestamp:2025-07-16 00:55:36.824180347 +0000 UTC m=+0.762555721,LastTimestamp:2025-07-16 00:55:36.824180347 +0000 UTC m=+0.762555721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-4904b64135,}" Jul 16 00:55:36.830809 kubelet[3807]: I0716 00:55:36.830748 3807 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:55:36.831369 kubelet[3807]: I0716 00:55:36.831350 3807 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:55:36.842310 kubelet[3807]: I0716 00:55:36.842277 3807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:55:36.843212 kubelet[3807]: I0716 00:55:36.843199 3807 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:55:36.843232 kubelet[3807]: I0716 00:55:36.843217 3807 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:55:36.843258 kubelet[3807]: I0716 00:55:36.843233 3807 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:55:36.843289 kubelet[3807]: E0716 00:55:36.843274 3807 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:55:36.843676 kubelet[3807]: W0716 00:55:36.843634 3807 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.150.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.150.207:6443: connect: connection refused Jul 16 00:55:36.843709 kubelet[3807]: E0716 00:55:36.843691 3807 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.150.207:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.150.207:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:55:36.845028 kubelet[3807]: I0716 00:55:36.845014 3807 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:55:36.845050 kubelet[3807]: I0716 00:55:36.845028 3807 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:55:36.845050 kubelet[3807]: I0716 00:55:36.845046 3807 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:55:36.845686 kubelet[3807]: I0716 00:55:36.845672 3807 policy_none.go:49] "None policy: Start" Jul 16 00:55:36.846057 kubelet[3807]: I0716 00:55:36.846045 3807 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:55:36.846076 kubelet[3807]: I0716 00:55:36.846069 3807 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:55:36.849869 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 16 00:55:36.865927 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 16 00:55:36.868435 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 16 00:55:36.880489 kubelet[3807]: I0716 00:55:36.880464 3807 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:55:36.880688 kubelet[3807]: I0716 00:55:36.880677 3807 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:55:36.880716 kubelet[3807]: I0716 00:55:36.880690 3807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:55:36.880878 kubelet[3807]: I0716 00:55:36.880862 3807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:55:36.881532 kubelet[3807]: E0716 00:55:36.881515 3807 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-4904b64135\" not found" Jul 16 00:55:36.950927 systemd[1]: Created slice kubepods-burstable-poda5a42b4db91aae2d09e4dd86a8ee2d88.slice - libcontainer container kubepods-burstable-poda5a42b4db91aae2d09e4dd86a8ee2d88.slice. 
Jul 16 00:55:36.982130 kubelet[3807]: I0716 00:55:36.982102 3807 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:36.982554 kubelet[3807]: E0716 00:55:36.982529 3807 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.207:6443/api/v1/nodes\": dial tcp 147.28.150.207:6443: connect: connection refused" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:36.991417 systemd[1]: Created slice kubepods-burstable-pod3314ba6808caeb77769d6aa05eff4fa0.slice - libcontainer container kubepods-burstable-pod3314ba6808caeb77769d6aa05eff4fa0.slice. Jul 16 00:55:37.010894 systemd[1]: Created slice kubepods-burstable-poda19cbedb3586510f8c62be10258fa5ae.slice - libcontainer container kubepods-burstable-poda19cbedb3586510f8c62be10258fa5ae.slice. Jul 16 00:55:37.030580 kubelet[3807]: E0716 00:55:37.030510 3807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-4904b64135?timeout=10s\": dial tcp 147.28.150.207:6443: connect: connection refused" interval="400ms" Jul 16 00:55:37.129243 kubelet[3807]: I0716 00:55:37.129216 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129479 kubelet[3807]: I0716 00:55:37.129249 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129479 kubelet[3807]: I0716 00:55:37.129270 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3314ba6808caeb77769d6aa05eff4fa0-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-4904b64135\" (UID: \"3314ba6808caeb77769d6aa05eff4fa0\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129479 kubelet[3807]: I0716 00:55:37.129288 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129479 kubelet[3807]: I0716 00:55:37.129307 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129479 kubelet[3807]: I0716 00:55:37.129324 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129636 kubelet[3807]: I0716 00:55:37.129415 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129636 kubelet[3807]: I0716 00:55:37.129454 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.129636 kubelet[3807]: I0716 00:55:37.129485 3807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.184770 kubelet[3807]: I0716 00:55:37.184755 3807 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.185039 kubelet[3807]: E0716 00:55:37.185011 3807 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.28.150.207:6443/api/v1/nodes\": dial tcp 147.28.150.207:6443: connect: connection refused" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:37.290201 containerd[2825]: time="2025-07-16T00:55:37.290129267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-4904b64135,Uid:a5a42b4db91aae2d09e4dd86a8ee2d88,Namespace:kube-system,Attempt:0,}" Jul 16 00:55:37.293506 containerd[2825]: time="2025-07-16T00:55:37.293484627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-4904b64135,Uid:3314ba6808caeb77769d6aa05eff4fa0,Namespace:kube-system,Attempt:0,}" Jul 16 00:55:37.303014 containerd[2825]: time="2025-07-16T00:55:37.302987587Z" level=info msg="connecting to shim 94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a" address="unix:///run/containerd/s/01605ca5aeed96a85261cc391acfbca8143d632d72619477041f35459972142d" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:37.303054 containerd[2825]: time="2025-07-16T00:55:37.303026547Z" level=info msg="connecting to shim 2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4" address="unix:///run/containerd/s/2c21bce14b044ca62b1e4faa2c53a8d362a13af7399bf7ffb0529ec27ac3c1e2" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:37.313080 containerd[2825]: time="2025-07-16T00:55:37.313052067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-4904b64135,Uid:a19cbedb3586510f8c62be10258fa5ae,Namespace:kube-system,Attempt:0,}" Jul 16 00:55:37.320906 containerd[2825]: time="2025-07-16T00:55:37.320876787Z" level=info msg="connecting to shim 3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b" 
address="unix:///run/containerd/s/5fac69050643331fc55fd4243a88e2befd8ad267f3fd094008334bb1aaa15780" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:37.327753 systemd[1]: Started cri-containerd-2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4.scope - libcontainer container 2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4. Jul 16 00:55:37.329098 systemd[1]: Started cri-containerd-94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a.scope - libcontainer container 94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a. Jul 16 00:55:37.334746 systemd[1]: Started cri-containerd-3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b.scope - libcontainer container 3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b. Jul 16 00:55:37.353818 containerd[2825]: time="2025-07-16T00:55:37.353791627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-4904b64135,Uid:a5a42b4db91aae2d09e4dd86a8ee2d88,Namespace:kube-system,Attempt:0,} returns sandbox id \"2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4\"" Jul 16 00:55:37.354066 containerd[2825]: time="2025-07-16T00:55:37.354039827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-4904b64135,Uid:3314ba6808caeb77769d6aa05eff4fa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a\"" Jul 16 00:55:37.356031 containerd[2825]: time="2025-07-16T00:55:37.356009347Z" level=info msg="CreateContainer within sandbox \"94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 00:55:37.356094 containerd[2825]: time="2025-07-16T00:55:37.356075547Z" level=info msg="CreateContainer within sandbox \"2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 00:55:37.361130 containerd[2825]: time="2025-07-16T00:55:37.361097627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-4904b64135,Uid:a19cbedb3586510f8c62be10258fa5ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b\"" Jul 16 00:55:37.362211 containerd[2825]: time="2025-07-16T00:55:37.362188707Z" level=info msg="Container 2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:37.362606 containerd[2825]: time="2025-07-16T00:55:37.362583427Z" level=info msg="Container 24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:37.362853 containerd[2825]: time="2025-07-16T00:55:37.362835667Z" level=info msg="CreateContainer within sandbox \"3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 00:55:37.365452 containerd[2825]: time="2025-07-16T00:55:37.365425267Z" level=info msg="CreateContainer within sandbox \"94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52\"" Jul 16 00:55:37.365842 containerd[2825]: time="2025-07-16T00:55:37.365823107Z" level=info msg="StartContainer for 
\"2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52\"" Jul 16 00:55:37.366341 containerd[2825]: time="2025-07-16T00:55:37.366319467Z" level=info msg="CreateContainer within sandbox \"2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2\"" Jul 16 00:55:37.366591 containerd[2825]: time="2025-07-16T00:55:37.366557907Z" level=info msg="StartContainer for \"24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2\"" Jul 16 00:55:37.366790 containerd[2825]: time="2025-07-16T00:55:37.366770707Z" level=info msg="connecting to shim 2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52" address="unix:///run/containerd/s/01605ca5aeed96a85261cc391acfbca8143d632d72619477041f35459972142d" protocol=ttrpc version=3 Jul 16 00:55:37.367517 containerd[2825]: time="2025-07-16T00:55:37.367494867Z" level=info msg="connecting to shim 24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2" address="unix:///run/containerd/s/2c21bce14b044ca62b1e4faa2c53a8d362a13af7399bf7ffb0529ec27ac3c1e2" protocol=ttrpc version=3 Jul 16 00:55:37.368040 containerd[2825]: time="2025-07-16T00:55:37.368016387Z" level=info msg="Container 2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:37.371090 containerd[2825]: time="2025-07-16T00:55:37.371069507Z" level=info msg="CreateContainer within sandbox \"3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c\"" Jul 16 00:55:37.371390 containerd[2825]: time="2025-07-16T00:55:37.371370587Z" level=info msg="StartContainer for \"2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c\"" Jul 16 00:55:37.372359 containerd[2825]: time="2025-07-16T00:55:37.372337787Z" level=info msg="connecting to shim 2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c" address="unix:///run/containerd/s/5fac69050643331fc55fd4243a88e2befd8ad267f3fd094008334bb1aaa15780" protocol=ttrpc version=3 Jul 16 00:55:37.397745 systemd[1]: Started cri-containerd-24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2.scope - libcontainer container 24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2. Jul 16 00:55:37.398984 systemd[1]: Started cri-containerd-2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52.scope - libcontainer container 2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52. Jul 16 00:55:37.401300 systemd[1]: Started cri-containerd-2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c.scope - libcontainer container 2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c. 
Jul 16 00:55:37.430051 containerd[2825]: time="2025-07-16T00:55:37.430024987Z" level=info msg="StartContainer for \"2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52\" returns successfully" Jul 16 00:55:37.430134 containerd[2825]: time="2025-07-16T00:55:37.430112667Z" level=info msg="StartContainer for \"24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2\" returns successfully" Jul 16 00:55:37.430219 containerd[2825]: time="2025-07-16T00:55:37.430195587Z" level=info msg="StartContainer for \"2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c\" returns successfully" Jul 16 00:55:37.431774 kubelet[3807]: E0716 00:55:37.431738 3807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.150.207:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-4904b64135?timeout=10s\": dial tcp 147.28.150.207:6443: connect: connection refused" interval="800ms" Jul 16 00:55:37.586979 kubelet[3807]: I0716 00:55:37.586904 3807 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:38.874791 kubelet[3807]: E0716 00:55:38.874752 3807 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-n-4904b64135\" not found" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:38.923513 kubelet[3807]: E0716 00:55:38.923401 3807 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.0.1-n-4904b64135.18529546eb0b4e7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-4904b64135,UID:ci-4372.0.1-n-4904b64135,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-4904b64135,},FirstTimestamp:2025-07-16 00:55:36.824180347 +0000 UTC m=+0.762555721,LastTimestamp:2025-07-16 00:55:36.824180347 +0000 UTC m=+0.762555721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-4904b64135,}" Jul 16 00:55:38.975053 kubelet[3807]: I0716 00:55:38.975021 3807 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:39.821180 kubelet[3807]: I0716 00:55:39.821152 3807 apiserver.go:52] "Watching apiserver" Jul 16 00:55:39.829264 kubelet[3807]: I0716 00:55:39.829241 3807 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:55:39.856305 kubelet[3807]: E0716 00:55:39.856278 3807 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.1-n-4904b64135\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.002780 systemd[1]: Reload requested from client PID 4229 ('systemctl') (unit session-9.scope)... Jul 16 00:55:41.002791 systemd[1]: Reloading... Jul 16 00:55:41.072581 zram_generator::config[4278]: No configuration found. Jul 16 00:55:41.149060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:55:41.261363 systemd[1]: Reloading finished in 258 ms. 
Jul 16 00:55:41.282676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:41.309937 systemd[1]: kubelet.service: Deactivated successfully. Jul 16 00:55:41.310214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:41.310265 systemd[1]: kubelet.service: Consumed 1.269s CPU time, 143.6M memory peak. Jul 16 00:55:41.312001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:55:41.451443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:55:41.454957 (kubelet)[4338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:55:41.484152 kubelet[4338]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:55:41.484152 kubelet[4338]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 00:55:41.484152 kubelet[4338]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:55:41.484412 kubelet[4338]: I0716 00:55:41.484204 4338 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:55:41.488977 kubelet[4338]: I0716 00:55:41.488955 4338 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:55:41.489008 kubelet[4338]: I0716 00:55:41.488977 4338 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:55:41.489186 kubelet[4338]: I0716 00:55:41.489177 4338 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:55:41.490403 kubelet[4338]: I0716 00:55:41.490391 4338 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 16 00:55:41.492183 kubelet[4338]: I0716 00:55:41.492164 4338 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:55:41.495050 kubelet[4338]: I0716 00:55:41.495037 4338 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:55:41.514345 kubelet[4338]: I0716 00:55:41.514291 4338 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:55:41.514435 kubelet[4338]: I0716 00:55:41.514423 4338 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:55:41.514547 kubelet[4338]: I0716 00:55:41.514522 4338 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:55:41.514715 kubelet[4338]: I0716 00:55:41.514547 4338 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-4904b64135","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:55:41.514792 kubelet[4338]: I0716 00:55:41.514723 4338 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:55:41.514792 kubelet[4338]: I0716 00:55:41.514732 4338 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:55:41.514792 kubelet[4338]: I0716 00:55:41.514768 4338 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:55:41.514868 kubelet[4338]: I0716 00:55:41.514858 4338 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:55:41.514890 kubelet[4338]: I0716 00:55:41.514870 4338 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:55:41.514890 kubelet[4338]: I0716 00:55:41.514887 4338 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:55:41.514926 kubelet[4338]: I0716 00:55:41.514900 4338 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:55:41.515319 kubelet[4338]: I0716 00:55:41.515294 4338 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:55:41.515808 kubelet[4338]: I0716 00:55:41.515794 4338 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:55:41.516201 kubelet[4338]: I0716 00:55:41.516191 4338 server.go:1274] "Started kubelet" Jul 16 00:55:41.516882 kubelet[4338]: I0716 00:55:41.516258 4338 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Jul 16 00:55:41.517209 kubelet[4338]: I0716 00:55:41.516246 4338 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:55:41.517330 kubelet[4338]: I0716 00:55:41.517286 4338 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:55:41.518739 kubelet[4338]: I0716 00:55:41.518726 4338 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:55:41.518762 kubelet[4338]: I0716 00:55:41.518744 4338 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:55:41.518815 kubelet[4338]: I0716 00:55:41.518804 4338 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:55:41.518860 kubelet[4338]: E0716 00:55:41.518838 4338 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-4904b64135\" not found" Jul 16 00:55:41.518883 kubelet[4338]: I0716 00:55:41.518850 4338 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:55:41.518905 kubelet[4338]: E0716 00:55:41.518882 4338 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:55:41.518971 kubelet[4338]: I0716 00:55:41.518957 4338 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:55:41.519287 kubelet[4338]: I0716 00:55:41.519274 4338 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:55:41.519390 kubelet[4338]: I0716 00:55:41.519374 4338 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:55:41.519733 kubelet[4338]: I0716 00:55:41.519706 4338 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:55:41.520147 kubelet[4338]: I0716 00:55:41.520131 4338 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:55:41.526428 kubelet[4338]: I0716 00:55:41.526370 4338 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:55:41.527414 kubelet[4338]: I0716 00:55:41.527389 4338 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:55:41.527414 kubelet[4338]: I0716 00:55:41.527414 4338 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:55:41.527516 kubelet[4338]: I0716 00:55:41.527432 4338 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:55:41.527516 kubelet[4338]: E0716 00:55:41.527479 4338 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:55:41.549394 kubelet[4338]: I0716 00:55:41.549371 4338 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:55:41.549394 kubelet[4338]: I0716 00:55:41.549388 4338 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:55:41.549498 kubelet[4338]: I0716 00:55:41.549407 4338 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:55:41.549568 kubelet[4338]: I0716 00:55:41.549550 4338 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 00:55:41.549592 kubelet[4338]: I0716 00:55:41.549572 4338 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 00:55:41.549618 kubelet[4338]: I0716 00:55:41.549593 4338 policy_none.go:49] "None policy: Start" Jul 16 00:55:41.550094 kubelet[4338]: I0716 00:55:41.550077 4338 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:55:41.550141 kubelet[4338]: I0716 00:55:41.550102 4338 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:55:41.550245 kubelet[4338]: I0716 00:55:41.550236 4338 state_mem.go:75] "Updated machine memory state" Jul 16 00:55:41.553396 kubelet[4338]: I0716 00:55:41.553376 4338 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:55:41.553549 kubelet[4338]: I0716 00:55:41.553537 4338 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:55:41.553583 kubelet[4338]: I0716 00:55:41.553551 4338 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:55:41.553726 kubelet[4338]: I0716 00:55:41.553708 4338 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:55:41.631246 kubelet[4338]: W0716 00:55:41.631223 4338 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:55:41.631664 kubelet[4338]: W0716 00:55:41.631617 4338 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:55:41.631728 kubelet[4338]: W0716 00:55:41.631708 4338 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:55:41.656154 kubelet[4338]: I0716 00:55:41.656134 4338 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.659951 kubelet[4338]: I0716 00:55:41.659928 4338 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.660003 kubelet[4338]: I0716 00:55:41.659993 4338 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719319 kubelet[4338]: I0716 00:55:41.719299 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719362 kubelet[4338]: I0716 00:55:41.719326 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719362 kubelet[4338]: I0716 00:55:41.719346 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a19cbedb3586510f8c62be10258fa5ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-4904b64135\" (UID: \"a19cbedb3586510f8c62be10258fa5ae\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719403 kubelet[4338]: I0716 00:55:41.719364 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719403 kubelet[4338]: I0716 00:55:41.719391 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719446 kubelet[4338]: I0716 00:55:41.719410 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719446 kubelet[4338]: I0716 00:55:41.719428 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719484 kubelet[4338]: I0716 00:55:41.719447 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5a42b4db91aae2d09e4dd86a8ee2d88-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" (UID: \"a5a42b4db91aae2d09e4dd86a8ee2d88\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:41.719484 kubelet[4338]: I0716 00:55:41.719464 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3314ba6808caeb77769d6aa05eff4fa0-kubeconfig\") pod 
\"kube-scheduler-ci-4372.0.1-n-4904b64135\" (UID: \"3314ba6808caeb77769d6aa05eff4fa0\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-4904b64135" Jul 16 00:55:42.516006 kubelet[4338]: I0716 00:55:42.515983 4338 apiserver.go:52] "Watching apiserver" Jul 16 00:55:42.519122 kubelet[4338]: I0716 00:55:42.519107 4338 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:55:42.537972 kubelet[4338]: W0716 00:55:42.537943 4338 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:55:42.538026 kubelet[4338]: E0716 00:55:42.537998 4338 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372.0.1-n-4904b64135\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" Jul 16 00:55:42.538067 kubelet[4338]: W0716 00:55:42.538023 4338 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:55:42.538102 kubelet[4338]: E0716 00:55:42.538075 4338 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.1-n-4904b64135\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" Jul 16 00:55:42.549561 kubelet[4338]: I0716 00:55:42.549515 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-4904b64135" podStartSLOduration=1.549472507 podStartE2EDuration="1.549472507s" podCreationTimestamp="2025-07-16 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:55:42.549426627 +0000 UTC m=+1.091616201" watchObservedRunningTime="2025-07-16 00:55:42.549472507 +0000 UTC m=+1.091662041" Jul 16 00:55:42.559213 kubelet[4338]: I0716 00:55:42.559173 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-4904b64135" podStartSLOduration=1.5591587869999999 podStartE2EDuration="1.559158787s" podCreationTimestamp="2025-07-16 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:55:42.559100907 +0000 UTC m=+1.101290441" watchObservedRunningTime="2025-07-16 00:55:42.559158787 +0000 UTC m=+1.101348321" Jul 16 00:55:42.559301 kubelet[4338]: I0716 00:55:42.559280 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-4904b64135" podStartSLOduration=1.559275547 podStartE2EDuration="1.559275547s" podCreationTimestamp="2025-07-16 00:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:55:42.554102467 +0000 UTC m=+1.096291961" watchObservedRunningTime="2025-07-16 00:55:42.559275547 +0000 UTC m=+1.101465081" Jul 16 00:55:45.955793 kubelet[4338]: I0716 00:55:45.955755 4338 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 00:55:45.956192 kubelet[4338]: I0716 00:55:45.956140 4338 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 00:55:45.956216 containerd[2825]: time="2025-07-16T00:55:45.956006947Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Jul 16 00:55:46.841066 systemd[1]: Created slice kubepods-besteffort-pod53d523e4_6498_4bad_9750_7f9900ca135f.slice - libcontainer container kubepods-besteffort-pod53d523e4_6498_4bad_9750_7f9900ca135f.slice. Jul 16 00:55:46.846052 kubelet[4338]: I0716 00:55:46.846010 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53d523e4-6498-4bad-9750-7f9900ca135f-kube-proxy\") pod \"kube-proxy-mdbdd\" (UID: \"53d523e4-6498-4bad-9750-7f9900ca135f\") " pod="kube-system/kube-proxy-mdbdd" Jul 16 00:55:46.846106 kubelet[4338]: I0716 00:55:46.846063 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53d523e4-6498-4bad-9750-7f9900ca135f-xtables-lock\") pod \"kube-proxy-mdbdd\" (UID: \"53d523e4-6498-4bad-9750-7f9900ca135f\") " pod="kube-system/kube-proxy-mdbdd" Jul 16 00:55:46.846106 kubelet[4338]: I0716 00:55:46.846084 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6j4r\" (UniqueName: \"kubernetes.io/projected/53d523e4-6498-4bad-9750-7f9900ca135f-kube-api-access-x6j4r\") pod \"kube-proxy-mdbdd\" (UID: \"53d523e4-6498-4bad-9750-7f9900ca135f\") " pod="kube-system/kube-proxy-mdbdd" Jul 16 00:55:46.846148 kubelet[4338]: I0716 00:55:46.846104 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53d523e4-6498-4bad-9750-7f9900ca135f-lib-modules\") pod \"kube-proxy-mdbdd\" (UID: \"53d523e4-6498-4bad-9750-7f9900ca135f\") " pod="kube-system/kube-proxy-mdbdd" Jul 16 00:55:47.046608 kubelet[4338]: I0716 00:55:47.046567 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7632d1a5-957b-45ba-9587-b211fa691ae7-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-tzm9b\" (UID: \"7632d1a5-957b-45ba-9587-b211fa691ae7\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tzm9b" Jul 16 00:55:47.046608 kubelet[4338]: I0716 00:55:47.046601 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkkq\" (UniqueName: \"kubernetes.io/projected/7632d1a5-957b-45ba-9587-b211fa691ae7-kube-api-access-7lkkq\") pod \"tigera-operator-5bf8dfcb4-tzm9b\" (UID: \"7632d1a5-957b-45ba-9587-b211fa691ae7\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-tzm9b" Jul 16 00:55:47.049320 systemd[1]: Created slice kubepods-besteffort-pod7632d1a5_957b_45ba_9587_b211fa691ae7.slice - libcontainer container kubepods-besteffort-pod7632d1a5_957b_45ba_9587_b211fa691ae7.slice. 
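
The two kubepods-besteffort-pod*.slice units created above appear to follow the naming the kubelet uses with the systemd cgroup driver: a QoS-class prefix plus the pod UID with its dashes escaped to underscores. A minimal Python sketch of that mapping, checked only against the two slices visible in this log (the helper name and the besteffort-only coverage are illustrative, not kubelet code):

    # Sketch: derive the systemd slice name seen above for a besteffort pod.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # dashes in the UID are escaped to underscores, since "-" separates
        # hierarchy levels in systemd slice unit names
        escaped_uid = pod_uid.replace("-", "_")
        return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

    assert pod_slice_name("besteffort", "53d523e4-6498-4bad-9750-7f9900ca135f") == \
        "kubepods-besteffort-pod53d523e4_6498_4bad_9750_7f9900ca135f.slice"
    assert pod_slice_name("besteffort", "7632d1a5-957b-45ba-9587-b211fa691ae7") == \
        "kubepods-besteffort-pod7632d1a5_957b_45ba_9587_b211fa691ae7.slice"
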
Jul 16 00:55:47.164771 containerd[2825]: time="2025-07-16T00:55:47.164735947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdbdd,Uid:53d523e4-6498-4bad-9750-7f9900ca135f,Namespace:kube-system,Attempt:0,}" Jul 16 00:55:47.173067 containerd[2825]: time="2025-07-16T00:55:47.173041907Z" level=info msg="connecting to shim 41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f" address="unix:///run/containerd/s/61354c308890b37b21bba61b5a5a5b4ade55f49955f200aa75fd132a9b927626" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:47.206757 systemd[1]: Started cri-containerd-41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f.scope - libcontainer container 41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f. Jul 16 00:55:47.224194 containerd[2825]: time="2025-07-16T00:55:47.224167827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdbdd,Uid:53d523e4-6498-4bad-9750-7f9900ca135f,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f\"" Jul 16 00:55:47.226201 containerd[2825]: time="2025-07-16T00:55:47.226177667Z" level=info msg="CreateContainer within sandbox \"41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 16 00:55:47.231348 containerd[2825]: time="2025-07-16T00:55:47.231323227Z" level=info msg="Container 82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:47.236840 containerd[2825]: time="2025-07-16T00:55:47.236804707Z" level=info msg="CreateContainer within sandbox \"41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0\"" Jul 16 00:55:47.237233 containerd[2825]: time="2025-07-16T00:55:47.237212627Z" level=info msg="StartContainer for \"82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0\"" Jul 16 00:55:47.238475 containerd[2825]: time="2025-07-16T00:55:47.238455787Z" level=info msg="connecting to shim 82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0" address="unix:///run/containerd/s/61354c308890b37b21bba61b5a5a5b4ade55f49955f200aa75fd132a9b927626" protocol=ttrpc version=3 Jul 16 00:55:47.271737 systemd[1]: Started cri-containerd-82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0.scope - libcontainer container 82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0. 
Jul 16 00:55:47.299098 containerd[2825]: time="2025-07-16T00:55:47.299067787Z" level=info msg="StartContainer for \"82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0\" returns successfully" Jul 16 00:55:47.351966 containerd[2825]: time="2025-07-16T00:55:47.351932347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tzm9b,Uid:7632d1a5-957b-45ba-9587-b211fa691ae7,Namespace:tigera-operator,Attempt:0,}" Jul 16 00:55:47.363439 containerd[2825]: time="2025-07-16T00:55:47.363414067Z" level=info msg="connecting to shim 040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933" address="unix:///run/containerd/s/f84d2c7d43fe32acdf17d25f21c89242229a7e017f8649fa6ed20b79f10bd9b1" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:47.392686 systemd[1]: Started cri-containerd-040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933.scope - libcontainer container 040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933. Jul 16 00:55:47.417974 containerd[2825]: time="2025-07-16T00:55:47.417899507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-tzm9b,Uid:7632d1a5-957b-45ba-9587-b211fa691ae7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933\"" Jul 16 00:55:47.419106 containerd[2825]: time="2025-07-16T00:55:47.419061227Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 16 00:55:47.547379 kubelet[4338]: I0716 00:55:47.547332 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdbdd" podStartSLOduration=1.547317347 podStartE2EDuration="1.547317347s" podCreationTimestamp="2025-07-16 00:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:55:47.547171187 +0000 UTC m=+6.089360721" watchObservedRunningTime="2025-07-16 00:55:47.547317347 +0000 UTC m=+6.089506881" Jul 16 00:55:48.121022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189279152.mount: Deactivated successfully. 
Jul 16 00:55:48.544103 containerd[2825]: time="2025-07-16T00:55:48.544068627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:48.544385 containerd[2825]: time="2025-07-16T00:55:48.544042947Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 16 00:55:48.544817 containerd[2825]: time="2025-07-16T00:55:48.544798467Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:48.546453 containerd[2825]: time="2025-07-16T00:55:48.546431147Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:48.547111 containerd[2825]: time="2025-07-16T00:55:48.547095627Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.12800364s" Jul 16 00:55:48.547135 containerd[2825]: time="2025-07-16T00:55:48.547116947Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 16 00:55:48.548593 containerd[2825]: time="2025-07-16T00:55:48.548571627Z" level=info msg="CreateContainer within sandbox \"040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 16 00:55:48.553524 containerd[2825]: time="2025-07-16T00:55:48.553182027Z" level=info msg="Container c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:48.555304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978330434.mount: Deactivated successfully. Jul 16 00:55:48.555807 containerd[2825]: time="2025-07-16T00:55:48.555785187Z" level=info msg="CreateContainer within sandbox \"040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b\"" Jul 16 00:55:48.556094 containerd[2825]: time="2025-07-16T00:55:48.556072707Z" level=info msg="StartContainer for \"c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b\"" Jul 16 00:55:48.556802 containerd[2825]: time="2025-07-16T00:55:48.556777027Z" level=info msg="connecting to shim c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b" address="unix:///run/containerd/s/f84d2c7d43fe32acdf17d25f21c89242229a7e017f8649fa6ed20b79f10bd9b1" protocol=ttrpc version=3 Jul 16 00:55:48.586747 systemd[1]: Started cri-containerd-c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b.scope - libcontainer container c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b. 
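
The pull above reports bytes read=22150610 for the operator image and a total pull time of 1.12800364s. Treating the bytes-read figure as the amount actually fetched (an assumption; it may be the compressed transfer size rather than the unpacked image), the effective rate works out to roughly 19-20 MB/s:

    # Back-of-the-envelope rate for the quay.io/tigera/operator:v1.38.3 pull,
    # using the figures printed by containerd above.
    bytes_read   = 22_150_610    # "bytes read" from the stop-pulling event
    pull_seconds = 1.12800364    # duration from the "Pulled image ..." event
    print(f"~{bytes_read / pull_seconds / 1_000_000:.1f} MB/s")  # ~19.6 MB/s
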
Jul 16 00:55:48.606950 containerd[2825]: time="2025-07-16T00:55:48.606922227Z" level=info msg="StartContainer for \"c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b\" returns successfully" Jul 16 00:55:49.550052 kubelet[4338]: I0716 00:55:49.550006 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzm9b" podStartSLOduration=1.421080587 podStartE2EDuration="2.549981547s" podCreationTimestamp="2025-07-16 00:55:47 +0000 UTC" firstStartedPulling="2025-07-16 00:55:47.418736627 +0000 UTC m=+5.960926161" lastFinishedPulling="2025-07-16 00:55:48.547637627 +0000 UTC m=+7.089827121" observedRunningTime="2025-07-16 00:55:49.549900627 +0000 UTC m=+8.092090161" watchObservedRunningTime="2025-07-16 00:55:49.549981547 +0000 UTC m=+8.092171041" Jul 16 00:55:53.561612 sudo[3103]: pam_unix(sudo:session): session closed for user root Jul 16 00:55:53.623649 sshd[3102]: Connection closed by 139.178.89.65 port 55326 Jul 16 00:55:53.625027 sshd-session[3100]: pam_unix(sshd:session): session closed for user core Jul 16 00:55:53.627932 systemd[1]: sshd@6-147.28.150.207:22-139.178.89.65:55326.service: Deactivated successfully. Jul 16 00:55:53.631104 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 00:55:53.631377 systemd[1]: session-9.scope: Consumed 7.047s CPU time, 248.2M memory peak. Jul 16 00:55:53.632632 systemd-logind[2810]: Session 9 logged out. Waiting for processes to exit. Jul 16 00:55:53.633660 systemd-logind[2810]: Removed session 9. Jul 16 00:55:55.103587 update_engine[2818]: I20250716 00:55:55.103065 2818 update_attempter.cc:509] Updating boot flags... Jul 16 00:55:58.199221 systemd[1]: Created slice kubepods-besteffort-pod9d6c061d_1dcc_4944_ad53_e9d285435cd6.slice - libcontainer container kubepods-besteffort-pod9d6c061d_1dcc_4944_ad53_e9d285435cd6.slice. 
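
The tigera-operator startup record above reports both podStartSLOduration=1.421080587 and podStartE2EDuration=2.549981547s, along with the pull window (firstStartedPulling / lastFinishedPulling). The numbers are consistent with the SLO figure being the end-to-end time minus the image pull window; a quick check of that reading using the timestamps as printed (an inference from this record, not a statement of kubelet internals):

    # Seconds within minute 00:55, all on 2025-07-16, taken from the
    # pod_startup_latency_tracker record above; plain subtraction is enough.
    created            = 47.000000000   # podCreationTimestamp     00:55:47
    observed_running   = 49.549981547   # watchObservedRunningTime 00:55:49.549981547
    first_started_pull = 47.418736627   # firstStartedPulling      00:55:47.418736627
    last_finished_pull = 48.547637627   # lastFinishedPulling      00:55:48.547637627

    e2e = observed_running - created                        # ~2.550 s (log: 2.549981547s)
    pull_window = last_finished_pull - first_started_pull   # ~1.129 s
    slo_estimate = e2e - pull_window                         # ~1.421 s (log: 1.421080587)
    print(f"e2e={e2e:.3f}s  pull={pull_window:.3f}s  slo~{slo_estimate:.3f}s")
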
Jul 16 00:55:58.312364 kubelet[4338]: I0716 00:55:58.312322 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fxt\" (UniqueName: \"kubernetes.io/projected/9d6c061d-1dcc-4944-ad53-e9d285435cd6-kube-api-access-p6fxt\") pod \"calico-typha-5945695b9b-cczl4\" (UID: \"9d6c061d-1dcc-4944-ad53-e9d285435cd6\") " pod="calico-system/calico-typha-5945695b9b-cczl4" Jul 16 00:55:58.312364 kubelet[4338]: I0716 00:55:58.312366 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d6c061d-1dcc-4944-ad53-e9d285435cd6-typha-certs\") pod \"calico-typha-5945695b9b-cczl4\" (UID: \"9d6c061d-1dcc-4944-ad53-e9d285435cd6\") " pod="calico-system/calico-typha-5945695b9b-cczl4" Jul 16 00:55:58.312728 kubelet[4338]: I0716 00:55:58.312387 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6c061d-1dcc-4944-ad53-e9d285435cd6-tigera-ca-bundle\") pod \"calico-typha-5945695b9b-cczl4\" (UID: \"9d6c061d-1dcc-4944-ad53-e9d285435cd6\") " pod="calico-system/calico-typha-5945695b9b-cczl4" Jul 16 00:55:58.502603 containerd[2825]: time="2025-07-16T00:55:58.502458088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5945695b9b-cczl4,Uid:9d6c061d-1dcc-4944-ad53-e9d285435cd6,Namespace:calico-system,Attempt:0,}" Jul 16 00:55:58.511298 containerd[2825]: time="2025-07-16T00:55:58.511268769Z" level=info msg="connecting to shim 61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117" address="unix:///run/containerd/s/22819e86eb8a803408425a2be82937eac87d34df9bb5ac91edbfb05ee018a046" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:58.546758 systemd[1]: Started cri-containerd-61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117.scope - libcontainer container 61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117. Jul 16 00:55:58.550915 systemd[1]: Created slice kubepods-besteffort-podf986caf6_7c92_49d2_aed9_0d6a43171cac.slice - libcontainer container kubepods-besteffort-podf986caf6_7c92_49d2_aed9_0d6a43171cac.slice. 
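
Every VerifyControllerAttachedVolume record in this section carries a volume UniqueName of the form <plugin>/<pod-UID>-<volume-name>, e.g. kubernetes.io/secret/9d6c061d-1dcc-4944-ad53-e9d285435cd6-typha-certs above. A small parsing sketch under that assumed layout (the split needs the pod UID, since volume names themselves may contain dashes; the helper is illustrative only):

    # Sketch: pick apart the UniqueName strings seen in the reconciler records.
    def split_unique_name(unique_name: str, pod_uid: str):
        plugin, _, rest = unique_name.rpartition("/")
        assert rest.startswith(pod_uid + "-")
        return plugin, pod_uid, rest[len(pod_uid) + 1:]

    print(split_unique_name(
        "kubernetes.io/secret/9d6c061d-1dcc-4944-ad53-e9d285435cd6-typha-certs",
        "9d6c061d-1dcc-4944-ad53-e9d285435cd6"))
    # -> ('kubernetes.io/secret', '9d6c061d-1dcc-4944-ad53-e9d285435cd6', 'typha-certs')
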
Jul 16 00:55:58.572218 containerd[2825]: time="2025-07-16T00:55:58.572187512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5945695b9b-cczl4,Uid:9d6c061d-1dcc-4944-ad53-e9d285435cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117\"" Jul 16 00:55:58.573191 containerd[2825]: time="2025-07-16T00:55:58.573173250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 00:55:58.715902 kubelet[4338]: I0716 00:55:58.715857 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-var-lib-calico\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.715902 kubelet[4338]: I0716 00:55:58.715898 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-var-run-calico\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716094 kubelet[4338]: I0716 00:55:58.715917 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-lib-modules\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716094 kubelet[4338]: I0716 00:55:58.715975 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-cni-net-dir\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716094 kubelet[4338]: I0716 00:55:58.716020 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qnp\" (UniqueName: \"kubernetes.io/projected/f986caf6-7c92-49d2-aed9-0d6a43171cac-kube-api-access-76qnp\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716094 kubelet[4338]: I0716 00:55:58.716067 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-policysync\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716094 kubelet[4338]: I0716 00:55:58.716086 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f986caf6-7c92-49d2-aed9-0d6a43171cac-tigera-ca-bundle\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716243 kubelet[4338]: I0716 00:55:58.716103 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-cni-bin-dir\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " 
pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716243 kubelet[4338]: I0716 00:55:58.716119 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-xtables-lock\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716243 kubelet[4338]: I0716 00:55:58.716148 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f986caf6-7c92-49d2-aed9-0d6a43171cac-node-certs\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716243 kubelet[4338]: I0716 00:55:58.716169 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-cni-log-dir\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.716243 kubelet[4338]: I0716 00:55:58.716184 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f986caf6-7c92-49d2-aed9-0d6a43171cac-flexvol-driver-host\") pod \"calico-node-9256d\" (UID: \"f986caf6-7c92-49d2-aed9-0d6a43171cac\") " pod="calico-system/calico-node-9256d" Jul 16 00:55:58.818124 kubelet[4338]: E0716 00:55:58.818051 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.818124 kubelet[4338]: W0716 00:55:58.818069 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.818124 kubelet[4338]: E0716 00:55:58.818085 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.819953 kubelet[4338]: E0716 00:55:58.819931 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.819983 kubelet[4338]: W0716 00:55:58.819949 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.819983 kubelet[4338]: E0716 00:55:58.819965 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:58.825810 kubelet[4338]: E0716 00:55:58.825793 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.825883 kubelet[4338]: W0716 00:55:58.825810 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.825883 kubelet[4338]: E0716 00:55:58.825824 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.832641 kubelet[4338]: E0716 00:55:58.832608 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb2b4" podUID="8cdc2fb9-6825-4b25-99b7-48a3daeb5053" Jul 16 00:55:58.852831 containerd[2825]: time="2025-07-16T00:55:58.852800609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9256d,Uid:f986caf6-7c92-49d2-aed9-0d6a43171cac,Namespace:calico-system,Attempt:0,}" Jul 16 00:55:58.861135 containerd[2825]: time="2025-07-16T00:55:58.861105861Z" level=info msg="connecting to shim 059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a" address="unix:///run/containerd/s/6f8be7dac0c3b8533dae4e163ebfb0b841ffbcd16a31311bd9594608da9633bc" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:55:58.896692 systemd[1]: Started cri-containerd-059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a.scope - libcontainer container 059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a. Jul 16 00:55:58.913994 containerd[2825]: time="2025-07-16T00:55:58.913965746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9256d,Uid:f986caf6-7c92-49d2-aed9-0d6a43171cac,Namespace:calico-system,Attempt:0,} returns sandbox id \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\"" Jul 16 00:55:58.920312 kubelet[4338]: E0716 00:55:58.920292 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.920345 kubelet[4338]: W0716 00:55:58.920314 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.920345 kubelet[4338]: E0716 00:55:58.920331 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.920508 kubelet[4338]: E0716 00:55:58.920498 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.920532 kubelet[4338]: W0716 00:55:58.920507 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.920532 kubelet[4338]: E0716 00:55:58.920515 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:58.920692 kubelet[4338]: E0716 00:55:58.920683 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.920718 kubelet[4338]: W0716 00:55:58.920692 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.920718 kubelet[4338]: E0716 00:55:58.920699 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.920862 kubelet[4338]: E0716 00:55:58.920853 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.920881 kubelet[4338]: W0716 00:55:58.920861 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.920881 kubelet[4338]: E0716 00:55:58.920869 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.921601 kubelet[4338]: E0716 00:55:58.921553 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.921601 kubelet[4338]: W0716 00:55:58.921560 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.921601 kubelet[4338]: E0716 00:55:58.921574 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.922295 kubelet[4338]: E0716 00:55:58.921945 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.922295 kubelet[4338]: W0716 00:55:58.921958 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.922295 kubelet[4338]: E0716 00:55:58.921968 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.922295 kubelet[4338]: E0716 00:55:58.922125 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.922295 kubelet[4338]: W0716 00:55:58.922133 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.922295 kubelet[4338]: E0716 00:55:58.922140 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:58.922778 kubelet[4338]: E0716 00:55:58.922737 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.922778 kubelet[4338]: W0716 00:55:58.922747 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.922778 kubelet[4338]: E0716 00:55:58.922757 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.922960 kubelet[4338]: E0716 00:55:58.922947 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.922988 kubelet[4338]: W0716 00:55:58.922968 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.922988 kubelet[4338]: E0716 00:55:58.922978 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.923167 kubelet[4338]: E0716 00:55:58.923157 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.923190 kubelet[4338]: W0716 00:55:58.923167 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.923190 kubelet[4338]: E0716 00:55:58.923176 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.923356 kubelet[4338]: E0716 00:55:58.923345 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.923356 kubelet[4338]: W0716 00:55:58.923354 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.923406 kubelet[4338]: E0716 00:55:58.923363 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.923531 kubelet[4338]: E0716 00:55:58.923521 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.923531 kubelet[4338]: W0716 00:55:58.923530 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.923576 kubelet[4338]: E0716 00:55:58.923540 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:58.923686 kubelet[4338]: E0716 00:55:58.923677 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.923705 kubelet[4338]: W0716 00:55:58.923685 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.923705 kubelet[4338]: E0716 00:55:58.923693 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.923867 kubelet[4338]: E0716 00:55:58.923859 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.923892 kubelet[4338]: W0716 00:55:58.923866 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.923892 kubelet[4338]: E0716 00:55:58.923874 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.924080 kubelet[4338]: E0716 00:55:58.924072 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.924099 kubelet[4338]: W0716 00:55:58.924080 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.924099 kubelet[4338]: E0716 00:55:58.924086 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.924252 kubelet[4338]: E0716 00:55:58.924244 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.924274 kubelet[4338]: W0716 00:55:58.924253 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.924274 kubelet[4338]: E0716 00:55:58.924261 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.924469 kubelet[4338]: E0716 00:55:58.924461 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.924490 kubelet[4338]: W0716 00:55:58.924469 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.924490 kubelet[4338]: E0716 00:55:58.924476 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:58.924654 kubelet[4338]: E0716 00:55:58.924645 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.924674 kubelet[4338]: W0716 00:55:58.924654 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.924674 kubelet[4338]: E0716 00:55:58.924662 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.924819 kubelet[4338]: E0716 00:55:58.924812 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.924843 kubelet[4338]: W0716 00:55:58.924819 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.924843 kubelet[4338]: E0716 00:55:58.924826 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:58.924983 kubelet[4338]: E0716 00:55:58.924975 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:58.925002 kubelet[4338]: W0716 00:55:58.924982 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:58.925002 kubelet[4338]: E0716 00:55:58.924990 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.018910 kubelet[4338]: E0716 00:55:59.018890 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.018910 kubelet[4338]: W0716 00:55:59.018905 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019055 kubelet[4338]: E0716 00:55:59.018920 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.019055 kubelet[4338]: I0716 00:55:59.018944 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8cdc2fb9-6825-4b25-99b7-48a3daeb5053-kubelet-dir\") pod \"csi-node-driver-hb2b4\" (UID: \"8cdc2fb9-6825-4b25-99b7-48a3daeb5053\") " pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:55:59.019099 kubelet[4338]: E0716 00:55:59.019079 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.019099 kubelet[4338]: W0716 00:55:59.019090 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019138 kubelet[4338]: E0716 00:55:59.019101 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.019138 kubelet[4338]: I0716 00:55:59.019115 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8cdc2fb9-6825-4b25-99b7-48a3daeb5053-varrun\") pod \"csi-node-driver-hb2b4\" (UID: \"8cdc2fb9-6825-4b25-99b7-48a3daeb5053\") " pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:55:59.019319 kubelet[4338]: E0716 00:55:59.019305 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.019319 kubelet[4338]: W0716 00:55:59.019315 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019365 kubelet[4338]: E0716 00:55:59.019327 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.019365 kubelet[4338]: I0716 00:55:59.019341 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzdsr\" (UniqueName: \"kubernetes.io/projected/8cdc2fb9-6825-4b25-99b7-48a3daeb5053-kube-api-access-xzdsr\") pod \"csi-node-driver-hb2b4\" (UID: \"8cdc2fb9-6825-4b25-99b7-48a3daeb5053\") " pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:55:59.019502 kubelet[4338]: E0716 00:55:59.019492 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.019524 kubelet[4338]: W0716 00:55:59.019502 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019524 kubelet[4338]: E0716 00:55:59.019514 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.019572 kubelet[4338]: I0716 00:55:59.019526 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8cdc2fb9-6825-4b25-99b7-48a3daeb5053-socket-dir\") pod \"csi-node-driver-hb2b4\" (UID: \"8cdc2fb9-6825-4b25-99b7-48a3daeb5053\") " pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:55:59.019666 kubelet[4338]: E0716 00:55:59.019654 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.019666 kubelet[4338]: W0716 00:55:59.019665 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019707 kubelet[4338]: E0716 00:55:59.019677 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.019707 kubelet[4338]: I0716 00:55:59.019690 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8cdc2fb9-6825-4b25-99b7-48a3daeb5053-registration-dir\") pod \"csi-node-driver-hb2b4\" (UID: \"8cdc2fb9-6825-4b25-99b7-48a3daeb5053\") " pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:55:59.019856 kubelet[4338]: E0716 00:55:59.019846 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.019878 kubelet[4338]: W0716 00:55:59.019856 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.019878 kubelet[4338]: E0716 00:55:59.019867 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.019995 kubelet[4338]: E0716 00:55:59.019987 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020017 kubelet[4338]: W0716 00:55:59.019995 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020017 kubelet[4338]: E0716 00:55:59.020005 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.020155 kubelet[4338]: E0716 00:55:59.020146 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020177 kubelet[4338]: W0716 00:55:59.020155 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020177 kubelet[4338]: E0716 00:55:59.020165 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.020297 kubelet[4338]: E0716 00:55:59.020288 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020318 kubelet[4338]: W0716 00:55:59.020296 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020338 kubelet[4338]: E0716 00:55:59.020318 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.020433 kubelet[4338]: E0716 00:55:59.020425 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020454 kubelet[4338]: W0716 00:55:59.020433 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020454 kubelet[4338]: E0716 00:55:59.020449 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.020617 kubelet[4338]: E0716 00:55:59.020608 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020617 kubelet[4338]: W0716 00:55:59.020617 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020663 kubelet[4338]: E0716 00:55:59.020627 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.020763 kubelet[4338]: E0716 00:55:59.020755 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020785 kubelet[4338]: W0716 00:55:59.020763 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020785 kubelet[4338]: E0716 00:55:59.020773 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.020895 kubelet[4338]: E0716 00:55:59.020886 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.020916 kubelet[4338]: W0716 00:55:59.020895 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.020916 kubelet[4338]: E0716 00:55:59.020903 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.021042 kubelet[4338]: E0716 00:55:59.021033 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.021061 kubelet[4338]: W0716 00:55:59.021041 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.021061 kubelet[4338]: E0716 00:55:59.021049 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.021184 kubelet[4338]: E0716 00:55:59.021176 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.021206 kubelet[4338]: W0716 00:55:59.021185 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.021206 kubelet[4338]: E0716 00:55:59.021191 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.120887 kubelet[4338]: E0716 00:55:59.120812 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.120887 kubelet[4338]: W0716 00:55:59.120826 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.120887 kubelet[4338]: E0716 00:55:59.120842 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.121070 kubelet[4338]: E0716 00:55:59.121058 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.121070 kubelet[4338]: W0716 00:55:59.121066 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.121114 kubelet[4338]: E0716 00:55:59.121078 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.121260 kubelet[4338]: E0716 00:55:59.121251 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.121282 kubelet[4338]: W0716 00:55:59.121259 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.121282 kubelet[4338]: E0716 00:55:59.121270 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.121662 kubelet[4338]: E0716 00:55:59.121648 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.121683 kubelet[4338]: W0716 00:55:59.121661 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.121683 kubelet[4338]: E0716 00:55:59.121676 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.121850 kubelet[4338]: E0716 00:55:59.121838 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.121850 kubelet[4338]: W0716 00:55:59.121847 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.121896 kubelet[4338]: E0716 00:55:59.121858 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.122038 kubelet[4338]: E0716 00:55:59.122023 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.122038 kubelet[4338]: W0716 00:55:59.122032 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.122099 kubelet[4338]: E0716 00:55:59.122044 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.122340 kubelet[4338]: E0716 00:55:59.122325 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.122340 kubelet[4338]: W0716 00:55:59.122336 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.122381 kubelet[4338]: E0716 00:55:59.122365 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.122505 kubelet[4338]: E0716 00:55:59.122493 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.122505 kubelet[4338]: W0716 00:55:59.122502 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.122557 kubelet[4338]: E0716 00:55:59.122517 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.122648 kubelet[4338]: E0716 00:55:59.122639 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.122648 kubelet[4338]: W0716 00:55:59.122647 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.122692 kubelet[4338]: E0716 00:55:59.122662 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.122849 kubelet[4338]: E0716 00:55:59.122838 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.122873 kubelet[4338]: W0716 00:55:59.122849 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.122873 kubelet[4338]: E0716 00:55:59.122867 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.123133 kubelet[4338]: E0716 00:55:59.123120 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.123133 kubelet[4338]: W0716 00:55:59.123131 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.123186 kubelet[4338]: E0716 00:55:59.123147 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.123352 kubelet[4338]: E0716 00:55:59.123341 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.123352 kubelet[4338]: W0716 00:55:59.123349 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.123409 kubelet[4338]: E0716 00:55:59.123362 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.123607 kubelet[4338]: E0716 00:55:59.123599 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.123677 kubelet[4338]: W0716 00:55:59.123607 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.123677 kubelet[4338]: E0716 00:55:59.123619 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.123868 kubelet[4338]: E0716 00:55:59.123852 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.123893 kubelet[4338]: W0716 00:55:59.123869 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.123893 kubelet[4338]: E0716 00:55:59.123884 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124018 kubelet[4338]: E0716 00:55:59.124009 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124038 kubelet[4338]: W0716 00:55:59.124018 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124059 kubelet[4338]: E0716 00:55:59.124034 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124149 kubelet[4338]: E0716 00:55:59.124141 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124173 kubelet[4338]: W0716 00:55:59.124149 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124192 kubelet[4338]: E0716 00:55:59.124173 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124269 kubelet[4338]: E0716 00:55:59.124260 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124288 kubelet[4338]: W0716 00:55:59.124269 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124288 kubelet[4338]: E0716 00:55:59.124284 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124390 kubelet[4338]: E0716 00:55:59.124382 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124414 kubelet[4338]: W0716 00:55:59.124390 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124414 kubelet[4338]: E0716 00:55:59.124405 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.124529 kubelet[4338]: E0716 00:55:59.124520 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124551 kubelet[4338]: W0716 00:55:59.124529 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124551 kubelet[4338]: E0716 00:55:59.124540 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124702 kubelet[4338]: E0716 00:55:59.124693 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124702 kubelet[4338]: W0716 00:55:59.124702 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124748 kubelet[4338]: E0716 00:55:59.124713 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.124884 kubelet[4338]: E0716 00:55:59.124874 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.124905 kubelet[4338]: W0716 00:55:59.124884 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.124905 kubelet[4338]: E0716 00:55:59.124894 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.125119 kubelet[4338]: E0716 00:55:59.125110 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.125141 kubelet[4338]: W0716 00:55:59.125122 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.125141 kubelet[4338]: E0716 00:55:59.125132 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.125381 kubelet[4338]: E0716 00:55:59.125371 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.125400 kubelet[4338]: W0716 00:55:59.125381 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.125400 kubelet[4338]: E0716 00:55:59.125392 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.125535 kubelet[4338]: E0716 00:55:59.125526 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.125557 kubelet[4338]: W0716 00:55:59.125535 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.125557 kubelet[4338]: E0716 00:55:59.125543 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.125701 kubelet[4338]: E0716 00:55:59.125692 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.125725 kubelet[4338]: W0716 00:55:59.125702 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.125725 kubelet[4338]: E0716 00:55:59.125709 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:55:59.132256 kubelet[4338]: E0716 00:55:59.132238 4338 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:55:59.132256 kubelet[4338]: W0716 00:55:59.132252 4338 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:55:59.132305 kubelet[4338]: E0716 00:55:59.132263 4338 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:55:59.547137 containerd[2825]: time="2025-07-16T00:55:59.547095286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.547513 containerd[2825]: time="2025-07-16T00:55:59.547156165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 16 00:55:59.547752 containerd[2825]: time="2025-07-16T00:55:59.547730393Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.549204 containerd[2825]: time="2025-07-16T00:55:59.549184962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.549851 containerd[2825]: time="2025-07-16T00:55:59.549828668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 976.629259ms" Jul 16 00:55:59.549873 containerd[2825]: time="2025-07-16T00:55:59.549859188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 16 00:55:59.550543 containerd[2825]: time="2025-07-16T00:55:59.550525053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 16 00:55:59.554991 containerd[2825]: time="2025-07-16T00:55:59.554968559Z" level=info msg="CreateContainer within sandbox \"61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 16 00:55:59.559696 containerd[2825]: time="2025-07-16T00:55:59.559671260Z" level=info msg="Container 5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:59.563123 containerd[2825]: time="2025-07-16T00:55:59.563099227Z" level=info msg="CreateContainer within sandbox \"61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28\"" Jul 16 00:55:59.563364 containerd[2825]: time="2025-07-16T00:55:59.563345542Z" level=info msg="StartContainer for \"5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28\"" Jul 16 00:55:59.564328 containerd[2825]: time="2025-07-16T00:55:59.564309561Z" level=info msg="connecting to shim 5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28" address="unix:///run/containerd/s/22819e86eb8a803408425a2be82937eac87d34df9bb5ac91edbfb05ee018a046" protocol=ttrpc version=3 Jul 16 00:55:59.588739 systemd[1]: Started cri-containerd-5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28.scope - libcontainer container 5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28. 
Jul 16 00:55:59.616758 containerd[2825]: time="2025-07-16T00:55:59.616721731Z" level=info msg="StartContainer for \"5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28\" returns successfully" Jul 16 00:55:59.862013 containerd[2825]: time="2025-07-16T00:55:59.861900575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.862013 containerd[2825]: time="2025-07-16T00:55:59.861924854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 16 00:55:59.862505 containerd[2825]: time="2025-07-16T00:55:59.862487562Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.864067 containerd[2825]: time="2025-07-16T00:55:59.864043929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:55:59.864661 containerd[2825]: time="2025-07-16T00:55:59.864632077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 314.078664ms" Jul 16 00:55:59.864705 containerd[2825]: time="2025-07-16T00:55:59.864659476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 16 00:55:59.866148 containerd[2825]: time="2025-07-16T00:55:59.866125925Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 16 00:55:59.870483 containerd[2825]: time="2025-07-16T00:55:59.870454793Z" level=info msg="Container 0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:55:59.876045 containerd[2825]: time="2025-07-16T00:55:59.876011956Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\"" Jul 16 00:55:59.876343 containerd[2825]: time="2025-07-16T00:55:59.876320749Z" level=info msg="StartContainer for \"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\"" Jul 16 00:55:59.877630 containerd[2825]: time="2025-07-16T00:55:59.877610522Z" level=info msg="connecting to shim 0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c" address="unix:///run/containerd/s/6f8be7dac0c3b8533dae4e163ebfb0b841ffbcd16a31311bd9594608da9633bc" protocol=ttrpc version=3 Jul 16 00:55:59.907708 systemd[1]: Started cri-containerd-0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c.scope - libcontainer container 0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c. 
Jul 16 00:55:59.934546 containerd[2825]: time="2025-07-16T00:55:59.934514556Z" level=info msg="StartContainer for \"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\" returns successfully" Jul 16 00:55:59.943907 systemd[1]: cri-containerd-0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c.scope: Deactivated successfully. Jul 16 00:55:59.945320 containerd[2825]: time="2025-07-16T00:55:59.945284807Z" level=info msg="received exit event container_id:\"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\" id:\"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\" pid:5365 exited_at:{seconds:1752627359 nanos:945026373}" Jul 16 00:55:59.945416 containerd[2825]: time="2025-07-16T00:55:59.945376486Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\" id:\"0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c\" pid:5365 exited_at:{seconds:1752627359 nanos:945026373}" Jul 16 00:56:00.528112 kubelet[4338]: E0716 00:56:00.528064 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hb2b4" podUID="8cdc2fb9-6825-4b25-99b7-48a3daeb5053" Jul 16 00:56:00.562109 containerd[2825]: time="2025-07-16T00:56:00.562077760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 16 00:56:00.579201 kubelet[4338]: I0716 00:56:00.579149 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5945695b9b-cczl4" podStartSLOduration=1.60169806 podStartE2EDuration="2.579133781s" podCreationTimestamp="2025-07-16 00:55:58 +0000 UTC" firstStartedPulling="2025-07-16 00:55:58.572960535 +0000 UTC m=+17.115150069" lastFinishedPulling="2025-07-16 00:55:59.550396256 +0000 UTC m=+18.092585790" observedRunningTime="2025-07-16 00:56:00.579060142 +0000 UTC m=+19.121249676" watchObservedRunningTime="2025-07-16 00:56:00.579133781 +0000 UTC m=+19.121323315" Jul 16 00:56:01.564221 kubelet[4338]: I0716 00:56:01.564190 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:01.821101 containerd[2825]: time="2025-07-16T00:56:01.821028725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:01.821101 containerd[2825]: time="2025-07-16T00:56:01.821054845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 16 00:56:01.821685 containerd[2825]: time="2025-07-16T00:56:01.821664873Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:01.823507 containerd[2825]: time="2025-07-16T00:56:01.823487879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:01.824164 containerd[2825]: time="2025-07-16T00:56:01.824142147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.262028988s" Jul 16 00:56:01.824192 containerd[2825]: time="2025-07-16T00:56:01.824171707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 16 00:56:01.825743 containerd[2825]: time="2025-07-16T00:56:01.825722798Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 16 00:56:01.830209 containerd[2825]: time="2025-07-16T00:56:01.830182515Z" level=info msg="Container 56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:01.836741 containerd[2825]: time="2025-07-16T00:56:01.836711593Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\"" Jul 16 00:56:01.837079 containerd[2825]: time="2025-07-16T00:56:01.837058467Z" level=info msg="StartContainer for \"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\"" Jul 16 00:56:01.838371 containerd[2825]: time="2025-07-16T00:56:01.838346723Z" level=info msg="connecting to shim 56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9" address="unix:///run/containerd/s/6f8be7dac0c3b8533dae4e163ebfb0b841ffbcd16a31311bd9594608da9633bc" protocol=ttrpc version=3 Jul 16 00:56:01.862734 systemd[1]: Started cri-containerd-56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9.scope - libcontainer container 56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9. Jul 16 00:56:01.890340 containerd[2825]: time="2025-07-16T00:56:01.890311355Z" level=info msg="StartContainer for \"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\" returns successfully" Jul 16 00:56:02.277359 containerd[2825]: time="2025-07-16T00:56:02.277314909Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:56:02.279022 systemd[1]: cri-containerd-56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9.scope: Deactivated successfully. Jul 16 00:56:02.279332 systemd[1]: cri-containerd-56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9.scope: Consumed 941ms CPU time, 200M memory peak, 165.8M written to disk. 
Jul 16 00:56:02.280471 containerd[2825]: time="2025-07-16T00:56:02.280435254Z" level=info msg="received exit event container_id:\"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\" id:\"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\" pid:5431 exited_at:{seconds:1752627362 nanos:280281937}" Jul 16 00:56:02.280558 containerd[2825]: time="2025-07-16T00:56:02.280533532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\" id:\"56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9\" pid:5431 exited_at:{seconds:1752627362 nanos:280281937}" Jul 16 00:56:02.295422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9-rootfs.mount: Deactivated successfully. Jul 16 00:56:02.336523 kubelet[4338]: I0716 00:56:02.336494 4338 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 16 00:56:02.355281 systemd[1]: Created slice kubepods-burstable-podcaf434f7_0263_451b_ae16_75908f12cf47.slice - libcontainer container kubepods-burstable-podcaf434f7_0263_451b_ae16_75908f12cf47.slice. Jul 16 00:56:02.359532 systemd[1]: Created slice kubepods-burstable-podd85d0c4c_a0f0_4004_adda_490a66de2839.slice - libcontainer container kubepods-burstable-podd85d0c4c_a0f0_4004_adda_490a66de2839.slice. Jul 16 00:56:02.363471 systemd[1]: Created slice kubepods-besteffort-pod83ce170b_697d_4094_a118_3f839e2f6019.slice - libcontainer container kubepods-besteffort-pod83ce170b_697d_4094_a118_3f839e2f6019.slice. Jul 16 00:56:02.367615 systemd[1]: Created slice kubepods-besteffort-pod0d200db2_ad4d_452e_b27a_8044dc6e0ad8.slice - libcontainer container kubepods-besteffort-pod0d200db2_ad4d_452e_b27a_8044dc6e0ad8.slice. Jul 16 00:56:02.371377 systemd[1]: Created slice kubepods-besteffort-podd9227393_fefc_4037_9b2f_786497754f59.slice - libcontainer container kubepods-besteffort-podd9227393_fefc_4037_9b2f_786497754f59.slice. Jul 16 00:56:02.375097 systemd[1]: Created slice kubepods-besteffort-podb8773cba_618b_4488_91dc_feaafed062e2.slice - libcontainer container kubepods-besteffort-podb8773cba_618b_4488_91dc_feaafed062e2.slice. Jul 16 00:56:02.378715 systemd[1]: Created slice kubepods-besteffort-poda9a31aa9_77c4_42ea_9bd8_4aaf37290d9b.slice - libcontainer container kubepods-besteffort-poda9a31aa9_77c4_42ea_9bd8_4aaf37290d9b.slice. Jul 16 00:56:02.531957 systemd[1]: Created slice kubepods-besteffort-pod8cdc2fb9_6825_4b25_99b7_48a3daeb5053.slice - libcontainer container kubepods-besteffort-pod8cdc2fb9_6825_4b25_99b7_48a3daeb5053.slice. 
Jul 16 00:56:02.533663 containerd[2825]: time="2025-07-16T00:56:02.533634593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb2b4,Uid:8cdc2fb9-6825-4b25-99b7-48a3daeb5053,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:02.540933 kubelet[4338]: I0716 00:56:02.540908 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8srp\" (UniqueName: \"kubernetes.io/projected/d85d0c4c-a0f0-4004-adda-490a66de2839-kube-api-access-w8srp\") pod \"coredns-7c65d6cfc9-c8s82\" (UID: \"d85d0c4c-a0f0-4004-adda-490a66de2839\") " pod="kube-system/coredns-7c65d6cfc9-c8s82" Jul 16 00:56:02.540992 kubelet[4338]: I0716 00:56:02.540964 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b8773cba-618b-4488-91dc-feaafed062e2-goldmane-key-pair\") pod \"goldmane-58fd7646b9-267hx\" (UID: \"b8773cba-618b-4488-91dc-feaafed062e2\") " pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.541031 kubelet[4338]: I0716 00:56:02.541017 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d200db2-ad4d-452e-b27a-8044dc6e0ad8-calico-apiserver-certs\") pod \"calico-apiserver-fc6b987ff-ct8q7\" (UID: \"0d200db2-ad4d-452e-b27a-8044dc6e0ad8\") " pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" Jul 16 00:56:02.541058 kubelet[4338]: I0716 00:56:02.541037 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cwh9\" (UniqueName: \"kubernetes.io/projected/b8773cba-618b-4488-91dc-feaafed062e2-kube-api-access-9cwh9\") pod \"goldmane-58fd7646b9-267hx\" (UID: \"b8773cba-618b-4488-91dc-feaafed062e2\") " pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.541125 kubelet[4338]: I0716 00:56:02.541055 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ce170b-697d-4094-a118-3f839e2f6019-tigera-ca-bundle\") pod \"calico-kube-controllers-8669c847c4-jckr2\" (UID: \"83ce170b-697d-4094-a118-3f839e2f6019\") " pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" Jul 16 00:56:02.541125 kubelet[4338]: I0716 00:56:02.541075 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29b89\" (UniqueName: \"kubernetes.io/projected/0d200db2-ad4d-452e-b27a-8044dc6e0ad8-kube-api-access-29b89\") pod \"calico-apiserver-fc6b987ff-ct8q7\" (UID: \"0d200db2-ad4d-452e-b27a-8044dc6e0ad8\") " pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" Jul 16 00:56:02.541125 kubelet[4338]: I0716 00:56:02.541095 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8773cba-618b-4488-91dc-feaafed062e2-config\") pod \"goldmane-58fd7646b9-267hx\" (UID: \"b8773cba-618b-4488-91dc-feaafed062e2\") " pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.541125 kubelet[4338]: I0716 00:56:02.541116 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rpkn\" (UniqueName: \"kubernetes.io/projected/d9227393-fefc-4037-9b2f-786497754f59-kube-api-access-2rpkn\") pod \"whisker-7777b8d9c-vgkpq\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " 
pod="calico-system/whisker-7777b8d9c-vgkpq" Jul 16 00:56:02.541202 kubelet[4338]: I0716 00:56:02.541157 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9227393-fefc-4037-9b2f-786497754f59-whisker-backend-key-pair\") pod \"whisker-7777b8d9c-vgkpq\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " pod="calico-system/whisker-7777b8d9c-vgkpq" Jul 16 00:56:02.541202 kubelet[4338]: I0716 00:56:02.541186 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtsfr\" (UniqueName: \"kubernetes.io/projected/caf434f7-0263-451b-ae16-75908f12cf47-kube-api-access-dtsfr\") pod \"coredns-7c65d6cfc9-4rrfs\" (UID: \"caf434f7-0263-451b-ae16-75908f12cf47\") " pod="kube-system/coredns-7c65d6cfc9-4rrfs" Jul 16 00:56:02.541246 kubelet[4338]: I0716 00:56:02.541203 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8773cba-618b-4488-91dc-feaafed062e2-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-267hx\" (UID: \"b8773cba-618b-4488-91dc-feaafed062e2\") " pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.541246 kubelet[4338]: I0716 00:56:02.541223 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdl5b\" (UniqueName: \"kubernetes.io/projected/83ce170b-697d-4094-a118-3f839e2f6019-kube-api-access-kdl5b\") pod \"calico-kube-controllers-8669c847c4-jckr2\" (UID: \"83ce170b-697d-4094-a118-3f839e2f6019\") " pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" Jul 16 00:56:02.541285 kubelet[4338]: I0716 00:56:02.541244 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9227393-fefc-4037-9b2f-786497754f59-whisker-ca-bundle\") pod \"whisker-7777b8d9c-vgkpq\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " pod="calico-system/whisker-7777b8d9c-vgkpq" Jul 16 00:56:02.541285 kubelet[4338]: I0716 00:56:02.541265 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xbx4\" (UniqueName: \"kubernetes.io/projected/a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b-kube-api-access-8xbx4\") pod \"calico-apiserver-fc6b987ff-ql4qk\" (UID: \"a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b\") " pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" Jul 16 00:56:02.541337 kubelet[4338]: I0716 00:56:02.541314 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b-calico-apiserver-certs\") pod \"calico-apiserver-fc6b987ff-ql4qk\" (UID: \"a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b\") " pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" Jul 16 00:56:02.541371 kubelet[4338]: I0716 00:56:02.541354 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d85d0c4c-a0f0-4004-adda-490a66de2839-config-volume\") pod \"coredns-7c65d6cfc9-c8s82\" (UID: \"d85d0c4c-a0f0-4004-adda-490a66de2839\") " pod="kube-system/coredns-7c65d6cfc9-c8s82" Jul 16 00:56:02.541397 kubelet[4338]: I0716 00:56:02.541376 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf434f7-0263-451b-ae16-75908f12cf47-config-volume\") pod \"coredns-7c65d6cfc9-4rrfs\" (UID: \"caf434f7-0263-451b-ae16-75908f12cf47\") " pod="kube-system/coredns-7c65d6cfc9-4rrfs" Jul 16 00:56:02.569075 containerd[2825]: time="2025-07-16T00:56:02.569047454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 16 00:56:02.590101 containerd[2825]: time="2025-07-16T00:56:02.590058327Z" level=error msg="Failed to destroy network for sandbox \"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.590471 containerd[2825]: time="2025-07-16T00:56:02.590441641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb2b4,Uid:8cdc2fb9-6825-4b25-99b7-48a3daeb5053,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.590668 kubelet[4338]: E0716 00:56:02.590617 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.590898 kubelet[4338]: E0716 00:56:02.590696 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:56:02.590898 kubelet[4338]: E0716 00:56:02.590715 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hb2b4" Jul 16 00:56:02.590898 kubelet[4338]: E0716 00:56:02.590753 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hb2b4_calico-system(8cdc2fb9-6825-4b25-99b7-48a3daeb5053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hb2b4_calico-system(8cdc2fb9-6825-4b25-99b7-48a3daeb5053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15e0326056578663a498b3c18036951ef5be9ae4e63e194248d0e82a53335af8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hb2b4" 
podUID="8cdc2fb9-6825-4b25-99b7-48a3daeb5053" Jul 16 00:56:02.658662 containerd[2825]: time="2025-07-16T00:56:02.658619530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rrfs,Uid:caf434f7-0263-451b-ae16-75908f12cf47,Namespace:kube-system,Attempt:0,}" Jul 16 00:56:02.662045 containerd[2825]: time="2025-07-16T00:56:02.662014231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8s82,Uid:d85d0c4c-a0f0-4004-adda-490a66de2839,Namespace:kube-system,Attempt:0,}" Jul 16 00:56:02.665526 containerd[2825]: time="2025-07-16T00:56:02.665499650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8669c847c4-jckr2,Uid:83ce170b-697d-4094-a118-3f839e2f6019,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:02.669953 containerd[2825]: time="2025-07-16T00:56:02.669926373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ct8q7,Uid:0d200db2-ad4d-452e-b27a-8044dc6e0ad8,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:56:02.673484 containerd[2825]: time="2025-07-16T00:56:02.673410552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7777b8d9c-vgkpq,Uid:d9227393-fefc-4037-9b2f-786497754f59,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:02.677934 containerd[2825]: time="2025-07-16T00:56:02.677903153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-267hx,Uid:b8773cba-618b-4488-91dc-feaafed062e2,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:02.681791 containerd[2825]: time="2025-07-16T00:56:02.681758206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ql4qk,Uid:a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:56:02.699094 containerd[2825]: time="2025-07-16T00:56:02.699041664Z" level=error msg="Failed to destroy network for sandbox \"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.701795 containerd[2825]: time="2025-07-16T00:56:02.701755977Z" level=error msg="Failed to destroy network for sandbox \"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.705936 containerd[2825]: time="2025-07-16T00:56:02.705899945Z" level=error msg="Failed to destroy network for sandbox \"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707337 containerd[2825]: time="2025-07-16T00:56:02.707294360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rrfs,Uid:caf434f7-0263-451b-ae16-75908f12cf47,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707469 
containerd[2825]: time="2025-07-16T00:56:02.707439278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8s82,Uid:d85d0c4c-a0f0-4004-adda-490a66de2839,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707580 kubelet[4338]: E0716 00:56:02.707540 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707629 kubelet[4338]: E0716 00:56:02.707588 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707629 kubelet[4338]: E0716 00:56:02.707606 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4rrfs" Jul 16 00:56:02.707679 kubelet[4338]: E0716 00:56:02.707625 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4rrfs" Jul 16 00:56:02.707679 kubelet[4338]: E0716 00:56:02.707631 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c8s82" Jul 16 00:56:02.707679 kubelet[4338]: E0716 00:56:02.707649 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-c8s82" Jul 16 00:56:02.707742 containerd[2825]: time="2025-07-16T00:56:02.707619195Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-8669c847c4-jckr2,Uid:83ce170b-697d-4094-a118-3f839e2f6019,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707781 kubelet[4338]: E0716 00:56:02.707667 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4rrfs_kube-system(caf434f7-0263-451b-ae16-75908f12cf47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4rrfs_kube-system(caf434f7-0263-451b-ae16-75908f12cf47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b2c08caa7da7905582a0993b03bee5ff60b62be452822894d166bce717a1a08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4rrfs" podUID="caf434f7-0263-451b-ae16-75908f12cf47" Jul 16 00:56:02.707781 kubelet[4338]: E0716 00:56:02.707684 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-c8s82_kube-system(d85d0c4c-a0f0-4004-adda-490a66de2839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-c8s82_kube-system(d85d0c4c-a0f0-4004-adda-490a66de2839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2a65b9e7e7d116f8351580e20a6ad0158a84bd06e7b79b662304a62761ecff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-c8s82" podUID="d85d0c4c-a0f0-4004-adda-490a66de2839" Jul 16 00:56:02.707781 kubelet[4338]: E0716 00:56:02.707755 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.707866 kubelet[4338]: E0716 00:56:02.707800 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" Jul 16 00:56:02.707866 kubelet[4338]: E0716 00:56:02.707816 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" Jul 16 
00:56:02.707866 kubelet[4338]: E0716 00:56:02.707849 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8669c847c4-jckr2_calico-system(83ce170b-697d-4094-a118-3f839e2f6019)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8669c847c4-jckr2_calico-system(83ce170b-697d-4094-a118-3f839e2f6019)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e71a129a1e32e9257ffcf67746d1a7eaf1ea62dc09d814f2151ec08d36e9bed5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" podUID="83ce170b-697d-4094-a118-3f839e2f6019" Jul 16 00:56:02.714670 containerd[2825]: time="2025-07-16T00:56:02.714633312Z" level=error msg="Failed to destroy network for sandbox \"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715042 containerd[2825]: time="2025-07-16T00:56:02.715010346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ct8q7,Uid:0d200db2-ad4d-452e-b27a-8044dc6e0ad8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715182 containerd[2825]: time="2025-07-16T00:56:02.715145103Z" level=error msg="Failed to destroy network for sandbox \"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715225 kubelet[4338]: E0716 00:56:02.715172 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715225 kubelet[4338]: E0716 00:56:02.715215 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" Jul 16 00:56:02.715284 kubelet[4338]: E0716 00:56:02.715233 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" Jul 16 00:56:02.715284 kubelet[4338]: E0716 00:56:02.715264 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fc6b987ff-ct8q7_calico-apiserver(0d200db2-ad4d-452e-b27a-8044dc6e0ad8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc6b987ff-ct8q7_calico-apiserver(0d200db2-ad4d-452e-b27a-8044dc6e0ad8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91fa346626324e45eed5052a51d6a557b3afaf66ee944c4cf7a3c25295e31626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" podUID="0d200db2-ad4d-452e-b27a-8044dc6e0ad8" Jul 16 00:56:02.715489 containerd[2825]: time="2025-07-16T00:56:02.715461378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7777b8d9c-vgkpq,Uid:d9227393-fefc-4037-9b2f-786497754f59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715599 kubelet[4338]: E0716 00:56:02.715576 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.715643 kubelet[4338]: E0716 00:56:02.715615 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7777b8d9c-vgkpq" Jul 16 00:56:02.715643 kubelet[4338]: E0716 00:56:02.715631 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7777b8d9c-vgkpq" Jul 16 00:56:02.715682 kubelet[4338]: E0716 00:56:02.715666 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7777b8d9c-vgkpq_calico-system(d9227393-fefc-4037-9b2f-786497754f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7777b8d9c-vgkpq_calico-system(d9227393-fefc-4037-9b2f-786497754f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0f9601498d037fc4f978a1e4e6a97850f4e043a8e694c31e773bf6986ae8f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7777b8d9c-vgkpq" podUID="d9227393-fefc-4037-9b2f-786497754f59" Jul 16 00:56:02.719457 containerd[2825]: time="2025-07-16T00:56:02.719425628Z" level=error msg="Failed to destroy network for sandbox \"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.719946 containerd[2825]: time="2025-07-16T00:56:02.719917540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-267hx,Uid:b8773cba-618b-4488-91dc-feaafed062e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.720058 kubelet[4338]: E0716 00:56:02.720038 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.720086 kubelet[4338]: E0716 00:56:02.720070 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.720117 kubelet[4338]: E0716 00:56:02.720085 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-267hx" Jul 16 00:56:02.720143 kubelet[4338]: E0716 00:56:02.720116 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-267hx_calico-system(b8773cba-618b-4488-91dc-feaafed062e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-267hx_calico-system(b8773cba-618b-4488-91dc-feaafed062e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85e080ce70673ab5333015dacbf15db67524bf2023f67cda9327f7be976ddce7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-267hx" podUID="b8773cba-618b-4488-91dc-feaafed062e2" Jul 16 00:56:02.722226 containerd[2825]: time="2025-07-16T00:56:02.722200060Z" level=error msg="Failed to destroy network for sandbox 
\"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.722515 containerd[2825]: time="2025-07-16T00:56:02.722491975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ql4qk,Uid:a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.722640 kubelet[4338]: E0716 00:56:02.722614 4338 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:56:02.722667 kubelet[4338]: E0716 00:56:02.722654 4338 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" Jul 16 00:56:02.722690 kubelet[4338]: E0716 00:56:02.722669 4338 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" Jul 16 00:56:02.722717 kubelet[4338]: E0716 00:56:02.722698 4338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-fc6b987ff-ql4qk_calico-apiserver(a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-fc6b987ff-ql4qk_calico-apiserver(a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ad1f429ae8731bb9523a91448a2c5353bcace2dc633bd17584e80bc9445e44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" podUID="a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b" Jul 16 00:56:02.840435 systemd[1]: run-netns-cni\x2d89dc4141\x2dcec5\x2d762e\x2db4f8\x2d13478c9b2910.mount: Deactivated successfully. Jul 16 00:56:05.393224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609075295.mount: Deactivated successfully. 
Jul 16 00:56:05.410261 containerd[2825]: time="2025-07-16T00:56:05.410227864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:05.410453 containerd[2825]: time="2025-07-16T00:56:05.410281023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 16 00:56:05.410880 containerd[2825]: time="2025-07-16T00:56:05.410862534Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:05.412182 containerd[2825]: time="2025-07-16T00:56:05.412160676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:05.412762 containerd[2825]: time="2025-07-16T00:56:05.412740587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 2.843659734s" Jul 16 00:56:05.412784 containerd[2825]: time="2025-07-16T00:56:05.412769107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 16 00:56:05.418166 containerd[2825]: time="2025-07-16T00:56:05.418144910Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 00:56:05.435352 containerd[2825]: time="2025-07-16T00:56:05.435320863Z" level=info msg="Container ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:05.443981 containerd[2825]: time="2025-07-16T00:56:05.443946938Z" level=info msg="CreateContainer within sandbox \"059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\"" Jul 16 00:56:05.444272 containerd[2825]: time="2025-07-16T00:56:05.444250094Z" level=info msg="StartContainer for \"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\"" Jul 16 00:56:05.445644 containerd[2825]: time="2025-07-16T00:56:05.445620874Z" level=info msg="connecting to shim ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb" address="unix:///run/containerd/s/6f8be7dac0c3b8533dae4e163ebfb0b841ffbcd16a31311bd9594608da9633bc" protocol=ttrpc version=3 Jul 16 00:56:05.475746 systemd[1]: Started cri-containerd-ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb.scope - libcontainer container ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb. 
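
Note: for scale, the calico/node pull above reports 152544771 bytes in 2.843659734s, roughly 53 MB/s averaged over that window (a back-of-the-envelope figure only; the log does not separate registry transfer from decompression). The snippet just does the division, with the values copied from the record:

    package main

    import "fmt"

    func main() {
        // Figures from the "Pulled image ... calico/node:v3.30.2" record above.
        const bytes = 152544771.0
        const seconds = 2.843659734
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytes/seconds/1e6, bytes/seconds/(1<<20))
    }
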
Jul 16 00:56:05.505552 containerd[2825]: time="2025-07-16T00:56:05.505524772Z" level=info msg="StartContainer for \"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" returns successfully" Jul 16 00:56:05.587570 kubelet[4338]: I0716 00:56:05.587523 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9256d" podStartSLOduration=1.088981262 podStartE2EDuration="7.587506353s" podCreationTimestamp="2025-07-16 00:55:58 +0000 UTC" firstStartedPulling="2025-07-16 00:55:58.914786888 +0000 UTC m=+17.456976422" lastFinishedPulling="2025-07-16 00:56:05.413311979 +0000 UTC m=+23.955501513" observedRunningTime="2025-07-16 00:56:05.587085439 +0000 UTC m=+24.129274973" watchObservedRunningTime="2025-07-16 00:56:05.587506353 +0000 UTC m=+24.129695847" Jul 16 00:56:05.635718 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 16 00:56:05.635757 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 16 00:56:05.859471 kubelet[4338]: I0716 00:56:05.859432 4338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9227393-fefc-4037-9b2f-786497754f59-whisker-backend-key-pair\") pod \"d9227393-fefc-4037-9b2f-786497754f59\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " Jul 16 00:56:05.859471 kubelet[4338]: I0716 00:56:05.859472 4338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rpkn\" (UniqueName: \"kubernetes.io/projected/d9227393-fefc-4037-9b2f-786497754f59-kube-api-access-2rpkn\") pod \"d9227393-fefc-4037-9b2f-786497754f59\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " Jul 16 00:56:05.859614 kubelet[4338]: I0716 00:56:05.859497 4338 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9227393-fefc-4037-9b2f-786497754f59-whisker-ca-bundle\") pod \"d9227393-fefc-4037-9b2f-786497754f59\" (UID: \"d9227393-fefc-4037-9b2f-786497754f59\") " Jul 16 00:56:05.859864 kubelet[4338]: I0716 00:56:05.859844 4338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9227393-fefc-4037-9b2f-786497754f59-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d9227393-fefc-4037-9b2f-786497754f59" (UID: "d9227393-fefc-4037-9b2f-786497754f59"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 00:56:05.861688 kubelet[4338]: I0716 00:56:05.861661 4338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9227393-fefc-4037-9b2f-786497754f59-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d9227393-fefc-4037-9b2f-786497754f59" (UID: "d9227393-fefc-4037-9b2f-786497754f59"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 00:56:05.861773 kubelet[4338]: I0716 00:56:05.861747 4338 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9227393-fefc-4037-9b2f-786497754f59-kube-api-access-2rpkn" (OuterVolumeSpecName: "kube-api-access-2rpkn") pod "d9227393-fefc-4037-9b2f-786497754f59" (UID: "d9227393-fefc-4037-9b2f-786497754f59"). InnerVolumeSpecName "kube-api-access-2rpkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:56:05.960076 kubelet[4338]: I0716 00:56:05.960050 4338 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rpkn\" (UniqueName: \"kubernetes.io/projected/d9227393-fefc-4037-9b2f-786497754f59-kube-api-access-2rpkn\") on node \"ci-4372.0.1-n-4904b64135\" DevicePath \"\"" Jul 16 00:56:05.960076 kubelet[4338]: I0716 00:56:05.960070 4338 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9227393-fefc-4037-9b2f-786497754f59-whisker-ca-bundle\") on node \"ci-4372.0.1-n-4904b64135\" DevicePath \"\"" Jul 16 00:56:05.960172 kubelet[4338]: I0716 00:56:05.960081 4338 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9227393-fefc-4037-9b2f-786497754f59-whisker-backend-key-pair\") on node \"ci-4372.0.1-n-4904b64135\" DevicePath \"\"" Jul 16 00:56:06.394113 systemd[1]: var-lib-kubelet-pods-d9227393\x2dfefc\x2d4037\x2d9b2f\x2d786497754f59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2rpkn.mount: Deactivated successfully. Jul 16 00:56:06.394200 systemd[1]: var-lib-kubelet-pods-d9227393\x2dfefc\x2d4037\x2d9b2f\x2d786497754f59-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 16 00:56:06.577443 kubelet[4338]: I0716 00:56:06.577416 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:06.581617 systemd[1]: Removed slice kubepods-besteffort-podd9227393_fefc_4037_9b2f_786497754f59.slice - libcontainer container kubepods-besteffort-podd9227393_fefc_4037_9b2f_786497754f59.slice. Jul 16 00:56:06.607417 systemd[1]: Created slice kubepods-besteffort-pod332cc0f4_e091_4c5f_8c7d_4146e21b7ad1.slice - libcontainer container kubepods-besteffort-pod332cc0f4_e091_4c5f_8c7d_4146e21b7ad1.slice. 
Jul 16 00:56:06.763327 kubelet[4338]: I0716 00:56:06.763287 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/332cc0f4-e091-4c5f-8c7d-4146e21b7ad1-whisker-ca-bundle\") pod \"whisker-784db84b95-cv59g\" (UID: \"332cc0f4-e091-4c5f-8c7d-4146e21b7ad1\") " pod="calico-system/whisker-784db84b95-cv59g" Jul 16 00:56:06.763657 kubelet[4338]: I0716 00:56:06.763347 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/332cc0f4-e091-4c5f-8c7d-4146e21b7ad1-whisker-backend-key-pair\") pod \"whisker-784db84b95-cv59g\" (UID: \"332cc0f4-e091-4c5f-8c7d-4146e21b7ad1\") " pod="calico-system/whisker-784db84b95-cv59g" Jul 16 00:56:06.763657 kubelet[4338]: I0716 00:56:06.763388 4338 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqsgb\" (UniqueName: \"kubernetes.io/projected/332cc0f4-e091-4c5f-8c7d-4146e21b7ad1-kube-api-access-tqsgb\") pod \"whisker-784db84b95-cv59g\" (UID: \"332cc0f4-e091-4c5f-8c7d-4146e21b7ad1\") " pod="calico-system/whisker-784db84b95-cv59g" Jul 16 00:56:06.909807 containerd[2825]: time="2025-07-16T00:56:06.909763862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784db84b95-cv59g,Uid:332cc0f4-e091-4c5f-8c7d-4146e21b7ad1,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:07.010325 systemd-networkd[2730]: cali664b2974ed2: Link UP Jul 16 00:56:07.010513 systemd-networkd[2730]: cali664b2974ed2: Gained carrier Jul 16 00:56:07.018707 containerd[2825]: 2025-07-16 00:56:06.927 [INFO][6190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:07.018707 containerd[2825]: 2025-07-16 00:56:06.941 [INFO][6190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0 whisker-784db84b95- calico-system 332cc0f4-e091-4c5f-8c7d-4146e21b7ad1 867 0 2025-07-16 00:56:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784db84b95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 whisker-784db84b95-cv59g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali664b2974ed2 [] [] }} ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-" Jul 16 00:56:07.018707 containerd[2825]: 2025-07-16 00:56:06.941 [INFO][6190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.018707 containerd[2825]: 2025-07-16 00:56:06.977 [INFO][6217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" HandleID="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Workload="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.977 [INFO][6217] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" HandleID="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Workload="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000698850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"whisker-784db84b95-cv59g", "timestamp":"2025-07-16 00:56:06.977471229 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.977 [INFO][6217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.977 [INFO][6217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.977 [INFO][6217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.985 [INFO][6217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.989 [INFO][6217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.992 [INFO][6217] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.993 [INFO][6217] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.018850 containerd[2825]: 2025-07-16 00:56:06.994 [INFO][6217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:06.994 [INFO][6217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:06.996 [INFO][6217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:06.999 [INFO][6217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:07.002 [INFO][6217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.193/26] block=192.168.95.192/26 handle="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:07.002 [INFO][6217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.193/26] handle="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:07.002 
[INFO][6217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:56:07.019013 containerd[2825]: 2025-07-16 00:56:07.002 [INFO][6217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.193/26] IPv6=[] ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" HandleID="k8s-pod-network.68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Workload="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.019128 containerd[2825]: 2025-07-16 00:56:07.005 [INFO][6190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0", GenerateName:"whisker-784db84b95-", Namespace:"calico-system", SelfLink:"", UID:"332cc0f4-e091-4c5f-8c7d-4146e21b7ad1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784db84b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"whisker-784db84b95-cv59g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali664b2974ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:07.019128 containerd[2825]: 2025-07-16 00:56:07.005 [INFO][6190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.193/32] ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.019191 containerd[2825]: 2025-07-16 00:56:07.005 [INFO][6190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali664b2974ed2 ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.019191 containerd[2825]: 2025-07-16 00:56:07.010 [INFO][6190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.019225 containerd[2825]: 2025-07-16 00:56:07.011 [INFO][6190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0", GenerateName:"whisker-784db84b95-", Namespace:"calico-system", SelfLink:"", UID:"332cc0f4-e091-4c5f-8c7d-4146e21b7ad1", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784db84b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c", Pod:"whisker-784db84b95-cv59g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali664b2974ed2", MAC:"e6:df:85:bb:d6:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:07.019317 containerd[2825]: 2025-07-16 00:56:07.017 [INFO][6190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" Namespace="calico-system" Pod="whisker-784db84b95-cv59g" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-whisker--784db84b95--cv59g-eth0" Jul 16 00:56:07.030435 containerd[2825]: time="2025-07-16T00:56:07.030397379Z" level=info msg="connecting to shim 68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c" address="unix:///run/containerd/s/6a0975cade1ebe1d6a390ad20efab732cb0481973e62ad85f5016e2bdfec86b7" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:07.055737 systemd[1]: Started cri-containerd-68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c.scope - libcontainer container 68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c. 
Jul 16 00:56:07.081972 containerd[2825]: time="2025-07-16T00:56:07.081945407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784db84b95-cv59g,Uid:332cc0f4-e091-4c5f-8c7d-4146e21b7ad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c\"" Jul 16 00:56:07.082953 containerd[2825]: time="2025-07-16T00:56:07.082938835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 16 00:56:07.401063 containerd[2825]: time="2025-07-16T00:56:07.400976053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:07.401063 containerd[2825]: time="2025-07-16T00:56:07.401029732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 16 00:56:07.401626 containerd[2825]: time="2025-07-16T00:56:07.401608565Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:07.403319 containerd[2825]: time="2025-07-16T00:56:07.403300543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:07.404008 containerd[2825]: time="2025-07-16T00:56:07.403986135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 321.023461ms" Jul 16 00:56:07.404034 containerd[2825]: time="2025-07-16T00:56:07.404014574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 16 00:56:07.405574 containerd[2825]: time="2025-07-16T00:56:07.405549115Z" level=info msg="CreateContainer within sandbox \"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 16 00:56:07.409513 containerd[2825]: time="2025-07-16T00:56:07.408784674Z" level=info msg="Container d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:07.412148 containerd[2825]: time="2025-07-16T00:56:07.412121032Z" level=info msg="CreateContainer within sandbox \"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6\"" Jul 16 00:56:07.412459 containerd[2825]: time="2025-07-16T00:56:07.412440668Z" level=info msg="StartContainer for \"d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6\"" Jul 16 00:56:07.413362 containerd[2825]: time="2025-07-16T00:56:07.413341296Z" level=info msg="connecting to shim d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6" address="unix:///run/containerd/s/6a0975cade1ebe1d6a390ad20efab732cb0481973e62ad85f5016e2bdfec86b7" protocol=ttrpc version=3 Jul 16 00:56:07.432686 systemd[1]: Started cri-containerd-d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6.scope - 
libcontainer container d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6. Jul 16 00:56:07.475334 containerd[2825]: time="2025-07-16T00:56:07.475301273Z" level=info msg="StartContainer for \"d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6\" returns successfully" Jul 16 00:56:07.476162 containerd[2825]: time="2025-07-16T00:56:07.476131182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 16 00:56:07.530373 kubelet[4338]: I0716 00:56:07.530305 4338 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9227393-fefc-4037-9b2f-786497754f59" path="/var/lib/kubelet/pods/d9227393-fefc-4037-9b2f-786497754f59/volumes" Jul 16 00:56:08.106839 containerd[2825]: time="2025-07-16T00:56:08.106800970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:08.107190 containerd[2825]: time="2025-07-16T00:56:08.106871129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 16 00:56:08.107453 containerd[2825]: time="2025-07-16T00:56:08.107434283Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:08.109194 containerd[2825]: time="2025-07-16T00:56:08.109170142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:08.109906 containerd[2825]: time="2025-07-16T00:56:08.109886653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 633.723432ms" Jul 16 00:56:08.109931 containerd[2825]: time="2025-07-16T00:56:08.109913213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 16 00:56:08.111515 containerd[2825]: time="2025-07-16T00:56:08.111492314Z" level=info msg="CreateContainer within sandbox \"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 16 00:56:08.114949 containerd[2825]: time="2025-07-16T00:56:08.114921394Z" level=info msg="Container b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:08.118494 containerd[2825]: time="2025-07-16T00:56:08.118467992Z" level=info msg="CreateContainer within sandbox \"68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c\"" Jul 16 00:56:08.118841 containerd[2825]: time="2025-07-16T00:56:08.118822428Z" level=info msg="StartContainer for \"b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c\"" Jul 16 00:56:08.119777 containerd[2825]: time="2025-07-16T00:56:08.119753257Z" level=info msg="connecting to shim 
b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c" address="unix:///run/containerd/s/6a0975cade1ebe1d6a390ad20efab732cb0481973e62ad85f5016e2bdfec86b7" protocol=ttrpc version=3 Jul 16 00:56:08.150744 systemd[1]: Started cri-containerd-b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c.scope - libcontainer container b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c. Jul 16 00:56:08.180363 containerd[2825]: time="2025-07-16T00:56:08.180317499Z" level=info msg="StartContainer for \"b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c\" returns successfully" Jul 16 00:56:08.394313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169388447.mount: Deactivated successfully. Jul 16 00:56:08.732646 systemd-networkd[2730]: cali664b2974ed2: Gained IPv6LL Jul 16 00:56:13.528425 containerd[2825]: time="2025-07-16T00:56:13.528384578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rrfs,Uid:caf434f7-0263-451b-ae16-75908f12cf47,Namespace:kube-system,Attempt:0,}" Jul 16 00:56:13.528951 containerd[2825]: time="2025-07-16T00:56:13.528464257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb2b4,Uid:8cdc2fb9-6825-4b25-99b7-48a3daeb5053,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:13.529023 containerd[2825]: time="2025-07-16T00:56:13.528554297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ct8q7,Uid:0d200db2-ad4d-452e-b27a-8044dc6e0ad8,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:56:13.529023 containerd[2825]: time="2025-07-16T00:56:13.528647216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ql4qk,Uid:a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:56:13.606160 systemd-networkd[2730]: cali686c6dfc573: Link UP Jul 16 00:56:13.606406 systemd-networkd[2730]: cali686c6dfc573: Gained carrier Jul 16 00:56:13.613719 kubelet[4338]: I0716 00:56:13.613670 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-784db84b95-cv59g" podStartSLOduration=6.585900077 podStartE2EDuration="7.613652686s" podCreationTimestamp="2025-07-16 00:56:06 +0000 UTC" firstStartedPulling="2025-07-16 00:56:07.082754797 +0000 UTC m=+25.624944331" lastFinishedPulling="2025-07-16 00:56:08.110507406 +0000 UTC m=+26.652696940" observedRunningTime="2025-07-16 00:56:08.591067709 +0000 UTC m=+27.133257243" watchObservedRunningTime="2025-07-16 00:56:13.613652686 +0000 UTC m=+32.155842220" Jul 16 00:56:13.614607 containerd[2825]: 2025-07-16 00:56:13.547 [INFO][6805] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:13.614607 containerd[2825]: 2025-07-16 00:56:13.558 [INFO][6805] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0 coredns-7c65d6cfc9- kube-system caf434f7-0263-451b-ae16-75908f12cf47 800 0 2025-07-16 00:55:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 coredns-7c65d6cfc9-4rrfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali686c6dfc573 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-" Jul 16 00:56:13.614607 containerd[2825]: 2025-07-16 00:56:13.558 [INFO][6805] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.614607 containerd[2825]: 2025-07-16 00:56:13.578 [INFO][6904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" HandleID="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.578 [INFO][6904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" HandleID="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364930), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"coredns-7c65d6cfc9-4rrfs", "timestamp":"2025-07-16 00:56:13.578734426 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.578 [INFO][6904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.578 [INFO][6904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.578 [INFO][6904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.586 [INFO][6904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.590 [INFO][6904] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.593 [INFO][6904] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.594 [INFO][6904] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614804 containerd[2825]: 2025-07-16 00:56:13.596 [INFO][6904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.596 [INFO][6904] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.597 [INFO][6904] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.599 [INFO][6904] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6904] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.194/26] block=192.168.95.192/26 handle="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.194/26] handle="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:56:13.614971 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.194/26] IPv6=[] ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" HandleID="k8s-pod-network.2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.604 [INFO][6805] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caf434f7-0263-451b-ae16-75908f12cf47", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"coredns-7c65d6cfc9-4rrfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c6dfc573", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.604 [INFO][6805] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.194/32] ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.604 [INFO][6805] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali686c6dfc573 ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.606 [INFO][6805] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.606 [INFO][6805] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"caf434f7-0263-451b-ae16-75908f12cf47", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f", Pod:"coredns-7c65d6cfc9-4rrfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali686c6dfc573", MAC:"b2:bb:21:8f:a7:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.615093 containerd[2825]: 2025-07-16 00:56:13.613 [INFO][6805] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4rrfs" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--4rrfs-eth0" Jul 16 00:56:13.624841 containerd[2825]: time="2025-07-16T00:56:13.624809070Z" level=info msg="connecting to shim 2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f" address="unix:///run/containerd/s/bedcee2722f65d4881e7f194d658e331c75a5f3611bc4723705608ba0559260b" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:13.651750 systemd[1]: Started cri-containerd-2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f.scope - libcontainer container 2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f. 
Jul 16 00:56:13.677206 containerd[2825]: time="2025-07-16T00:56:13.677175101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4rrfs,Uid:caf434f7-0263-451b-ae16-75908f12cf47,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f\"" Jul 16 00:56:13.678886 containerd[2825]: time="2025-07-16T00:56:13.678862726Z" level=info msg="CreateContainer within sandbox \"2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:56:13.683308 containerd[2825]: time="2025-07-16T00:56:13.683279648Z" level=info msg="Container 68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:13.685692 containerd[2825]: time="2025-07-16T00:56:13.685667708Z" level=info msg="CreateContainer within sandbox \"2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7\"" Jul 16 00:56:13.686030 containerd[2825]: time="2025-07-16T00:56:13.685999145Z" level=info msg="StartContainer for \"68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7\"" Jul 16 00:56:13.686714 containerd[2825]: time="2025-07-16T00:56:13.686693459Z" level=info msg="connecting to shim 68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7" address="unix:///run/containerd/s/bedcee2722f65d4881e7f194d658e331c75a5f3611bc4723705608ba0559260b" protocol=ttrpc version=3 Jul 16 00:56:13.706048 systemd-networkd[2730]: cali13db363a3b9: Link UP Jul 16 00:56:13.707151 systemd-networkd[2730]: cali13db363a3b9: Gained carrier Jul 16 00:56:13.713688 systemd[1]: Started cri-containerd-68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7.scope - libcontainer container 68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7. 
Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.547 [INFO][6825] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.561 [INFO][6825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0 calico-apiserver-fc6b987ff- calico-apiserver a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b 808 0 2025-07-16 00:55:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc6b987ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 calico-apiserver-fc6b987ff-ql4qk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13db363a3b9 [] [] }} ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.562 [INFO][6825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.580 [INFO][6911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" HandleID="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" HandleID="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006f4290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-4904b64135", "pod":"calico-apiserver-fc6b987ff-ql4qk", "timestamp":"2025-07-16 00:56:13.580941807 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.603 [INFO][6911] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.687 [INFO][6911] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.690 [INFO][6911] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.693 [INFO][6911] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.694 [INFO][6911] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.696 [INFO][6911] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.696 [INFO][6911] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.697 [INFO][6911] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3 Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.699 [INFO][6911] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6911] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.195/26] block=192.168.95.192/26 handle="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6911] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.195/26] handle="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
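[editor's note] In the IPAM entries above, handler [6911] confirms this node's affinity for the block 192.168.95.192/26 and claims 192.168.95.195 from it. A small standard-library illustration of that block membership (values copied from the log; Calico itself does this against its datastore, not with net.ParseCIDR):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address taken from the ipam log entries above.
	_, block, err := net.ParseCIDR("192.168.95.192/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.95.195")

	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 2^(32-26) = 64
	fmt.Printf("%s in block: %v\n", ip, block.Contains(ip))            // true
}
```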
Jul 16 00:56:13.714318 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.195/26] IPv6=[] ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" HandleID="k8s-pod-network.3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.704 [INFO][6825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0", GenerateName:"calico-apiserver-fc6b987ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6b987ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"calico-apiserver-fc6b987ff-ql4qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13db363a3b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.704 [INFO][6825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.195/32] ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.704 [INFO][6825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13db363a3b9 ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.707 [INFO][6825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.707 [INFO][6825] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0", GenerateName:"calico-apiserver-fc6b987ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6b987ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3", Pod:"calico-apiserver-fc6b987ff-ql4qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13db363a3b9", MAC:"ee:67:f2:ad:00:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.714750 containerd[2825]: 2025-07-16 00:56:13.713 [INFO][6825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ql4qk" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ql4qk-eth0" Jul 16 00:56:13.733883 containerd[2825]: time="2025-07-16T00:56:13.733849774Z" level=info msg="StartContainer for \"68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7\" returns successfully" Jul 16 00:56:13.736619 containerd[2825]: time="2025-07-16T00:56:13.736583951Z" level=info msg="connecting to shim 3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3" address="unix:///run/containerd/s/1f13e487f53c9db93bf3d7e2e3dccc815e59b63c17286a1f7f0e9b2c7cb79211" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:13.759695 systemd[1]: Started cri-containerd-3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3.scope - libcontainer container 3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3. 
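[editor's note] The endpoint written to the datastore above records the interface name cali13db363a3b9 and a MAC, and systemd-networkd reports that link gaining carrier. A sketch, assuming the third-party github.com/vishvananda/netlink package and that it is run on this node while the pod exists, for reading the host-side veth state that the log refers to (no claim is made here about which side of the veth pair the recorded MAC belongs to):

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Host-side veth name taken from the Calico dataplane log entries above.
	link, err := netlink.LinkByName("cali13db363a3b9")
	if err != nil {
		log.Fatal(err) // the interface exists only while the pod's netns does
	}

	attrs := link.Attrs()
	fmt.Printf("name=%s mac=%s mtu=%d state=%v\n",
		attrs.Name, attrs.HardwareAddr, attrs.MTU, attrs.OperState)
}
```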
Jul 16 00:56:13.785739 containerd[2825]: time="2025-07-16T00:56:13.785673929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ql4qk,Uid:a9a31aa9-77c4-42ea-9bd8-4aaf37290d9b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3\"" Jul 16 00:56:13.786711 containerd[2825]: time="2025-07-16T00:56:13.786688040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:56:13.811049 systemd-networkd[2730]: caliead4a0c214e: Link UP Jul 16 00:56:13.811242 systemd-networkd[2730]: caliead4a0c214e: Gained carrier Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.547 [INFO][6813] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.562 [INFO][6813] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0 csi-node-driver- calico-system 8cdc2fb9-6825-4b25-99b7-48a3daeb5053 713 0 2025-07-16 00:55:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 csi-node-driver-hb2b4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliead4a0c214e [] [] }} ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.562 [INFO][6813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" HandleID="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Workload="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" HandleID="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Workload="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400072cc00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"csi-node-driver-hb2b4", "timestamp":"2025-07-16 00:56:13.581335884 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.703 [INFO][6912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.787 [INFO][6912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.795 [INFO][6912] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.798 [INFO][6912] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.799 [INFO][6912] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.801 [INFO][6912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.801 [INFO][6912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.802 [INFO][6912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21 Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.804 [INFO][6912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.196/26] block=192.168.95.192/26 handle="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.196/26] handle="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
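[editor's note] The interleaved timestamps above show the host-wide IPAM lock serializing concurrent CNI ADD handlers: handler [6912] logs "About to acquire" at 13.581 but only "Acquired" at 13.703, the moment handler [6911] releases. A toy Go illustration of that serialization pattern only; Calico's real host-wide lock is not an in-process mutex, this just mirrors the ordering visible in the log:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// hostWideIPAMLock stands in for Calico's per-host IPAM lock: concurrent
// CNI ADD handlers block here, so address assignment is serialized.
var hostWideIPAMLock sync.Mutex

func assignAddress(wg *sync.WaitGroup, handler string) {
	defer wg.Done()
	fmt.Println(handler, "about to acquire host-wide IPAM lock")
	hostWideIPAMLock.Lock()
	fmt.Println(handler, "acquired host-wide IPAM lock")
	time.Sleep(50 * time.Millisecond) // stands in for block lookup + IP claim
	fmt.Println(handler, "released host-wide IPAM lock")
	hostWideIPAMLock.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for _, h := range []string{"[6911]", "[6912]", "[6914]"} {
		wg.Add(1)
		go assignAddress(&wg, h)
	}
	wg.Wait()
}
```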
Jul 16 00:56:13.819759 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.196/26] IPv6=[] ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" HandleID="k8s-pod-network.b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Workload="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.809 [INFO][6813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cdc2fb9-6825-4b25-99b7-48a3daeb5053", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"csi-node-driver-hb2b4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliead4a0c214e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.809 [INFO][6813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.196/32] ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.809 [INFO][6813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliead4a0c214e ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.811 [INFO][6813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.811 [INFO][6813] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8cdc2fb9-6825-4b25-99b7-48a3daeb5053", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21", Pod:"csi-node-driver-hb2b4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliead4a0c214e", MAC:"92:c2:64:b8:87:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.820156 containerd[2825]: 2025-07-16 00:56:13.818 [INFO][6813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" Namespace="calico-system" Pod="csi-node-driver-hb2b4" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-csi--node--driver--hb2b4-eth0" Jul 16 00:56:13.829002 containerd[2825]: time="2025-07-16T00:56:13.828976317Z" level=info msg="connecting to shim b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21" address="unix:///run/containerd/s/8bead702ea3587fc9ccd7728d0f0d59a0c3e2e0bc2f38e061bdc25f0c8d1fd06" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:13.862697 systemd[1]: Started cri-containerd-b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21.scope - libcontainer container b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21. 
Jul 16 00:56:13.880789 containerd[2825]: time="2025-07-16T00:56:13.880761473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hb2b4,Uid:8cdc2fb9-6825-4b25-99b7-48a3daeb5053,Namespace:calico-system,Attempt:0,} returns sandbox id \"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21\"" Jul 16 00:56:13.911980 systemd-networkd[2730]: cali318c684f414: Link UP Jul 16 00:56:13.912242 systemd-networkd[2730]: cali318c684f414: Gained carrier Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.547 [INFO][6807] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.562 [INFO][6807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0 calico-apiserver-fc6b987ff- calico-apiserver 0d200db2-ad4d-452e-b27a-8044dc6e0ad8 806 0 2025-07-16 00:55:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:fc6b987ff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 calico-apiserver-fc6b987ff-ct8q7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali318c684f414 [] [] }} ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.562 [INFO][6807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" HandleID="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" HandleID="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006aa7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-4904b64135", "pod":"calico-apiserver-fc6b987ff-ct8q7", "timestamp":"2025-07-16 00:56:13.581608041 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.581 [INFO][6914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.808 [INFO][6914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.887 [INFO][6914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.896 [INFO][6914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.899 [INFO][6914] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.900 [INFO][6914] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.901 [INFO][6914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.902 [INFO][6914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.902 [INFO][6914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9 Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.905 [INFO][6914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.909 [INFO][6914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.197/26] block=192.168.95.192/26 handle="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.909 [INFO][6914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.197/26] handle="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.909 [INFO][6914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:56:13.931282 containerd[2825]: 2025-07-16 00:56:13.909 [INFO][6914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.197/26] IPv6=[] ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" HandleID="k8s-pod-network.0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.910 [INFO][6807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0", GenerateName:"calico-apiserver-fc6b987ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d200db2-ad4d-452e-b27a-8044dc6e0ad8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6b987ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"calico-apiserver-fc6b987ff-ct8q7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali318c684f414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.910 [INFO][6807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.197/32] ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.910 [INFO][6807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali318c684f414 ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.912 [INFO][6807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.912 [INFO][6807] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0", GenerateName:"calico-apiserver-fc6b987ff-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d200db2-ad4d-452e-b27a-8044dc6e0ad8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"fc6b987ff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9", Pod:"calico-apiserver-fc6b987ff-ct8q7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali318c684f414", MAC:"ba:48:7d:48:cb:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:13.931785 containerd[2825]: 2025-07-16 00:56:13.929 [INFO][6807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" Namespace="calico-apiserver" Pod="calico-apiserver-fc6b987ff-ct8q7" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--apiserver--fc6b987ff--ct8q7-eth0" Jul 16 00:56:13.942334 containerd[2825]: time="2025-07-16T00:56:13.942295344Z" level=info msg="connecting to shim 0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9" address="unix:///run/containerd/s/8a92988e56a16975e5e0d69abe7fe798f399e332c4c9a3ecded7df1825e83566" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:13.969697 systemd[1]: Started cri-containerd-0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9.scope - libcontainer container 0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9. 
Jul 16 00:56:14.001288 containerd[2825]: time="2025-07-16T00:56:14.001252478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-fc6b987ff-ct8q7,Uid:0d200db2-ad4d-452e-b27a-8044dc6e0ad8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9\"" Jul 16 00:56:14.528815 containerd[2825]: time="2025-07-16T00:56:14.528776472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8669c847c4-jckr2,Uid:83ce170b-697d-4094-a118-3f839e2f6019,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:14.529105 containerd[2825]: time="2025-07-16T00:56:14.528777952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-267hx,Uid:b8773cba-618b-4488-91dc-feaafed062e2,Namespace:calico-system,Attempt:0,}" Jul 16 00:56:14.537600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924635534.mount: Deactivated successfully. Jul 16 00:56:14.609577 kubelet[4338]: I0716 00:56:14.609510 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4rrfs" podStartSLOduration=27.609492662 podStartE2EDuration="27.609492662s" podCreationTimestamp="2025-07-16 00:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:56:14.609054706 +0000 UTC m=+33.151244240" watchObservedRunningTime="2025-07-16 00:56:14.609492662 +0000 UTC m=+33.151682156" Jul 16 00:56:14.615519 systemd-networkd[2730]: calicde3602a25e: Link UP Jul 16 00:56:14.616863 systemd-networkd[2730]: calicde3602a25e: Gained carrier Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.547 [INFO][7312] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.558 [INFO][7312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0 calico-kube-controllers-8669c847c4- calico-system 83ce170b-697d-4094-a118-3f839e2f6019 809 0 2025-07-16 00:55:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8669c847c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 calico-kube-controllers-8669c847c4-jckr2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicde3602a25e [] [] }} ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.558 [INFO][7312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.579 [INFO][7361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" 
HandleID="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.579 [INFO][7361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" HandleID="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"calico-kube-controllers-8669c847c4-jckr2", "timestamp":"2025-07-16 00:56:14.579596783 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.579 [INFO][7361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.579 [INFO][7361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.579 [INFO][7361] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.587 [INFO][7361] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.590 [INFO][7361] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.593 [INFO][7361] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.594 [INFO][7361] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.596 [INFO][7361] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.596 [INFO][7361] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.597 [INFO][7361] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772 Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.608 [INFO][7361] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.612 [INFO][7361] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.198/26] block=192.168.95.192/26 handle="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" host="ci-4372.0.1-n-4904b64135" Jul 16 
00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.612 [INFO][7361] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.198/26] handle="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.612 [INFO][7361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:56:14.623737 containerd[2825]: 2025-07-16 00:56:14.612 [INFO][7361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.198/26] IPv6=[] ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" HandleID="k8s-pod-network.974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Workload="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.614 [INFO][7312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0", GenerateName:"calico-kube-controllers-8669c847c4-", Namespace:"calico-system", SelfLink:"", UID:"83ce170b-697d-4094-a118-3f839e2f6019", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8669c847c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"calico-kube-controllers-8669c847c4-jckr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicde3602a25e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.614 [INFO][7312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.198/32] ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.614 [INFO][7312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicde3602a25e ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" 
WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.616 [INFO][7312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.617 [INFO][7312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0", GenerateName:"calico-kube-controllers-8669c847c4-", Namespace:"calico-system", SelfLink:"", UID:"83ce170b-697d-4094-a118-3f839e2f6019", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8669c847c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772", Pod:"calico-kube-controllers-8669c847c4-jckr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicde3602a25e", MAC:"fa:3d:39:3a:1a:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:14.624193 containerd[2825]: 2025-07-16 00:56:14.622 [INFO][7312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" Namespace="calico-system" Pod="calico-kube-controllers-8669c847c4-jckr2" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-calico--kube--controllers--8669c847c4--jckr2-eth0" Jul 16 00:56:14.630158 containerd[2825]: time="2025-07-16T00:56:14.630129416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.630239 containerd[2825]: time="2025-07-16T00:56:14.630197336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 16 00:56:14.630813 containerd[2825]: time="2025-07-16T00:56:14.630787611Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.632368 containerd[2825]: time="2025-07-16T00:56:14.632321519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.632974 containerd[2825]: time="2025-07-16T00:56:14.632952353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 846.234954ms" Jul 16 00:56:14.632999 containerd[2825]: time="2025-07-16T00:56:14.632978713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 16 00:56:14.633559 containerd[2825]: time="2025-07-16T00:56:14.633535949Z" level=info msg="connecting to shim 974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772" address="unix:///run/containerd/s/de285c1fbfad267f7f7b3ef2260d3c6d35e479748ed421eab8fe345868cad360" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:14.633666 containerd[2825]: time="2025-07-16T00:56:14.633648668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 16 00:56:14.634405 containerd[2825]: time="2025-07-16T00:56:14.634384942Z" level=info msg="CreateContainer within sandbox \"3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:56:14.637770 containerd[2825]: time="2025-07-16T00:56:14.637744235Z" level=info msg="Container 29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:14.640725 containerd[2825]: time="2025-07-16T00:56:14.640704491Z" level=info msg="CreateContainer within sandbox \"3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7\"" Jul 16 00:56:14.643107 containerd[2825]: time="2025-07-16T00:56:14.643081152Z" level=info msg="StartContainer for \"29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7\"" Jul 16 00:56:14.644051 containerd[2825]: time="2025-07-16T00:56:14.644029304Z" level=info msg="connecting to shim 29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7" address="unix:///run/containerd/s/1f13e487f53c9db93bf3d7e2e3dccc815e59b63c17286a1f7f0e9b2c7cb79211" protocol=ttrpc version=3 Jul 16 00:56:14.658683 systemd[1]: Started cri-containerd-974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772.scope - libcontainer container 974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772. Jul 16 00:56:14.661146 systemd[1]: Started cri-containerd-29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7.scope - libcontainer container 29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7. 
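[editor's note] The pull of ghcr.io/flatcar/calico/apiserver:v3.30.2 above reports 44517149 bytes read in roughly 846 ms for an image of size 45886406. A quick back-of-the-envelope throughput check, using only the figures quoted in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures reported in the containerd pull log entries above.
	bytesRead := 44517149.0
	elapsed, _ := time.ParseDuration("846.234954ms")

	mib := bytesRead / (1024 * 1024)
	fmt.Printf("read %.1f MiB in %v => %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
	// roughly 42.5 MiB in ~846ms, i.e. about 50 MiB/s effective pull rate
}
```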
Jul 16 00:56:14.684594 containerd[2825]: time="2025-07-16T00:56:14.684556898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8669c847c4-jckr2,Uid:83ce170b-697d-4094-a118-3f839e2f6019,Namespace:calico-system,Attempt:0,} returns sandbox id \"974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772\"" Jul 16 00:56:14.688000 containerd[2825]: time="2025-07-16T00:56:14.687975671Z" level=info msg="StartContainer for \"29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7\" returns successfully" Jul 16 00:56:14.708004 systemd-networkd[2730]: cali650e36d6a27: Link UP Jul 16 00:56:14.708265 systemd-networkd[2730]: cali650e36d6a27: Gained carrier Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.548 [INFO][7315] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.562 [INFO][7315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0 goldmane-58fd7646b9- calico-system b8773cba-618b-4488-91dc-feaafed062e2 810 0 2025-07-16 00:55:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 goldmane-58fd7646b9-267hx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali650e36d6a27 [] [] }} ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.562 [INFO][7315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.580 [INFO][7367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" HandleID="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Workload="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.580 [INFO][7367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" HandleID="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Workload="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e3e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"goldmane-58fd7646b9-267hx", "timestamp":"2025-07-16 00:56:14.580311857 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.580 [INFO][7367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.613 [INFO][7367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.613 [INFO][7367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.687 [INFO][7367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.690 [INFO][7367] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.693 [INFO][7367] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.694 [INFO][7367] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.696 [INFO][7367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.696 [INFO][7367] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.697 [INFO][7367] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.700 [INFO][7367] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.704 [INFO][7367] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.199/26] block=192.168.95.192/26 handle="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.704 [INFO][7367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.199/26] handle="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.704 [INFO][7367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
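[editor's note] The kubelet pod_startup_latency_tracker entry earlier in this section reports podStartSLOduration=27.609492662s for coredns-7c65d6cfc9-4rrfs, i.e. the gap between the pod's creationTimestamp (00:55:47) and its observedRunningTime (00:56:14.609). A small sketch reproducing that subtraction from the two timestamps quoted in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps quoted in the kubelet pod_startup_latency_tracker entry.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-07-16 00:55:47 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-07-16 00:56:14.609492662 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("observed startup duration:", running.Sub(created)) // ~27.609s
}
```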
Jul 16 00:56:14.715885 containerd[2825]: 2025-07-16 00:56:14.704 [INFO][7367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.199/26] IPv6=[] ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" HandleID="k8s-pod-network.cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Workload="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.705 [INFO][7315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b8773cba-618b-4488-91dc-feaafed062e2", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"goldmane-58fd7646b9-267hx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali650e36d6a27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.706 [INFO][7315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.199/32] ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.706 [INFO][7315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali650e36d6a27 ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.708 [INFO][7315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.708 [INFO][7315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b8773cba-618b-4488-91dc-feaafed062e2", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac", Pod:"goldmane-58fd7646b9-267hx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali650e36d6a27", MAC:"ea:43:f9:5c:41:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:14.716264 containerd[2825]: 2025-07-16 00:56:14.714 [INFO][7315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" Namespace="calico-system" Pod="goldmane-58fd7646b9-267hx" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-goldmane--58fd7646b9--267hx-eth0" Jul 16 00:56:14.725803 containerd[2825]: time="2025-07-16T00:56:14.725775646Z" level=info msg="connecting to shim cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac" address="unix:///run/containerd/s/954256d28b72b2316d14dc62b5621f3030af884e409c499ddccbcd22bb894e91" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:14.760693 systemd[1]: Started cri-containerd-cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac.scope - libcontainer container cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac. 
Jul 16 00:56:14.787534 containerd[2825]: time="2025-07-16T00:56:14.787445150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-267hx,Uid:b8773cba-618b-4488-91dc-feaafed062e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac\"" Jul 16 00:56:14.941675 systemd-networkd[2730]: cali686c6dfc573: Gained IPv6LL Jul 16 00:56:14.943957 containerd[2825]: time="2025-07-16T00:56:14.943920690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.944017 containerd[2825]: time="2025-07-16T00:56:14.943988690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 16 00:56:14.944653 containerd[2825]: time="2025-07-16T00:56:14.944635405Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.946189 containerd[2825]: time="2025-07-16T00:56:14.946172112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:14.946816 containerd[2825]: time="2025-07-16T00:56:14.946787867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 313.111199ms" Jul 16 00:56:14.946882 containerd[2825]: time="2025-07-16T00:56:14.946822467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 16 00:56:14.947550 containerd[2825]: time="2025-07-16T00:56:14.947531741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:56:14.948422 containerd[2825]: time="2025-07-16T00:56:14.948400414Z" level=info msg="CreateContainer within sandbox \"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 16 00:56:14.953641 containerd[2825]: time="2025-07-16T00:56:14.953615492Z" level=info msg="Container c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:14.957438 containerd[2825]: time="2025-07-16T00:56:14.957411622Z" level=info msg="CreateContainer within sandbox \"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60\"" Jul 16 00:56:14.957807 containerd[2825]: time="2025-07-16T00:56:14.957783779Z" level=info msg="StartContainer for \"c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60\"" Jul 16 00:56:14.959123 containerd[2825]: time="2025-07-16T00:56:14.959099048Z" level=info msg="connecting to shim c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60" address="unix:///run/containerd/s/8bead702ea3587fc9ccd7728d0f0d59a0c3e2e0bc2f38e061bdc25f0c8d1fd06" protocol=ttrpc version=3 Jul 16 00:56:14.989746 systemd[1]: Started 
cri-containerd-c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60.scope - libcontainer container c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60. Jul 16 00:56:15.013793 containerd[2825]: time="2025-07-16T00:56:15.013763654Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:15.013838 containerd[2825]: time="2025-07-16T00:56:15.013814214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 16 00:56:15.015965 containerd[2825]: time="2025-07-16T00:56:15.015943918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 68.385017ms" Jul 16 00:56:15.015994 containerd[2825]: time="2025-07-16T00:56:15.015969838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 16 00:56:15.016573 containerd[2825]: time="2025-07-16T00:56:15.016553113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 00:56:15.017402 containerd[2825]: time="2025-07-16T00:56:15.017375267Z" level=info msg="CreateContainer within sandbox \"0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:56:15.020264 containerd[2825]: time="2025-07-16T00:56:15.020236966Z" level=info msg="StartContainer for \"c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60\" returns successfully" Jul 16 00:56:15.021015 containerd[2825]: time="2025-07-16T00:56:15.020988360Z" level=info msg="Container 161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:15.025168 containerd[2825]: time="2025-07-16T00:56:15.025142409Z" level=info msg="CreateContainer within sandbox \"0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7\"" Jul 16 00:56:15.025520 containerd[2825]: time="2025-07-16T00:56:15.025484206Z" level=info msg="StartContainer for \"161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7\"" Jul 16 00:56:15.026473 containerd[2825]: time="2025-07-16T00:56:15.026443199Z" level=info msg="connecting to shim 161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7" address="unix:///run/containerd/s/8a92988e56a16975e5e0d69abe7fe798f399e332c4c9a3ecded7df1825e83566" protocol=ttrpc version=3 Jul 16 00:56:15.051688 systemd[1]: Started cri-containerd-161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7.scope - libcontainer container 161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7. 
Jul 16 00:56:15.084205 containerd[2825]: time="2025-07-16T00:56:15.084156123Z" level=info msg="StartContainer for \"161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7\" returns successfully" Jul 16 00:56:15.388705 systemd-networkd[2730]: cali13db363a3b9: Gained IPv6LL Jul 16 00:56:15.452630 systemd-networkd[2730]: caliead4a0c214e: Gained IPv6LL Jul 16 00:56:15.606035 kubelet[4338]: I0716 00:56:15.605991 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc6b987ff-ql4qk" podStartSLOduration=20.758962279 podStartE2EDuration="21.605977186s" podCreationTimestamp="2025-07-16 00:55:54 +0000 UTC" firstStartedPulling="2025-07-16 00:56:13.786496442 +0000 UTC m=+32.328685976" lastFinishedPulling="2025-07-16 00:56:14.633511389 +0000 UTC m=+33.175700883" observedRunningTime="2025-07-16 00:56:15.605732067 +0000 UTC m=+34.147921601" watchObservedRunningTime="2025-07-16 00:56:15.605977186 +0000 UTC m=+34.148166720" Jul 16 00:56:15.612434 kubelet[4338]: I0716 00:56:15.612396 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-fc6b987ff-ct8q7" podStartSLOduration=20.598064534 podStartE2EDuration="21.612385497s" podCreationTimestamp="2025-07-16 00:55:54 +0000 UTC" firstStartedPulling="2025-07-16 00:56:14.002112071 +0000 UTC m=+32.544301605" lastFinishedPulling="2025-07-16 00:56:15.016433074 +0000 UTC m=+33.558622568" observedRunningTime="2025-07-16 00:56:15.612208899 +0000 UTC m=+34.154398433" watchObservedRunningTime="2025-07-16 00:56:15.612385497 +0000 UTC m=+34.154575031" Jul 16 00:56:15.900680 systemd-networkd[2730]: cali650e36d6a27: Gained IPv6LL Jul 16 00:56:15.900995 systemd-networkd[2730]: cali318c684f414: Gained IPv6LL Jul 16 00:56:15.920024 containerd[2825]: time="2025-07-16T00:56:15.919989416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:15.920316 containerd[2825]: time="2025-07-16T00:56:15.919999216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 16 00:56:15.920684 containerd[2825]: time="2025-07-16T00:56:15.920665891Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:15.922146 containerd[2825]: time="2025-07-16T00:56:15.922125200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:15.922763 containerd[2825]: time="2025-07-16T00:56:15.922739995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 906.156162ms" Jul 16 00:56:15.922786 containerd[2825]: time="2025-07-16T00:56:15.922770355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 16 00:56:15.923498 containerd[2825]: 
time="2025-07-16T00:56:15.923480230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 00:56:15.927912 containerd[2825]: time="2025-07-16T00:56:15.927890596Z" level=info msg="CreateContainer within sandbox \"974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 00:56:15.931435 containerd[2825]: time="2025-07-16T00:56:15.931411890Z" level=info msg="Container e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:15.934773 containerd[2825]: time="2025-07-16T00:56:15.934750065Z" level=info msg="CreateContainer within sandbox \"974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\"" Jul 16 00:56:15.935067 containerd[2825]: time="2025-07-16T00:56:15.935045822Z" level=info msg="StartContainer for \"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\"" Jul 16 00:56:15.936014 containerd[2825]: time="2025-07-16T00:56:15.935992895Z" level=info msg="connecting to shim e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469" address="unix:///run/containerd/s/de285c1fbfad267f7f7b3ef2260d3c6d35e479748ed421eab8fe345868cad360" protocol=ttrpc version=3 Jul 16 00:56:15.967691 systemd[1]: Started cri-containerd-e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469.scope - libcontainer container e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469. Jul 16 00:56:15.997529 containerd[2825]: time="2025-07-16T00:56:15.997500751Z" level=info msg="StartContainer for \"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" returns successfully" Jul 16 00:56:16.093652 systemd-networkd[2730]: calicde3602a25e: Gained IPv6LL Jul 16 00:56:16.576914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount783257570.mount: Deactivated successfully. 
Jul 16 00:56:16.604993 kubelet[4338]: I0716 00:56:16.604970 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:16.613320 kubelet[4338]: I0716 00:56:16.613279 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8669c847c4-jckr2" podStartSLOduration=17.375306413 podStartE2EDuration="18.613265553s" podCreationTimestamp="2025-07-16 00:55:58 +0000 UTC" firstStartedPulling="2025-07-16 00:56:14.685377331 +0000 UTC m=+33.227566825" lastFinishedPulling="2025-07-16 00:56:15.923336431 +0000 UTC m=+34.465525965" observedRunningTime="2025-07-16 00:56:16.612845996 +0000 UTC m=+35.155035530" watchObservedRunningTime="2025-07-16 00:56:16.613265553 +0000 UTC m=+35.155455047" Jul 16 00:56:16.772514 containerd[2825]: time="2025-07-16T00:56:16.772478027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:16.772597 containerd[2825]: time="2025-07-16T00:56:16.772490907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 16 00:56:16.773182 containerd[2825]: time="2025-07-16T00:56:16.773156102Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:16.774900 containerd[2825]: time="2025-07-16T00:56:16.774879930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:16.775577 containerd[2825]: time="2025-07-16T00:56:16.775539645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 852.031655ms" Jul 16 00:56:16.775609 containerd[2825]: time="2025-07-16T00:56:16.775575685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 16 00:56:16.776338 containerd[2825]: time="2025-07-16T00:56:16.776320040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 16 00:56:16.777188 containerd[2825]: time="2025-07-16T00:56:16.777169354Z" level=info msg="CreateContainer within sandbox \"cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 16 00:56:16.781167 containerd[2825]: time="2025-07-16T00:56:16.781138405Z" level=info msg="Container fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:16.784582 containerd[2825]: time="2025-07-16T00:56:16.784547021Z" level=info msg="CreateContainer within sandbox \"cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\"" Jul 16 00:56:16.784899 containerd[2825]: time="2025-07-16T00:56:16.784878819Z" level=info msg="StartContainer for 
\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\"" Jul 16 00:56:16.785856 containerd[2825]: time="2025-07-16T00:56:16.785833692Z" level=info msg="connecting to shim fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b" address="unix:///run/containerd/s/954256d28b72b2316d14dc62b5621f3030af884e409c499ddccbcd22bb894e91" protocol=ttrpc version=3 Jul 16 00:56:16.811736 systemd[1]: Started cri-containerd-fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b.scope - libcontainer container fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b. Jul 16 00:56:16.812017 kubelet[4338]: I0716 00:56:16.811999 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:16.842199 containerd[2825]: time="2025-07-16T00:56:16.842133894Z" level=info msg="StartContainer for \"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" returns successfully" Jul 16 00:56:17.110391 containerd[2825]: time="2025-07-16T00:56:17.110320445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:17.110391 containerd[2825]: time="2025-07-16T00:56:17.110370204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 16 00:56:17.111063 containerd[2825]: time="2025-07-16T00:56:17.111043880Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:17.112521 containerd[2825]: time="2025-07-16T00:56:17.112502590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:56:17.113142 containerd[2825]: time="2025-07-16T00:56:17.113121306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 336.770787ms" Jul 16 00:56:17.113165 containerd[2825]: time="2025-07-16T00:56:17.113147906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 16 00:56:17.114938 containerd[2825]: time="2025-07-16T00:56:17.114918334Z" level=info msg="CreateContainer within sandbox \"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 16 00:56:17.120041 containerd[2825]: time="2025-07-16T00:56:17.120012260Z" level=info msg="Container 97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:17.124586 containerd[2825]: time="2025-07-16T00:56:17.124552550Z" level=info msg="CreateContainer within sandbox \"b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517\"" Jul 16 00:56:17.124927 
containerd[2825]: time="2025-07-16T00:56:17.124904668Z" level=info msg="StartContainer for \"97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517\"" Jul 16 00:56:17.126280 containerd[2825]: time="2025-07-16T00:56:17.126256339Z" level=info msg="connecting to shim 97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517" address="unix:///run/containerd/s/8bead702ea3587fc9ccd7728d0f0d59a0c3e2e0bc2f38e061bdc25f0c8d1fd06" protocol=ttrpc version=3 Jul 16 00:56:17.156690 systemd[1]: Started cri-containerd-97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517.scope - libcontainer container 97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517. Jul 16 00:56:17.184020 containerd[2825]: time="2025-07-16T00:56:17.183991236Z" level=info msg="StartContainer for \"97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517\" returns successfully" Jul 16 00:56:17.314894 systemd-networkd[2730]: vxlan.calico: Link UP Jul 16 00:56:17.314898 systemd-networkd[2730]: vxlan.calico: Gained carrier Jul 16 00:56:17.528278 containerd[2825]: time="2025-07-16T00:56:17.528229993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8s82,Uid:d85d0c4c-a0f0-4004-adda-490a66de2839,Namespace:kube-system,Attempt:0,}" Jul 16 00:56:17.576874 kubelet[4338]: I0716 00:56:17.576850 4338 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 16 00:56:17.576874 kubelet[4338]: I0716 00:56:17.576878 4338 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 16 00:56:17.616850 kubelet[4338]: I0716 00:56:17.616803 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hb2b4" podStartSLOduration=16.384697529 podStartE2EDuration="19.616786606s" podCreationTimestamp="2025-07-16 00:55:58 +0000 UTC" firstStartedPulling="2025-07-16 00:56:13.881606825 +0000 UTC m=+32.423796359" lastFinishedPulling="2025-07-16 00:56:17.113695902 +0000 UTC m=+35.655885436" observedRunningTime="2025-07-16 00:56:17.616420208 +0000 UTC m=+36.158609742" watchObservedRunningTime="2025-07-16 00:56:17.616786606 +0000 UTC m=+36.158976140" Jul 16 00:56:17.625314 kubelet[4338]: I0716 00:56:17.625270 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-267hx" podStartSLOduration=17.638639322 podStartE2EDuration="19.62525743s" podCreationTimestamp="2025-07-16 00:55:58 +0000 UTC" firstStartedPulling="2025-07-16 00:56:14.789610852 +0000 UTC m=+33.331800386" lastFinishedPulling="2025-07-16 00:56:16.77622896 +0000 UTC m=+35.318418494" observedRunningTime="2025-07-16 00:56:17.625049671 +0000 UTC m=+36.167239205" watchObservedRunningTime="2025-07-16 00:56:17.62525743 +0000 UTC m=+36.167446964" Jul 16 00:56:17.625350 systemd-networkd[2730]: calidd94838b93f: Link UP Jul 16 00:56:17.626136 systemd-networkd[2730]: calidd94838b93f: Gained carrier Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.568 [INFO][8332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0 coredns-7c65d6cfc9- kube-system d85d0c4c-a0f0-4004-adda-490a66de2839 805 0 2025-07-16 00:55:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-4904b64135 coredns-7c65d6cfc9-c8s82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd94838b93f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.568 [INFO][8332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.590 [INFO][8362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" HandleID="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.590 [INFO][8362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" HandleID="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000790420), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-4904b64135", "pod":"coredns-7c65d6cfc9-c8s82", "timestamp":"2025-07-16 00:56:17.590339261 +0000 UTC"}, Hostname:"ci-4372.0.1-n-4904b64135", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.590 [INFO][8362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.590 [INFO][8362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.590 [INFO][8362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-4904b64135' Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.601 [INFO][8362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.604 [INFO][8362] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.608 [INFO][8362] ipam/ipam.go 511: Trying affinity for 192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.609 [INFO][8362] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.611 [INFO][8362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.192/26 host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.611 [INFO][8362] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.95.192/26 handle="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.612 [INFO][8362] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8 Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.614 [INFO][8362] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.95.192/26 handle="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.619 [INFO][8362] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.95.200/26] block=192.168.95.192/26 handle="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.619 [INFO][8362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.200/26] handle="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" host="ci-4372.0.1-n-4904b64135" Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.619 [INFO][8362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:56:17.633850 containerd[2825]: 2025-07-16 00:56:17.619 [INFO][8362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.95.200/26] IPv6=[] ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" HandleID="k8s-pod-network.9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Workload="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.624 [INFO][8332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d85d0c4c-a0f0-4004-adda-490a66de2839", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"", Pod:"coredns-7c65d6cfc9-c8s82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd94838b93f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.624 [INFO][8332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.200/32] ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.624 [INFO][8332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd94838b93f ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.626 [INFO][8332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.626 [INFO][8332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d85d0c4c-a0f0-4004-adda-490a66de2839", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-4904b64135", ContainerID:"9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8", Pod:"coredns-7c65d6cfc9-c8s82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd94838b93f", MAC:"aa:8c:d4:9a:bd:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:56:17.634272 containerd[2825]: 2025-07-16 00:56:17.632 [INFO][8332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-c8s82" WorkloadEndpoint="ci--4372.0.1--n--4904b64135-k8s-coredns--7c65d6cfc9--c8s82-eth0" Jul 16 00:56:17.649577 containerd[2825]: time="2025-07-16T00:56:17.649527469Z" level=info msg="connecting to shim 9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8" address="unix:///run/containerd/s/f5ae25e134ef6f489f4d60923774a158d2fd3e468f92c5adade0a2835a20b4bc" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:56:17.651848 containerd[2825]: time="2025-07-16T00:56:17.651823093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"d7890c81b15b0baae96ed714bea5996793ab0bbcbd7894b190fc26eb459a8962\" pid:8419 exited_at:{seconds:1752627377 nanos:651618055}" Jul 16 00:56:17.660248 systemd[1]: Started cri-containerd-9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8.scope - libcontainer container 
9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8. Jul 16 00:56:17.686272 containerd[2825]: time="2025-07-16T00:56:17.686238785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c8s82,Uid:d85d0c4c-a0f0-4004-adda-490a66de2839,Namespace:kube-system,Attempt:0,} returns sandbox id \"9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8\"" Jul 16 00:56:17.688055 containerd[2825]: time="2025-07-16T00:56:17.688032933Z" level=info msg="CreateContainer within sandbox \"9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:56:17.692240 containerd[2825]: time="2025-07-16T00:56:17.692210146Z" level=info msg="Container d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:56:17.694853 containerd[2825]: time="2025-07-16T00:56:17.694827328Z" level=info msg="CreateContainer within sandbox \"9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be\"" Jul 16 00:56:17.694972 containerd[2825]: time="2025-07-16T00:56:17.694956167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"7f69aeff6dc3483ddae59f781dae359ea61bd743a1d7509b92a8d9c2c735d595\" pid:8418 exit_status:1 exited_at:{seconds:1752627377 nanos:694734369}" Jul 16 00:56:17.695187 containerd[2825]: time="2025-07-16T00:56:17.695168086Z" level=info msg="StartContainer for \"d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be\"" Jul 16 00:56:17.695956 containerd[2825]: time="2025-07-16T00:56:17.695936041Z" level=info msg="connecting to shim d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be" address="unix:///run/containerd/s/f5ae25e134ef6f489f4d60923774a158d2fd3e468f92c5adade0a2835a20b4bc" protocol=ttrpc version=3 Jul 16 00:56:17.730697 systemd[1]: Started cri-containerd-d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be.scope - libcontainer container d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be. 
Jul 16 00:56:17.751853 containerd[2825]: time="2025-07-16T00:56:17.751822390Z" level=info msg="StartContainer for \"d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be\" returns successfully" Jul 16 00:56:18.620491 kubelet[4338]: I0716 00:56:18.620347 4338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c8s82" podStartSLOduration=31.620330767 podStartE2EDuration="31.620330767s" podCreationTimestamp="2025-07-16 00:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:56:18.620160968 +0000 UTC m=+37.162350462" watchObservedRunningTime="2025-07-16 00:56:18.620330767 +0000 UTC m=+37.162520341" Jul 16 00:56:18.675747 containerd[2825]: time="2025-07-16T00:56:18.675710662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"c93172ff333ca48c3b5b9337089853a4a75b47186436a49348edd7254c2be50b\" pid:8601 exit_status:1 exited_at:{seconds:1752627378 nanos:675516303}" Jul 16 00:56:19.228701 systemd-networkd[2730]: calidd94838b93f: Gained IPv6LL Jul 16 00:56:19.292628 systemd-networkd[2730]: vxlan.calico: Gained IPv6LL Jul 16 00:56:19.669008 containerd[2825]: time="2025-07-16T00:56:19.668976586Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"40d5faff77641124c407a09edd431f82e9e22974d33662c69e85cf70af7daec6\" pid:8649 exit_status:1 exited_at:{seconds:1752627379 nanos:668804547}" Jul 16 00:56:24.795402 kubelet[4338]: I0716 00:56:24.795345 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:24.870537 containerd[2825]: time="2025-07-16T00:56:24.870508805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"28ebaedd74dbeb645e35f9ff9b22ffb2461bc852e410524dcb78bec80e9ec915\" pid:8704 exited_at:{seconds:1752627384 nanos:870342326}" Jul 16 00:56:24.947329 containerd[2825]: time="2025-07-16T00:56:24.947285281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"8a34085dc226dcd41fea7881cdd6d00662da53e2e869a7a17982bb76f9daccc1\" pid:8738 exited_at:{seconds:1752627384 nanos:947042962}" Jul 16 00:56:32.426537 kubelet[4338]: I0716 00:56:32.426496 4338 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:56:34.805559 containerd[2825]: time="2025-07-16T00:56:34.805519974Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"b31a3df000a733a6e4cdeaa33e3f60872112ecd0b9d7c5b07c17dbe7f1b25241\" pid:8799 exited_at:{seconds:1752627394 nanos:805271535}" Jul 16 00:56:42.331351 containerd[2825]: time="2025-07-16T00:56:42.331243193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"3207f7624302366cd6ff87e2bfa9253e585800cf42652abf4786adb79e19a34b\" pid:8862 exited_at:{seconds:1752627402 nanos:330875633}" Jul 16 00:56:43.233389 containerd[2825]: time="2025-07-16T00:56:43.233348660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" 
id:\"37445db1fda4cd154155660b7ca5603497b17d8d75e22cfa11f2c9a9e513b7df\" pid:8898 exited_at:{seconds:1752627403 nanos:233189460}" Jul 16 00:56:54.869277 containerd[2825]: time="2025-07-16T00:56:54.869229826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"13ac53597a363a25b40f1f366f2beb8d4ff59eec4b614be9743cb8b2aa350e69\" pid:8923 exited_at:{seconds:1752627414 nanos:869030866}" Jul 16 00:57:04.234455 containerd[2825]: time="2025-07-16T00:57:04.234410523Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"e66a580dd21902363fc58d47ecb14852580365e4bedca317514699c33dcbc42d\" pid:8963 exited_at:{seconds:1752627424 nanos:234269441}" Jul 16 00:57:04.806653 containerd[2825]: time="2025-07-16T00:57:04.806607817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"1d02c10bca56ebc77791bf3958ba26ab7d082be534c6b2f5ccbf6ef6a67a0d8a\" pid:8984 exited_at:{seconds:1752627424 nanos:806368373}" Jul 16 00:57:13.228421 containerd[2825]: time="2025-07-16T00:57:13.228383980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"6f4990244168fe974e224ccf99db5b75232f9cd23e73ef74a501e0b31ad7f943\" pid:9025 exited_at:{seconds:1752627433 nanos:228204698}" Jul 16 00:57:24.869788 containerd[2825]: time="2025-07-16T00:57:24.869744923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"ff4dd702ecfb60053f91636ea7fc624576ce8c0d51ea3c86627dca4ffe6d697d\" pid:9055 exited_at:{seconds:1752627444 nanos:869539961}" Jul 16 00:57:34.797669 containerd[2825]: time="2025-07-16T00:57:34.797624133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"88b3a45ac5808c67bf86cbf4817768e0b55de8e091ff0a5db6e3af028e82e526\" pid:9090 exited_at:{seconds:1752627454 nanos:797447451}" Jul 16 00:57:42.335135 containerd[2825]: time="2025-07-16T00:57:42.335090273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"0657262f94564cc07a18380f00f49318229bec34ad7a61d86fa27701ac0a1c13\" pid:9139 exited_at:{seconds:1752627462 nanos:334858752}" Jul 16 00:57:43.232369 containerd[2825]: time="2025-07-16T00:57:43.232330336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"ce9fb01706b58cee11e6d6d5455da241e8429581b599089755051f8c1783b562\" pid:9176 exited_at:{seconds:1752627463 nanos:232192775}" Jul 16 00:57:54.867410 containerd[2825]: time="2025-07-16T00:57:54.867361032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"76b76a8281cdd41109a0c46dd1c03920a1a7c8ec27170e48509ef9d80b8ca20e\" pid:9227 exited_at:{seconds:1752627474 nanos:867090511}" Jul 16 00:58:04.236579 containerd[2825]: time="2025-07-16T00:58:04.236526176Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"55e52f0846263c77a5535c02ee60eff3a281b65d1dbcf3967c2dbecd1e3fe0c6\" pid:9268 exited_at:{seconds:1752627484 
nanos:236329695}" Jul 16 00:58:04.801560 containerd[2825]: time="2025-07-16T00:58:04.801529892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"dab08dedd75f33f971fe22b96983a9f0442f1565c64e58f3a84e8fcd6dc80a30\" pid:9289 exited_at:{seconds:1752627484 nanos:801304491}" Jul 16 00:58:13.231515 containerd[2825]: time="2025-07-16T00:58:13.231437418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"6e32e162bb2ac5d37f92ea5347e1ceb8a9d5dc5a7d2f249bdcb1bc767ad3f8d8\" pid:9340 exited_at:{seconds:1752627493 nanos:231258338}" Jul 16 00:58:24.869077 containerd[2825]: time="2025-07-16T00:58:24.869026928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"f8bca5c636fd4632a8bbd1bc15311073fdc49527bea3fb086820a60d920914ec\" pid:9389 exited_at:{seconds:1752627504 nanos:868783568}" Jul 16 00:58:34.806515 containerd[2825]: time="2025-07-16T00:58:34.806465381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"dcaa4673f365667527cdc76522f5135080dc822aff7c5564b7a0a52f66dd6aed\" pid:9424 exited_at:{seconds:1752627514 nanos:806227581}" Jul 16 00:58:42.333781 containerd[2825]: time="2025-07-16T00:58:42.333732870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"01a1593900c7273169addc4710206d627cb6152e27ad8007c9ed886a72fea782\" pid:9462 exited_at:{seconds:1752627522 nanos:333557549}" Jul 16 00:58:43.230301 containerd[2825]: time="2025-07-16T00:58:43.230261180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"cc33d38c66d452b36f5278212cff05b47ca2bce10b55f799e421f8fcf3b2048b\" pid:9499 exited_at:{seconds:1752627523 nanos:230085220}" Jul 16 00:58:47.123176 update_engine[2818]: I20250716 00:58:47.122659 2818 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 16 00:58:47.123176 update_engine[2818]: I20250716 00:58:47.122722 2818 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 16 00:58:47.123176 update_engine[2818]: I20250716 00:58:47.122960 2818 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123287 2818 omaha_request_params.cc:62] Current group set to alpha Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123369 2818 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123376 2818 update_attempter.cc:643] Scheduling an action processor start. 
Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123389 2818 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123416 2818 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123461 2818 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123467 2818 omaha_request_action.cc:272] Request: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: Jul 16 00:58:47.123505 update_engine[2818]: I20250716 00:58:47.123473 2818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:58:47.123825 locksmithd[2854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 16 00:58:47.124528 update_engine[2818]: I20250716 00:58:47.124509 2818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:58:47.124847 update_engine[2818]: I20250716 00:58:47.124827 2818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:58:47.125418 update_engine[2818]: E20250716 00:58:47.125398 2818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:58:47.125460 update_engine[2818]: I20250716 00:58:47.125449 2818 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 16 00:58:54.867655 containerd[2825]: time="2025-07-16T00:58:54.867615798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"e41fd854ec13ce5cd15542dce45da520f71ea674871e306608d5d328662b6e50\" pid:9524 exited_at:{seconds:1752627534 nanos:867406757}" Jul 16 00:58:57.111664 update_engine[2818]: I20250716 00:58:57.111594 2818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:58:57.112046 update_engine[2818]: I20250716 00:58:57.111881 2818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:58:57.112137 update_engine[2818]: I20250716 00:58:57.112111 2818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 16 00:58:57.112547 update_engine[2818]: E20250716 00:58:57.112532 2818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:58:57.112604 update_engine[2818]: I20250716 00:58:57.112592 2818 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 16 00:59:04.238585 containerd[2825]: time="2025-07-16T00:59:04.238535914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"03530a434d1b42ca34dfd725159a580c05a87426d6a2688dadd0576d3aa44045\" pid:9567 exited_at:{seconds:1752627544 nanos:238391073}" Jul 16 00:59:04.803248 containerd[2825]: time="2025-07-16T00:59:04.803202432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"834b77bab3b53fb88ebc5da1e212d2f98a3980834076f1643f639cfd9a1638f3\" pid:9590 exited_at:{seconds:1752627544 nanos:803005712}" Jul 16 00:59:07.111657 update_engine[2818]: I20250716 00:59:07.111588 2818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:59:07.111992 update_engine[2818]: I20250716 00:59:07.111877 2818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:59:07.112130 update_engine[2818]: I20250716 00:59:07.112110 2818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:59:07.112534 update_engine[2818]: E20250716 00:59:07.112517 2818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:59:07.112572 update_engine[2818]: I20250716 00:59:07.112552 2818 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 16 00:59:13.233538 containerd[2825]: time="2025-07-16T00:59:13.233498424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"33dd66690351a4dd42a848322f45eb6e9999595960b217004be3c7804a88a733\" pid:9630 exited_at:{seconds:1752627553 nanos:233310058}" Jul 16 00:59:17.111666 update_engine[2818]: I20250716 00:59:17.111595 2818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:59:17.112159 update_engine[2818]: I20250716 00:59:17.112139 2818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:59:17.112453 update_engine[2818]: I20250716 00:59:17.112422 2818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:59:17.112756 update_engine[2818]: E20250716 00:59:17.112730 2818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:59:17.112852 update_engine[2818]: I20250716 00:59:17.112835 2818 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 00:59:17.112901 update_engine[2818]: I20250716 00:59:17.112890 2818 omaha_request_action.cc:617] Omaha request response: Jul 16 00:59:17.113032 update_engine[2818]: E20250716 00:59:17.113016 2818 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 16 00:59:17.113093 update_engine[2818]: I20250716 00:59:17.113080 2818 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 16 00:59:17.113138 update_engine[2818]: I20250716 00:59:17.113126 2818 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:59:17.113185 update_engine[2818]: I20250716 00:59:17.113170 2818 update_attempter.cc:306] Processing Done. 
Jul 16 00:59:17.113242 update_engine[2818]: E20250716 00:59:17.113227 2818 update_attempter.cc:619] Update failed. Jul 16 00:59:17.113297 update_engine[2818]: I20250716 00:59:17.113281 2818 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 16 00:59:17.113340 update_engine[2818]: I20250716 00:59:17.113327 2818 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 16 00:59:17.113385 update_engine[2818]: I20250716 00:59:17.113372 2818 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 16 00:59:17.113491 update_engine[2818]: I20250716 00:59:17.113474 2818 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 00:59:17.113556 update_engine[2818]: I20250716 00:59:17.113543 2818 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 16 00:59:17.113672 update_engine[2818]: I20250716 00:59:17.113611 2818 omaha_request_action.cc:272] Request: Jul 16 00:59:17.113672 update_engine[2818]: I20250716 00:59:17.113652 2818 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:59:17.113883 update_engine[2818]: I20250716 00:59:17.113798 2818 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:59:17.113905 locksmithd[2854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 16 00:59:17.114101 update_engine[2818]: I20250716 00:59:17.113986 2818 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:59:17.114257 update_engine[2818]: E20250716 00:59:17.114233 2818 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:59:17.114364 update_engine[2818]: I20250716 00:59:17.114347 2818 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 00:59:17.114413 update_engine[2818]: I20250716 00:59:17.114400 2818 omaha_request_action.cc:617] Omaha request response: Jul 16 00:59:17.114532 update_engine[2818]: I20250716 00:59:17.114449 2818 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:59:17.114532 update_engine[2818]: I20250716 00:59:17.114457 2818 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:59:17.114532 update_engine[2818]: I20250716 00:59:17.114462 2818 update_attempter.cc:306] Processing Done. Jul 16 00:59:17.114532 update_engine[2818]: I20250716 00:59:17.114466 2818 update_attempter.cc:310] Error event sent. 
Jul 16 00:59:17.114532 update_engine[2818]: I20250716 00:59:17.114475 2818 update_check_scheduler.cc:74] Next update check in 47m55s Jul 16 00:59:17.114654 locksmithd[2854]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 16 00:59:24.867281 containerd[2825]: time="2025-07-16T00:59:24.867221747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"6fd85b7b9fe15d01a455ed3e7773e31690052808801161c569e5c0b836f6c6e7\" pid:9656 exited_at:{seconds:1752627564 nanos:866910940}" Jul 16 00:59:34.805734 containerd[2825]: time="2025-07-16T00:59:34.805691476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"167ae07fb50c6e5b1c5c58af7bcdae87bb520c0ab0673d15f1c42e62bec873ba\" pid:9714 exited_at:{seconds:1752627574 nanos:805501512}" Jul 16 00:59:42.337788 containerd[2825]: time="2025-07-16T00:59:42.337708995Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"f15fcbeb4fecb825c84c03428a8483431ffc5aa7fc7e9a4dfe363c22225b31b0\" pid:9754 exited_at:{seconds:1752627582 nanos:337488711}" Jul 16 00:59:43.235483 containerd[2825]: time="2025-07-16T00:59:43.235453670Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"438aee999bb38dfb969b5902929f7b3c629e29f4c8b980a8a2aae0799f831168\" pid:9793 exited_at:{seconds:1752627583 nanos:235293067}" Jul 16 00:59:54.861372 containerd[2825]: time="2025-07-16T00:59:54.861335818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"7a9823b17bac60d8d7460ba61eac58f882a56621c4c47b5b4f732e77e8335381\" pid:9818 exited_at:{seconds:1752627594 nanos:861077414}" Jul 16 01:00:04.230185 containerd[2825]: time="2025-07-16T01:00:04.230139059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"244aff5d7286ee29ccb93495a1597b5dec8f5bdf410d9404d8761d02638fdce2\" pid:9853 exited_at:{seconds:1752627604 nanos:229948456}" Jul 16 01:00:04.798773 containerd[2825]: time="2025-07-16T01:00:04.798738601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"ff5ea26800f5f2875c38929a0c2ef08dfd000cf63e7946f990688be6a41ae91b\" pid:9875 exited_at:{seconds:1752627604 nanos:798549118}" Jul 16 01:00:13.228387 containerd[2825]: time="2025-07-16T01:00:13.228349039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"492e070912779386cb6a634ba9ba56f3f00418c231e5ed6e69c1abd647e17014\" pid:9924 exited_at:{seconds:1752627613 nanos:228181917}" Jul 16 01:00:24.863559 containerd[2825]: time="2025-07-16T01:00:24.863513339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"9b128c4e2950fb5614287d5c82eee8a777609e7c53953fc58aea7f499ed66038\" pid:9971 exited_at:{seconds:1752627624 nanos:863259176}" Jul 16 01:00:34.805456 containerd[2825]: time="2025-07-16T01:00:34.805413142Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"d22244effea137b5f8b96c03d7584188cbfc79e7361c7714dde90d4cd0cf627f\" pid:10014 exited_at:{seconds:1752627634 nanos:805251260}" Jul 16 01:00:37.354605 containerd[2825]: time="2025-07-16T01:00:37.354529847Z" level=warning msg="container event discarded" container=2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4 type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367071864Z" level=warning msg="container event discarded" container=2479e8b4546daafbdf78322b12ccde33b243dcf14ce25aa9cd4dd876a5bcdfc4 type=CONTAINER_STARTED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367098104Z" level=warning msg="container event discarded" container=94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367106104Z" level=warning msg="container event discarded" container=94fe1ed7b07efd23c25034747bc68903420800bb89a4079bcbac84e1cca33f1a type=CONTAINER_STARTED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367113744Z" level=warning msg="container event discarded" container=3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367120264Z" level=warning msg="container event discarded" container=3f55d822e70b5c88c163b000d9fd1c941321abe3347576fc716bcd305bd8c42b type=CONTAINER_STARTED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367126144Z" level=warning msg="container event discarded" container=2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52 type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.367158 containerd[2825]: time="2025-07-16T01:00:37.367134064Z" level=warning msg="container event discarded" container=24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2 type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.381328 containerd[2825]: time="2025-07-16T01:00:37.381291619Z" level=warning msg="container event discarded" container=2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c type=CONTAINER_CREATED_EVENT Jul 16 01:00:37.439592 containerd[2825]: time="2025-07-16T01:00:37.439573296Z" level=warning msg="container event discarded" container=24a7ff732cfdfdc787ea2972313e38c50e0ea188349bd72805a79b28a29bfea2 type=CONTAINER_STARTED_EVENT Jul 16 01:00:37.439717 containerd[2825]: time="2025-07-16T01:00:37.439678337Z" level=warning msg="container event discarded" container=2576bce2f8a945078a001e006c90d41f779a5299dc87877a7cc75581c0de5e0c type=CONTAINER_STARTED_EVENT Jul 16 01:00:37.439717 containerd[2825]: time="2025-07-16T01:00:37.439697857Z" level=warning msg="container event discarded" container=2cc9543b2d2da054d90f716cb16dfd204317e1815ba2f5291d4cdb9aa19e0e52 type=CONTAINER_STARTED_EVENT Jul 16 01:00:42.342055 containerd[2825]: time="2025-07-16T01:00:42.342006436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"8ad8a6310ce69fa83a49a63d5837b60f2cba710fa30c1eb2e49f94f65f06b9de\" pid:10054 exited_at:{seconds:1752627642 nanos:341757993}" Jul 16 01:00:43.230473 containerd[2825]: time="2025-07-16T01:00:43.230430754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" 
id:\"4678676a7188d8f9c77e6ee68eb701850642665a50367f5718e370a1e6e10234\" pid:10091 exited_at:{seconds:1752627643 nanos:230115791}" Jul 16 01:00:47.234735 containerd[2825]: time="2025-07-16T01:00:47.234667327Z" level=warning msg="container event discarded" container=41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f type=CONTAINER_CREATED_EVENT Jul 16 01:00:47.234735 containerd[2825]: time="2025-07-16T01:00:47.234716568Z" level=warning msg="container event discarded" container=41ed71740a70521b881bea5b7a4f1ba27aba6a637604246ef057796b2fad3f6f type=CONTAINER_STARTED_EVENT Jul 16 01:00:47.245876 containerd[2825]: time="2025-07-16T01:00:47.245834959Z" level=warning msg="container event discarded" container=82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0 type=CONTAINER_CREATED_EVENT Jul 16 01:00:47.309130 containerd[2825]: time="2025-07-16T01:00:47.309111875Z" level=warning msg="container event discarded" container=82f7400ba26e55f639f22e022e51ccf98332089e1518ed8230710feb25d576c0 type=CONTAINER_STARTED_EVENT Jul 16 01:00:47.428598 containerd[2825]: time="2025-07-16T01:00:47.428575075Z" level=warning msg="container event discarded" container=040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933 type=CONTAINER_CREATED_EVENT Jul 16 01:00:47.428598 containerd[2825]: time="2025-07-16T01:00:47.428596755Z" level=warning msg="container event discarded" container=040dcc707bdb2334584393ad831bd0b8f24c94196ee8e5a78fd530dba0305933 type=CONTAINER_STARTED_EVENT Jul 16 01:00:48.565619 containerd[2825]: time="2025-07-16T01:00:48.565580770Z" level=warning msg="container event discarded" container=c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b type=CONTAINER_CREATED_EVENT Jul 16 01:00:48.616799 containerd[2825]: time="2025-07-16T01:00:48.616772920Z" level=warning msg="container event discarded" container=c3d1ad87738b1c30631c56663be578f851ce43042e4b34f61f7729437395e44b type=CONTAINER_STARTED_EVENT Jul 16 01:00:54.869295 containerd[2825]: time="2025-07-16T01:00:54.869262664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"f4ca76a7b9140ddf0f826a530fa66d02dfcee2188039c8c270c6b83270639bbd\" pid:10115 exited_at:{seconds:1752627654 nanos:868886381}" Jul 16 01:00:58.582819 containerd[2825]: time="2025-07-16T01:00:58.582771582Z" level=warning msg="container event discarded" container=61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117 type=CONTAINER_CREATED_EVENT Jul 16 01:00:58.582819 containerd[2825]: time="2025-07-16T01:00:58.582804422Z" level=warning msg="container event discarded" container=61c245adaf84402253f7c046c3e71a4cfebabfd6af39e210ed6190406eacd117 type=CONTAINER_STARTED_EVENT Jul 16 01:00:58.924541 containerd[2825]: time="2025-07-16T01:00:58.924514095Z" level=warning msg="container event discarded" container=059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a type=CONTAINER_CREATED_EVENT Jul 16 01:00:58.924541 containerd[2825]: time="2025-07-16T01:00:58.924538855Z" level=warning msg="container event discarded" container=059920926f333d802f9ea46af0803f86ea575871993d742fc2889c32c7eb1f4a type=CONTAINER_STARTED_EVENT Jul 16 01:00:59.573257 containerd[2825]: time="2025-07-16T01:00:59.573220402Z" level=warning msg="container event discarded" container=5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28 type=CONTAINER_CREATED_EVENT Jul 16 01:00:59.626516 containerd[2825]: time="2025-07-16T01:00:59.626490130Z" level=warning 
msg="container event discarded" container=5889a2ed68bd2dd773e7b6da89e7752979a4300d7c9ad8e4903063a47395ca28 type=CONTAINER_STARTED_EVENT Jul 16 01:00:59.886032 containerd[2825]: time="2025-07-16T01:00:59.885944867Z" level=warning msg="container event discarded" container=0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c type=CONTAINER_CREATED_EVENT Jul 16 01:00:59.944158 containerd[2825]: time="2025-07-16T01:00:59.944129479Z" level=warning msg="container event discarded" container=0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c type=CONTAINER_STARTED_EVENT Jul 16 01:01:00.267511 containerd[2825]: time="2025-07-16T01:01:00.267470943Z" level=warning msg="container event discarded" container=0a6269874295520ce3f914645bbc3c3e5c9e25ebc49cdc81db84b37efaa6075c type=CONTAINER_STOPPED_EVENT Jul 16 01:01:01.846109 containerd[2825]: time="2025-07-16T01:01:01.846017642Z" level=warning msg="container event discarded" container=56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9 type=CONTAINER_CREATED_EVENT Jul 16 01:01:01.900284 containerd[2825]: time="2025-07-16T01:01:01.900253491Z" level=warning msg="container event discarded" container=56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9 type=CONTAINER_STARTED_EVENT Jul 16 01:01:02.421742 containerd[2825]: time="2025-07-16T01:01:02.421683492Z" level=warning msg="container event discarded" container=56aa25fe8e5198dea3121b1af7956733b0cdd92c620a86faea48329b9010b6f9 type=CONTAINER_STOPPED_EVENT Jul 16 01:01:04.241374 containerd[2825]: time="2025-07-16T01:01:04.241337188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"5a7ec778dca4aef3f4d25f13181a701203e72237861a1c23a3d738e78eb6db89\" pid:10157 exited_at:{seconds:1752627664 nanos:241136266}" Jul 16 01:01:04.797125 containerd[2825]: time="2025-07-16T01:01:04.797075099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"8532c4df60cca466b592d7bfb529b0f6e70fed002fd3215e5f05704f84ea0e18\" pid:10179 exited_at:{seconds:1752627664 nanos:796861817}" Jul 16 01:01:05.453685 containerd[2825]: time="2025-07-16T01:01:05.453643553Z" level=warning msg="container event discarded" container=ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb type=CONTAINER_CREATED_EVENT Jul 16 01:01:05.514974 containerd[2825]: time="2025-07-16T01:01:05.514923371Z" level=warning msg="container event discarded" container=ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb type=CONTAINER_STARTED_EVENT Jul 16 01:01:07.092808 containerd[2825]: time="2025-07-16T01:01:07.092757787Z" level=warning msg="container event discarded" container=68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c type=CONTAINER_CREATED_EVENT Jul 16 01:01:07.092808 containerd[2825]: time="2025-07-16T01:01:07.092787587Z" level=warning msg="container event discarded" container=68691235fff7fbd4cc4477206befefb9b528ed7ce7f57cf437281a87a0ca688c type=CONTAINER_STARTED_EVENT Jul 16 01:01:07.422480 containerd[2825]: time="2025-07-16T01:01:07.422432360Z" level=warning msg="container event discarded" container=d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6 type=CONTAINER_CREATED_EVENT Jul 16 01:01:07.484897 containerd[2825]: time="2025-07-16T01:01:07.484859700Z" level=warning msg="container event discarded" container=d4981331c273878e2292daef3d7ca56d339ca85278ce97b9a1adce8badd43ca6 
type=CONTAINER_STARTED_EVENT Jul 16 01:01:08.128831 containerd[2825]: time="2025-07-16T01:01:08.128795507Z" level=warning msg="container event discarded" container=b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c type=CONTAINER_CREATED_EVENT Jul 16 01:01:08.189099 containerd[2825]: time="2025-07-16T01:01:08.189066705Z" level=warning msg="container event discarded" container=b7ba1ca1fe95e6c83ad92bd4e87e522f2c93e70ddda530f94881b5dbc67bb61c type=CONTAINER_STARTED_EVENT Jul 16 01:01:13.232321 containerd[2825]: time="2025-07-16T01:01:13.232286189Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"54aca5069ebabd452ae620852c619f3d79e567d64aa9279d3aa15d8ac574b91a\" pid:10233 exited_at:{seconds:1752627673 nanos:232122628}" Jul 16 01:01:13.687273 containerd[2825]: time="2025-07-16T01:01:13.687227453Z" level=warning msg="container event discarded" container=2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f type=CONTAINER_CREATED_EVENT Jul 16 01:01:13.687273 containerd[2825]: time="2025-07-16T01:01:13.687269374Z" level=warning msg="container event discarded" container=2e3e00cd5313a37d088fa654a7d7cedad95b60a2046238e1bcbb61500618087f type=CONTAINER_STARTED_EVENT Jul 16 01:01:13.687442 containerd[2825]: time="2025-07-16T01:01:13.687286174Z" level=warning msg="container event discarded" container=68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7 type=CONTAINER_CREATED_EVENT Jul 16 01:01:13.743697 containerd[2825]: time="2025-07-16T01:01:13.743654123Z" level=warning msg="container event discarded" container=68bc51b00ccb0fe1cb8fb2644e14d1c3279bd20738d0d5d2b61ed1612b91cda7 type=CONTAINER_STARTED_EVENT Jul 16 01:01:13.795866 containerd[2825]: time="2025-07-16T01:01:13.795838477Z" level=warning msg="container event discarded" container=3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3 type=CONTAINER_CREATED_EVENT Jul 16 01:01:13.795866 containerd[2825]: time="2025-07-16T01:01:13.795851037Z" level=warning msg="container event discarded" container=3f328e2303aa84e79d118105f9b2d41b524206951e002f6d4e9cd02e9a40ffd3 type=CONTAINER_STARTED_EVENT Jul 16 01:01:13.891226 containerd[2825]: time="2025-07-16T01:01:13.891183230Z" level=warning msg="container event discarded" container=b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21 type=CONTAINER_CREATED_EVENT Jul 16 01:01:13.891226 containerd[2825]: time="2025-07-16T01:01:13.891206030Z" level=warning msg="container event discarded" container=b322ab4eaa745ac4ac78f8ee0db5eca1cb6a216ae783e03d1fc0a63169540a21 type=CONTAINER_STARTED_EVENT Jul 16 01:01:14.011484 containerd[2825]: time="2025-07-16T01:01:14.011402709Z" level=warning msg="container event discarded" container=0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9 type=CONTAINER_CREATED_EVENT Jul 16 01:01:14.011484 containerd[2825]: time="2025-07-16T01:01:14.011437190Z" level=warning msg="container event discarded" container=0acfaa9382c9fc78079f706b7191623d27238579336162c69a3d05714ab2faa9 type=CONTAINER_STARTED_EVENT Jul 16 01:01:14.651096 containerd[2825]: time="2025-07-16T01:01:14.651053835Z" level=warning msg="container event discarded" container=29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7 type=CONTAINER_CREATED_EVENT Jul 16 01:01:14.695265 containerd[2825]: time="2025-07-16T01:01:14.695234001Z" level=warning msg="container event discarded" container=974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772 
type=CONTAINER_CREATED_EVENT Jul 16 01:01:14.695265 containerd[2825]: time="2025-07-16T01:01:14.695249441Z" level=warning msg="container event discarded" container=974cdaa5ec632195aa34b811ed3d3927adcb2c19920e5a8c312270a512b3c772 type=CONTAINER_STARTED_EVENT Jul 16 01:01:14.695265 containerd[2825]: time="2025-07-16T01:01:14.695256241Z" level=warning msg="container event discarded" container=29f8988acb827b631463ab94c9fe0fde941f45cea1fae4ccdb518ad8119abca7 type=CONTAINER_STARTED_EVENT Jul 16 01:01:14.797498 containerd[2825]: time="2025-07-16T01:01:14.797446125Z" level=warning msg="container event discarded" container=cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac type=CONTAINER_CREATED_EVENT Jul 16 01:01:14.797498 containerd[2825]: time="2025-07-16T01:01:14.797471365Z" level=warning msg="container event discarded" container=cc1f9a61dd50865d461d0946470bd7dfd43a82b2e814a59c42b256c73626a5ac type=CONTAINER_STARTED_EVENT Jul 16 01:01:14.966735 containerd[2825]: time="2025-07-16T01:01:14.966689684Z" level=warning msg="container event discarded" container=c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60 type=CONTAINER_CREATED_EVENT Jul 16 01:01:15.029901 containerd[2825]: time="2025-07-16T01:01:15.029877005Z" level=warning msg="container event discarded" container=c479a09b6c5b7c89cbe0525444dd288493a807cb03d560d858b0fe7faf14fa60 type=CONTAINER_STARTED_EVENT Jul 16 01:01:15.029999 containerd[2825]: time="2025-07-16T01:01:15.029973005Z" level=warning msg="container event discarded" container=161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7 type=CONTAINER_CREATED_EVENT Jul 16 01:01:15.093984 containerd[2825]: time="2025-07-16T01:01:15.093949891Z" level=warning msg="container event discarded" container=161ecdd7b410831eb6b0b93f622c50e6aade1089fee6cef46fe3f5e204d0f2f7 type=CONTAINER_STARTED_EVENT Jul 16 01:01:15.944864 containerd[2825]: time="2025-07-16T01:01:15.944818198Z" level=warning msg="container event discarded" container=e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469 type=CONTAINER_CREATED_EVENT Jul 16 01:01:16.007057 containerd[2825]: time="2025-07-16T01:01:16.007014028Z" level=warning msg="container event discarded" container=e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469 type=CONTAINER_STARTED_EVENT Jul 16 01:01:16.794864 containerd[2825]: time="2025-07-16T01:01:16.794823777Z" level=warning msg="container event discarded" container=fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b type=CONTAINER_CREATED_EVENT Jul 16 01:01:16.852054 containerd[2825]: time="2025-07-16T01:01:16.852019803Z" level=warning msg="container event discarded" container=fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b type=CONTAINER_STARTED_EVENT Jul 16 01:01:17.134459 containerd[2825]: time="2025-07-16T01:01:17.134385221Z" level=warning msg="container event discarded" container=97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517 type=CONTAINER_CREATED_EVENT Jul 16 01:01:17.193616 containerd[2825]: time="2025-07-16T01:01:17.193580821Z" level=warning msg="container event discarded" container=97e2927acb4f52894206b4bbfdfc153b25abc3834bd6fba823d2962ad5227517 type=CONTAINER_STARTED_EVENT Jul 16 01:01:17.696955 containerd[2825]: time="2025-07-16T01:01:17.696899582Z" level=warning msg="container event discarded" container=9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8 type=CONTAINER_CREATED_EVENT Jul 16 01:01:17.696955 containerd[2825]: time="2025-07-16T01:01:17.696928542Z" 
level=warning msg="container event discarded" container=9750931e5b77516645e41c79b2b11cd217a22706e33db6e9349ed078186d02e8 type=CONTAINER_STARTED_EVENT Jul 16 01:01:17.696955 containerd[2825]: time="2025-07-16T01:01:17.696936503Z" level=warning msg="container event discarded" container=d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be type=CONTAINER_CREATED_EVENT Jul 16 01:01:17.761183 containerd[2825]: time="2025-07-16T01:01:17.761129903Z" level=warning msg="container event discarded" container=d05763edcc901224fdacca0b6099c9716e4949ba90d0e4ddce913a76a6d960be type=CONTAINER_STARTED_EVENT Jul 16 01:01:24.868061 containerd[2825]: time="2025-07-16T01:01:24.868011499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"0c635fa34ea900a557fc654764b350e7dec641dba4504788caded35a6b447c72\" pid:10261 exited_at:{seconds:1752627684 nanos:867743217}" Jul 16 01:01:34.805275 containerd[2825]: time="2025-07-16T01:01:34.805225205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"256d5d1316705853be672b69c06557aa052e81f9aee0481d9375027a1283e9bb\" pid:10300 exited_at:{seconds:1752627694 nanos:805049484}" Jul 16 01:01:42.341248 containerd[2825]: time="2025-07-16T01:01:42.341209904Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"db354d065639779a93ebb70a206129962ea0a941b6ac9b0b1b16e9d1c5eace61\" pid:10350 exited_at:{seconds:1752627702 nanos:340970303}" Jul 16 01:01:43.230386 containerd[2825]: time="2025-07-16T01:01:43.230354819Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"6ec9c4066bc7b5a08312caf9242392cafd384115ad038f26c0ee34cd95f9b9ee\" pid:10388 exited_at:{seconds:1752627703 nanos:230178338}" Jul 16 01:01:54.870894 containerd[2825]: time="2025-07-16T01:01:54.870852440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"9f1221e9bc0a74901989e243124dfa2c9a559e0515733c1f192dd02a24aa0fba\" pid:10413 exited_at:{seconds:1752627714 nanos:870608678}" Jul 16 01:02:04.239673 containerd[2825]: time="2025-07-16T01:02:04.239636684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"3bfc0512a9ae141dfe958f46aea8bb309c7924bea119ac3880102611167b86fa\" pid:10449 exited_at:{seconds:1752627724 nanos:239439403}" Jul 16 01:02:04.795843 containerd[2825]: time="2025-07-16T01:02:04.795800582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"8b655f8c25cbc2c313e74cbb7f308bd3982e580c8f7ecbe55a9ec807e25779dd\" pid:10470 exited_at:{seconds:1752627724 nanos:795596181}" Jul 16 01:02:13.232404 containerd[2825]: time="2025-07-16T01:02:13.232358660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"d3708689be45eafecaf2850b3d8901b34822f0ae988cfb1dcb99e396a39cbf2f\" pid:10515 exited_at:{seconds:1752627733 nanos:232184378}" Jul 16 01:02:24.870932 containerd[2825]: time="2025-07-16T01:02:24.870884708Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"ae24fcb36a616a6f00ea42f1f08d766d558cb68555e4211e6a15c25899bd5cf0\" pid:10542 exited_at:{seconds:1752627744 nanos:870675707}" Jul 16 01:02:34.802572 containerd[2825]: time="2025-07-16T01:02:34.802524295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"f608dd56f9c4455e5665b936f29f6b88bd593d6008853457b5a97e40eab43cfe\" pid:10593 exited_at:{seconds:1752627754 nanos:802276853}" Jul 16 01:02:42.342803 containerd[2825]: time="2025-07-16T01:02:42.342730779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"bdf40849f821afbedc06f57eff507bca504e801dc521ee0cb2d674cbc55e475f\" pid:10636 exited_at:{seconds:1752627762 nanos:342527058}" Jul 16 01:02:43.223292 containerd[2825]: time="2025-07-16T01:02:43.223260034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"e606b3d6b2a965c6f2eec5e75f5291321298031c75ca812dbb54119f16c85dd0\" pid:10673 exited_at:{seconds:1752627763 nanos:223092073}" Jul 16 01:02:54.864987 containerd[2825]: time="2025-07-16T01:02:54.864945127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"2c6ecccaf2e9ac29217114fbbe26c5bbab4ce5fbbf04edeb827c940cb68e8799\" pid:10719 exited_at:{seconds:1752627774 nanos:864701965}" Jul 16 01:03:04.238559 containerd[2825]: time="2025-07-16T01:03:04.238516805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"9dae5ff9c36f38c291ae1e179c57b32005ca388130dd2acf6954874c327dd845\" pid:10756 exited_at:{seconds:1752627784 nanos:238327324}" Jul 16 01:03:04.804347 containerd[2825]: time="2025-07-16T01:03:04.804303396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"2e902485657e9864ed537c504f5cdfdd0d775cb0667bb3a6c48f9cf75ce03152\" pid:10778 exited_at:{seconds:1752627784 nanos:804077515}" Jul 16 01:03:13.233546 containerd[2825]: time="2025-07-16T01:03:13.233500764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"2170b7646b8c7ce4f24e380817f9aea0a87058c1604ab8e9da096673751cbcf9\" pid:10818 exited_at:{seconds:1752627793 nanos:233343443}" Jul 16 01:03:24.869955 containerd[2825]: time="2025-07-16T01:03:24.869904180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"30fc3c87f4660ceba910d6485674e996c76ba0762eb8ec72881428729048b521\" pid:10841 exited_at:{seconds:1752627804 nanos:869668858}" Jul 16 01:03:34.805722 containerd[2825]: time="2025-07-16T01:03:34.805676994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"c9463c91de173e7ebdd7a533cbe2fc09cd1969b1854227328186b1638374f9c4\" pid:10876 exited_at:{seconds:1752627814 nanos:805474673}" Jul 16 01:03:42.338937 containerd[2825]: time="2025-07-16T01:03:42.338876798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" 
id:\"cd07be5876781a4ff08d93dc2bd57280bca6d2274b9f3791b6bd85e855b46bdf\" pid:10916 exited_at:{seconds:1752627822 nanos:338631997}" Jul 16 01:03:43.225486 containerd[2825]: time="2025-07-16T01:03:43.225452915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"4f9b0301fa2a75389b7195e49a28a27ec653d59a018dd1f4633ab9dc4e54fd71\" pid:10955 exited_at:{seconds:1752627823 nanos:225315994}" Jul 16 01:03:54.867486 containerd[2825]: time="2025-07-16T01:03:54.867428166Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"9aaae0f7e087cf6c64a618603316eea94fa4036f540ebe8e6e1d713dc6d9f8f0\" pid:10979 exited_at:{seconds:1752627834 nanos:867020884}" Jul 16 01:04:04.230225 containerd[2825]: time="2025-07-16T01:04:04.230180624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"1b6ef935f6fd71130eb9ff026322323a5966752515c5669b87b3677d16c843fb\" pid:11015 exited_at:{seconds:1752627844 nanos:230026183}" Jul 16 01:04:04.722458 systemd[1]: Started sshd@7-147.28.150.207:22-139.178.89.65:43840.service - OpenSSH per-connection server daemon (139.178.89.65:43840). Jul 16 01:04:04.786396 containerd[2825]: time="2025-07-16T01:04:04.786360280Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"bde8b84c4c9388272225da40929290a5f7e981ea1ea5fe494dd8939868c5ab8f\" pid:11044 exited_at:{seconds:1752627844 nanos:786116639}" Jul 16 01:04:05.123895 sshd[11030]: Accepted publickey for core from 139.178.89.65 port 43840 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:05.125161 sshd-session[11030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:05.129560 systemd-logind[2810]: New session 10 of user core. Jul 16 01:04:05.153724 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 16 01:04:05.475685 sshd[11074]: Connection closed by 139.178.89.65 port 43840 Jul 16 01:04:05.476045 sshd-session[11030]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:05.479178 systemd[1]: sshd@7-147.28.150.207:22-139.178.89.65:43840.service: Deactivated successfully. Jul 16 01:04:05.480808 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 01:04:05.481477 systemd-logind[2810]: Session 10 logged out. Waiting for processes to exit. Jul 16 01:04:05.482557 systemd-logind[2810]: Removed session 10. Jul 16 01:04:10.553351 systemd[1]: Started sshd@8-147.28.150.207:22-139.178.89.65:45598.service - OpenSSH per-connection server daemon (139.178.89.65:45598). Jul 16 01:04:10.965248 sshd[11123]: Accepted publickey for core from 139.178.89.65 port 45598 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:10.966627 sshd-session[11123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:10.969820 systemd-logind[2810]: New session 11 of user core. Jul 16 01:04:10.992724 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 16 01:04:11.312139 sshd[11125]: Connection closed by 139.178.89.65 port 45598 Jul 16 01:04:11.312451 sshd-session[11123]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:11.315511 systemd[1]: sshd@8-147.28.150.207:22-139.178.89.65:45598.service: Deactivated successfully. 
Jul 16 01:04:11.317717 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 01:04:11.318416 systemd-logind[2810]: Session 11 logged out. Waiting for processes to exit. Jul 16 01:04:11.319291 systemd-logind[2810]: Removed session 11. Jul 16 01:04:11.404629 systemd[1]: Started sshd@9-147.28.150.207:22-139.178.89.65:45614.service - OpenSSH per-connection server daemon (139.178.89.65:45614). Jul 16 01:04:11.832782 sshd[11160]: Accepted publickey for core from 139.178.89.65 port 45614 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:11.834124 sshd-session[11160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:11.837403 systemd-logind[2810]: New session 12 of user core. Jul 16 01:04:11.859667 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 16 01:04:12.203358 sshd[11162]: Connection closed by 139.178.89.65 port 45614 Jul 16 01:04:12.203728 sshd-session[11160]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:12.206768 systemd[1]: sshd@9-147.28.150.207:22-139.178.89.65:45614.service: Deactivated successfully. Jul 16 01:04:12.208974 systemd[1]: session-12.scope: Deactivated successfully. Jul 16 01:04:12.210136 systemd-logind[2810]: Session 12 logged out. Waiting for processes to exit. Jul 16 01:04:12.210902 systemd-logind[2810]: Removed session 12. Jul 16 01:04:12.280198 systemd[1]: Started sshd@10-147.28.150.207:22-139.178.89.65:45618.service - OpenSSH per-connection server daemon (139.178.89.65:45618). Jul 16 01:04:12.680801 sshd[11199]: Accepted publickey for core from 139.178.89.65 port 45618 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:12.681948 sshd-session[11199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:12.684990 systemd-logind[2810]: New session 13 of user core. Jul 16 01:04:12.694674 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 16 01:04:13.027221 sshd[11202]: Connection closed by 139.178.89.65 port 45618 Jul 16 01:04:13.027514 sshd-session[11199]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:13.030356 systemd[1]: sshd@10-147.28.150.207:22-139.178.89.65:45618.service: Deactivated successfully. Jul 16 01:04:13.031972 systemd[1]: session-13.scope: Deactivated successfully. Jul 16 01:04:13.032529 systemd-logind[2810]: Session 13 logged out. Waiting for processes to exit. Jul 16 01:04:13.033321 systemd-logind[2810]: Removed session 13. Jul 16 01:04:13.231626 containerd[2825]: time="2025-07-16T01:04:13.231592517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"2c400f2d5f6e86aa4b8c9cd632ae158ac1ca87d31c18591c6f2114ba420f24bb\" pid:11251 exited_at:{seconds:1752627853 nanos:231383796}" Jul 16 01:04:18.102493 systemd[1]: Started sshd@11-147.28.150.207:22-139.178.89.65:45634.service - OpenSSH per-connection server daemon (139.178.89.65:45634). Jul 16 01:04:18.501385 sshd[11269]: Accepted publickey for core from 139.178.89.65 port 45634 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:18.502715 sshd-session[11269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:18.505946 systemd-logind[2810]: New session 14 of user core. Jul 16 01:04:18.518697 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 16 01:04:18.846251 sshd[11271]: Connection closed by 139.178.89.65 port 45634 Jul 16 01:04:18.846517 sshd-session[11269]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:18.849537 systemd[1]: sshd@11-147.28.150.207:22-139.178.89.65:45634.service: Deactivated successfully. Jul 16 01:04:18.851192 systemd[1]: session-14.scope: Deactivated successfully. Jul 16 01:04:18.851803 systemd-logind[2810]: Session 14 logged out. Waiting for processes to exit. Jul 16 01:04:18.852604 systemd-logind[2810]: Removed session 14. Jul 16 01:04:23.923441 systemd[1]: Started sshd@12-147.28.150.207:22-139.178.89.65:46082.service - OpenSSH per-connection server daemon (139.178.89.65:46082). Jul 16 01:04:24.326783 sshd[11326]: Accepted publickey for core from 139.178.89.65 port 46082 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:24.327958 sshd-session[11326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:24.331026 systemd-logind[2810]: New session 15 of user core. Jul 16 01:04:24.343713 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 16 01:04:24.676247 sshd[11328]: Connection closed by 139.178.89.65 port 46082 Jul 16 01:04:24.676612 sshd-session[11326]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:24.679650 systemd[1]: sshd@12-147.28.150.207:22-139.178.89.65:46082.service: Deactivated successfully. Jul 16 01:04:24.681888 systemd[1]: session-15.scope: Deactivated successfully. Jul 16 01:04:24.682501 systemd-logind[2810]: Session 15 logged out. Waiting for processes to exit. Jul 16 01:04:24.683331 systemd-logind[2810]: Removed session 15. Jul 16 01:04:24.873934 containerd[2825]: time="2025-07-16T01:04:24.873898596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba423ac6cad646ef813be2c1a4d99fa016d2ccefc9217985d4aa7a5d52a87fcb\" id:\"b4e3e84e9d66dba44c7af7fcc74805fac317e51abe208e29c5f8e8ef350ac9a9\" pid:11373 exited_at:{seconds:1752627864 nanos:873655515}" Jul 16 01:04:29.752482 systemd[1]: Started sshd@13-147.28.150.207:22-139.178.89.65:35064.service - OpenSSH per-connection server daemon (139.178.89.65:35064). Jul 16 01:04:30.153241 sshd[11399]: Accepted publickey for core from 139.178.89.65 port 35064 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:30.154380 sshd-session[11399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:30.157479 systemd-logind[2810]: New session 16 of user core. Jul 16 01:04:30.171730 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 16 01:04:30.500267 sshd[11401]: Connection closed by 139.178.89.65 port 35064 Jul 16 01:04:30.500643 sshd-session[11399]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:30.503654 systemd[1]: sshd@13-147.28.150.207:22-139.178.89.65:35064.service: Deactivated successfully. Jul 16 01:04:30.505349 systemd[1]: session-16.scope: Deactivated successfully. Jul 16 01:04:30.505969 systemd-logind[2810]: Session 16 logged out. Waiting for processes to exit. Jul 16 01:04:30.506825 systemd-logind[2810]: Removed session 16. Jul 16 01:04:30.581365 systemd[1]: Started sshd@14-147.28.150.207:22-139.178.89.65:35072.service - OpenSSH per-connection server daemon (139.178.89.65:35072). 
Jul 16 01:04:30.979638 sshd[11439]: Accepted publickey for core from 139.178.89.65 port 35072 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:30.980832 sshd-session[11439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:30.983864 systemd-logind[2810]: New session 17 of user core. Jul 16 01:04:31.007669 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 16 01:04:31.535598 sshd[11441]: Connection closed by 139.178.89.65 port 35072 Jul 16 01:04:31.535986 sshd-session[11439]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:31.539180 systemd[1]: sshd@14-147.28.150.207:22-139.178.89.65:35072.service: Deactivated successfully. Jul 16 01:04:31.540870 systemd[1]: session-17.scope: Deactivated successfully. Jul 16 01:04:31.541441 systemd-logind[2810]: Session 17 logged out. Waiting for processes to exit. Jul 16 01:04:31.542263 systemd-logind[2810]: Removed session 17. Jul 16 01:04:31.607459 systemd[1]: Started sshd@15-147.28.150.207:22-139.178.89.65:35078.service - OpenSSH per-connection server daemon (139.178.89.65:35078). Jul 16 01:04:32.004120 sshd[11471]: Accepted publickey for core from 139.178.89.65 port 35078 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:32.005425 sshd-session[11471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:32.008822 systemd-logind[2810]: New session 18 of user core. Jul 16 01:04:32.024691 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 16 01:04:33.471034 sshd[11473]: Connection closed by 139.178.89.65 port 35078 Jul 16 01:04:33.471401 sshd-session[11471]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:33.474557 systemd[1]: sshd@15-147.28.150.207:22-139.178.89.65:35078.service: Deactivated successfully. Jul 16 01:04:33.476768 systemd[1]: session-18.scope: Deactivated successfully. Jul 16 01:04:33.477005 systemd[1]: session-18.scope: Consumed 2.898s CPU time, 119.5M memory peak. Jul 16 01:04:33.477375 systemd-logind[2810]: Session 18 logged out. Waiting for processes to exit. Jul 16 01:04:33.478175 systemd-logind[2810]: Removed session 18. Jul 16 01:04:33.543286 systemd[1]: Started sshd@16-147.28.150.207:22-139.178.89.65:35084.service - OpenSSH per-connection server daemon (139.178.89.65:35084). Jul 16 01:04:33.942186 sshd[11570]: Accepted publickey for core from 139.178.89.65 port 35084 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:33.943456 sshd-session[11570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:33.946769 systemd-logind[2810]: New session 19 of user core. Jul 16 01:04:33.970722 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 16 01:04:34.374406 sshd[11572]: Connection closed by 139.178.89.65 port 35084 Jul 16 01:04:34.374738 sshd-session[11570]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:34.377710 systemd[1]: sshd@16-147.28.150.207:22-139.178.89.65:35084.service: Deactivated successfully. Jul 16 01:04:34.379966 systemd[1]: session-19.scope: Deactivated successfully. Jul 16 01:04:34.380567 systemd-logind[2810]: Session 19 logged out. Waiting for processes to exit. Jul 16 01:04:34.381371 systemd-logind[2810]: Removed session 19. Jul 16 01:04:34.451373 systemd[1]: Started sshd@17-147.28.150.207:22-139.178.89.65:35088.service - OpenSSH per-connection server daemon (139.178.89.65:35088). 
Jul 16 01:04:34.795054 containerd[2825]: time="2025-07-16T01:04:34.795017232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"ef15b2b57c7dbf80b453367202a545c74ba0e8cde6c9469ef89034db8a4f8c3c\" pid:11636 exited_at:{seconds:1752627874 nanos:794776231}" Jul 16 01:04:34.854858 sshd[11623]: Accepted publickey for core from 139.178.89.65 port 35088 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:34.856183 sshd-session[11623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:34.859446 systemd-logind[2810]: New session 20 of user core. Jul 16 01:04:34.881697 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 16 01:04:35.201208 sshd[11662]: Connection closed by 139.178.89.65 port 35088 Jul 16 01:04:35.201633 sshd-session[11623]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:35.204828 systemd[1]: sshd@17-147.28.150.207:22-139.178.89.65:35088.service: Deactivated successfully. Jul 16 01:04:35.207022 systemd[1]: session-20.scope: Deactivated successfully. Jul 16 01:04:35.207662 systemd-logind[2810]: Session 20 logged out. Waiting for processes to exit. Jul 16 01:04:35.208475 systemd-logind[2810]: Removed session 20. Jul 16 01:04:40.284372 systemd[1]: Started sshd@18-147.28.150.207:22-139.178.89.65:44168.service - OpenSSH per-connection server daemon (139.178.89.65:44168). Jul 16 01:04:40.682247 sshd[11712]: Accepted publickey for core from 139.178.89.65 port 44168 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:40.683536 sshd-session[11712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:40.686705 systemd-logind[2810]: New session 21 of user core. Jul 16 01:04:40.710664 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 16 01:04:41.041589 sshd[11714]: Connection closed by 139.178.89.65 port 44168 Jul 16 01:04:41.041905 sshd-session[11712]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:41.044929 systemd[1]: sshd@18-147.28.150.207:22-139.178.89.65:44168.service: Deactivated successfully. Jul 16 01:04:41.047092 systemd[1]: session-21.scope: Deactivated successfully. Jul 16 01:04:41.047680 systemd-logind[2810]: Session 21 logged out. Waiting for processes to exit. Jul 16 01:04:41.048440 systemd-logind[2810]: Removed session 21. Jul 16 01:04:42.339540 containerd[2825]: time="2025-07-16T01:04:42.339498052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fe32b2de0cc1338cb15dbf3be79d214014559e6734729d05feee36e6977d4c4b\" id:\"046867a137475b01d6ca0ab79c2a8e5dfd92e2a904e196781a0c8cb02d5c810d\" pid:11763 exited_at:{seconds:1752627882 nanos:339309291}" Jul 16 01:04:43.224452 containerd[2825]: time="2025-07-16T01:04:43.224416984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e33619323f8f7be38b83cb0fd5533455feb64e2d1527a3fc76202947dd5ad469\" id:\"a237b4a9d8f6140f65b7e8097d2fe61119fdc9adcb53e4b779a51307825aad29\" pid:11804 exited_at:{seconds:1752627883 nanos:224172583}" Jul 16 01:04:46.117352 systemd[1]: Started sshd@19-147.28.150.207:22-139.178.89.65:44184.service - OpenSSH per-connection server daemon (139.178.89.65:44184). 
Jul 16 01:04:46.518051 sshd[11815]: Accepted publickey for core from 139.178.89.65 port 44184 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:46.519267 sshd-session[11815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:46.522427 systemd-logind[2810]: New session 22 of user core. Jul 16 01:04:46.545662 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 16 01:04:46.865438 sshd[11817]: Connection closed by 139.178.89.65 port 44184 Jul 16 01:04:46.865752 sshd-session[11815]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:46.868844 systemd[1]: sshd@19-147.28.150.207:22-139.178.89.65:44184.service: Deactivated successfully. Jul 16 01:04:46.871063 systemd[1]: session-22.scope: Deactivated successfully. Jul 16 01:04:46.871717 systemd-logind[2810]: Session 22 logged out. Waiting for processes to exit. Jul 16 01:04:46.872567 systemd-logind[2810]: Removed session 22. Jul 16 01:04:51.941429 systemd[1]: Started sshd@20-147.28.150.207:22-139.178.89.65:49832.service - OpenSSH per-connection server daemon (139.178.89.65:49832). Jul 16 01:04:52.342294 sshd[11858]: Accepted publickey for core from 139.178.89.65 port 49832 ssh2: RSA SHA256:/+Do+xNxL6kjd1UdR3qHKvMwB2hYBrmmb6HREL82QsY Jul 16 01:04:52.343449 sshd-session[11858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 01:04:52.346448 systemd-logind[2810]: New session 23 of user core. Jul 16 01:04:52.369729 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 16 01:04:52.688685 sshd[11860]: Connection closed by 139.178.89.65 port 49832 Jul 16 01:04:52.689046 sshd-session[11858]: pam_unix(sshd:session): session closed for user core Jul 16 01:04:52.692042 systemd[1]: sshd@20-147.28.150.207:22-139.178.89.65:49832.service: Deactivated successfully. Jul 16 01:04:52.694169 systemd[1]: session-23.scope: Deactivated successfully. Jul 16 01:04:52.694767 systemd-logind[2810]: Session 23 logged out. Waiting for processes to exit. Jul 16 01:04:52.695583 systemd-logind[2810]: Removed session 23.