Jul 7 01:07:58.201657 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1] Jul 7 01:07:58.201681 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:52:18 -00 2025 Jul 7 01:07:58.201689 kernel: KASLR enabled Jul 7 01:07:58.201695 kernel: efi: EFI v2.7 by American Megatrends Jul 7 01:07:58.201700 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea468818 RNG=0xebf10018 MEMRESERVE=0xe3fa0f98 Jul 7 01:07:58.201705 kernel: random: crng init done Jul 7 01:07:58.201712 kernel: secureboot: Secure boot disabled Jul 7 01:07:58.201718 kernel: esrt: Reserving ESRT space from 0x00000000ea468818 to 0x00000000ea468878. Jul 7 01:07:58.201725 kernel: ACPI: Early table checksum verification disabled Jul 7 01:07:58.201730 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere) Jul 7 01:07:58.201736 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013) Jul 7 01:07:58.201742 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509) Jul 7 01:07:58.201748 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717) Jul 7 01:07:58.201754 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509) Jul 7 01:07:58.201762 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509) Jul 7 01:07:58.201768 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509) Jul 7 01:07:58.201774 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 01:07:58.201780 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F) Jul 7 01:07:58.201786 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 01:07:58.201792 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013) Jul 7 01:07:58.201798 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013) Jul 7 01:07:58.201804 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013) Jul 7 01:07:58.201810 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013) Jul 7 01:07:58.201816 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013) Jul 7 01:07:58.201823 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013) Jul 7 01:07:58.201829 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 
01000013) Jul 7 01:07:58.201835 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013) Jul 7 01:07:58.201841 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013) Jul 7 01:07:58.201846 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200 Jul 7 01:07:58.201852 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 7 01:07:58.201858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff] Jul 7 01:07:58.201864 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff] Jul 7 01:07:58.201870 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff] Jul 7 01:07:58.201876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff] Jul 7 01:07:58.201882 kernel: NUMA: Initialized distance table, cnt=1 Jul 7 01:07:58.201889 kernel: NUMA: Node 0 [mem 0x88300000-0x883fffff] + [mem 0x90000000-0xffffffff] -> [mem 0x88300000-0xffffffff] Jul 7 01:07:58.201896 kernel: NUMA: Node 0 [mem 0x88300000-0xffffffff] + [mem 0x80000000000-0x8007fffffff] -> [mem 0x88300000-0x8007fffffff] Jul 7 01:07:58.201902 kernel: NUMA: Node 0 [mem 0x88300000-0x8007fffffff] + [mem 0x80100000000-0x83fffffffff] -> [mem 0x88300000-0x83fffffffff] Jul 7 01:07:58.201908 kernel: NODE_DATA(0) allocated [mem 0x83fdffd8dc0-0x83fdffdffff] Jul 7 01:07:58.201914 kernel: Zone ranges: Jul 7 01:07:58.201922 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff] Jul 7 01:07:58.201930 kernel: DMA32 empty Jul 7 01:07:58.201936 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff] Jul 7 01:07:58.201943 kernel: Device empty Jul 7 01:07:58.201949 kernel: Movable zone start for each node Jul 7 01:07:58.201955 kernel: Early memory node ranges Jul 7 01:07:58.201961 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff] Jul 7 01:07:58.201967 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff] Jul 7 01:07:58.201974 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff] Jul 7 01:07:58.201980 kernel: node 0: [mem 0x0000000094000000-0x00000000eba32fff] Jul 7 01:07:58.201986 kernel: node 0: [mem 0x00000000eba33000-0x00000000ebebffff] Jul 7 01:07:58.201993 kernel: node 0: [mem 0x00000000ebec0000-0x00000000ebec4fff] Jul 7 01:07:58.202000 kernel: node 0: [mem 0x00000000ebec5000-0x00000000ebeccfff] Jul 7 01:07:58.202006 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff] Jul 7 01:07:58.202012 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff] Jul 7 01:07:58.202019 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff] Jul 7 01:07:58.202025 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff] Jul 7 01:07:58.202031 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee53ffff] Jul 7 01:07:58.202037 kernel: node 0: [mem 0x00000000ee540000-0x00000000f765ffff] Jul 7 01:07:58.202043 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff] Jul 7 01:07:58.202050 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff] Jul 7 01:07:58.202056 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff] Jul 7 01:07:58.202062 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff] Jul 7 01:07:58.202070 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff] Jul 7 01:07:58.202076 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff] Jul 7 01:07:58.202082 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff] Jul 7 01:07:58.202089 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff] Jul 7 01:07:58.202095 kernel: On node 0, zone DMA: 768 pages in unavailable ranges Jul 7 01:07:58.202102 
kernel: On node 0, zone DMA: 31744 pages in unavailable ranges Jul 7 01:07:58.202108 kernel: psci: probing for conduit method from ACPI. Jul 7 01:07:58.202114 kernel: psci: PSCIv1.1 detected in firmware. Jul 7 01:07:58.202120 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 01:07:58.202127 kernel: psci: MIGRATE_INFO_TYPE not supported. Jul 7 01:07:58.202133 kernel: psci: SMC Calling Convention v1.2 Jul 7 01:07:58.202139 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 7 01:07:58.202147 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0 Jul 7 01:07:58.202153 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0 Jul 7 01:07:58.202159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0 Jul 7 01:07:58.202165 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0 Jul 7 01:07:58.202172 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0 Jul 7 01:07:58.202178 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0 Jul 7 01:07:58.202184 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0 Jul 7 01:07:58.202191 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0 Jul 7 01:07:58.202197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0 Jul 7 01:07:58.202203 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0 Jul 7 01:07:58.202209 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0 Jul 7 01:07:58.202217 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0 Jul 7 01:07:58.202223 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0 Jul 7 01:07:58.202230 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0 Jul 7 01:07:58.202236 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0 Jul 7 01:07:58.202242 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0 Jul 7 01:07:58.202248 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0 Jul 7 01:07:58.202255 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0 Jul 7 01:07:58.202261 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0 Jul 7 01:07:58.202267 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0 Jul 7 01:07:58.202273 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0 Jul 7 01:07:58.202279 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0 Jul 7 01:07:58.202286 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0 Jul 7 01:07:58.202293 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0 Jul 7 01:07:58.202300 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0 Jul 7 01:07:58.202306 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0 Jul 7 01:07:58.202312 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0 Jul 7 01:07:58.202318 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0 Jul 7 01:07:58.202325 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0 Jul 7 01:07:58.202331 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0 Jul 7 01:07:58.202337 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0 Jul 7 01:07:58.202343 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0 Jul 7 01:07:58.202349 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0 Jul 7 01:07:58.202356 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0 Jul 7 01:07:58.202363 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0 Jul 7 01:07:58.202369 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0 Jul 7 01:07:58.202376 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0 Jul 7 01:07:58.202382 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 
0x130000 -> Node 0 Jul 7 01:07:58.202388 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0 Jul 7 01:07:58.202394 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0 Jul 7 01:07:58.202400 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0 Jul 7 01:07:58.202407 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0 Jul 7 01:07:58.202413 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0 Jul 7 01:07:58.202425 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0 Jul 7 01:07:58.202433 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0 Jul 7 01:07:58.202439 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0 Jul 7 01:07:58.202446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0 Jul 7 01:07:58.202453 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0 Jul 7 01:07:58.202460 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0 Jul 7 01:07:58.202466 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0 Jul 7 01:07:58.202474 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0 Jul 7 01:07:58.202481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0 Jul 7 01:07:58.202492 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0 Jul 7 01:07:58.202499 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0 Jul 7 01:07:58.202506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0 Jul 7 01:07:58.202512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0 Jul 7 01:07:58.202519 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0 Jul 7 01:07:58.202525 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0 Jul 7 01:07:58.202532 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0 Jul 7 01:07:58.202539 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0 Jul 7 01:07:58.202545 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0 Jul 7 01:07:58.202552 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0 Jul 7 01:07:58.202560 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0 Jul 7 01:07:58.202567 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0 Jul 7 01:07:58.202573 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0 Jul 7 01:07:58.202580 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0 Jul 7 01:07:58.202587 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0 Jul 7 01:07:58.202593 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0 Jul 7 01:07:58.202600 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0 Jul 7 01:07:58.202606 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0 Jul 7 01:07:58.202613 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0 Jul 7 01:07:58.202620 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0 Jul 7 01:07:58.202626 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0 Jul 7 01:07:58.202633 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0 Jul 7 01:07:58.202641 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0 Jul 7 01:07:58.202648 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0 Jul 7 01:07:58.202654 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0 Jul 7 01:07:58.202661 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0 Jul 7 01:07:58.202667 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0 Jul 7 01:07:58.202674 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 7 01:07:58.202681 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 
Jul 7 01:07:58.202688 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07 Jul 7 01:07:58.202694 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15 Jul 7 01:07:58.202701 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23 Jul 7 01:07:58.202707 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31 Jul 7 01:07:58.202715 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39 Jul 7 01:07:58.202722 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47 Jul 7 01:07:58.202728 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55 Jul 7 01:07:58.202735 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63 Jul 7 01:07:58.202742 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71 Jul 7 01:07:58.202748 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79 Jul 7 01:07:58.202755 kernel: Detected PIPT I-cache on CPU0 Jul 7 01:07:58.202761 kernel: CPU features: detected: GIC system register CPU interface Jul 7 01:07:58.202768 kernel: CPU features: detected: Virtualization Host Extensions Jul 7 01:07:58.202775 kernel: CPU features: detected: Spectre-v4 Jul 7 01:07:58.202781 kernel: CPU features: detected: Spectre-BHB Jul 7 01:07:58.202789 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 7 01:07:58.202796 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 7 01:07:58.202803 kernel: CPU features: detected: ARM erratum 1418040 Jul 7 01:07:58.202810 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 7 01:07:58.202817 kernel: alternatives: applying boot alternatives Jul 7 01:07:58.202825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 7 01:07:58.202832 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 01:07:58.202838 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 7 01:07:58.202845 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes Jul 7 01:07:58.202852 kernel: printk: log_buf_len min size: 262144 bytes Jul 7 01:07:58.202858 kernel: printk: log_buf_len: 1048576 bytes Jul 7 01:07:58.202866 kernel: printk: early log buf free: 249440(95%) Jul 7 01:07:58.202873 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear) Jul 7 01:07:58.202880 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear) Jul 7 01:07:58.202886 kernel: Fallback order for Node 0: 0 Jul 7 01:07:58.202893 kernel: Built 1 zonelists, mobility grouping on. Total pages: 67043584 Jul 7 01:07:58.202900 kernel: Policy zone: Normal Jul 7 01:07:58.202906 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 01:07:58.202913 kernel: software IO TLB: area num 128. Jul 7 01:07:58.202920 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB) Jul 7 01:07:58.202927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1 Jul 7 01:07:58.202933 kernel: rcu: Preemptible hierarchical RCU implementation. 
Jul 7 01:07:58.202942 kernel: rcu: RCU event tracing is enabled. Jul 7 01:07:58.202949 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80. Jul 7 01:07:58.202955 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 01:07:58.202962 kernel: Tracing variant of Tasks RCU enabled. Jul 7 01:07:58.202969 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 01:07:58.202976 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80 Jul 7 01:07:58.202982 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jul 7 01:07:58.202989 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=80. Jul 7 01:07:58.202996 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 01:07:58.203002 kernel: GICv3: GIC: Using split EOI/Deactivate mode Jul 7 01:07:58.203009 kernel: GICv3: 672 SPIs implemented Jul 7 01:07:58.203016 kernel: GICv3: 0 Extended SPIs implemented Jul 7 01:07:58.203023 kernel: Root IRQ handler: gic_handle_irq Jul 7 01:07:58.203030 kernel: GICv3: GICv3 features: 16 PPIs Jul 7 01:07:58.203037 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=1 Jul 7 01:07:58.203044 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000 Jul 7 01:07:58.203050 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0 Jul 7 01:07:58.203057 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0 Jul 7 01:07:58.203064 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0 Jul 7 01:07:58.203070 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0 Jul 7 01:07:58.203077 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0 Jul 7 01:07:58.203083 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0 Jul 7 01:07:58.203090 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0 Jul 7 01:07:58.203096 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0 Jul 7 01:07:58.203104 kernel: ITS [mem 0x100100040000-0x10010005ffff] Jul 7 01:07:58.203111 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000340000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203118 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000350000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203124 kernel: ITS [mem 0x100100060000-0x10010007ffff] Jul 7 01:07:58.203131 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000370000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203138 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000380000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203145 kernel: ITS [mem 0x100100080000-0x10010009ffff] Jul 7 01:07:58.203152 kernel: ITS@0x0000100100080000: allocated 8192 Devices @800003a0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203158 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @800003b0000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203165 kernel: ITS [mem 0x1001000a0000-0x1001000bffff] Jul 7 01:07:58.203172 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @800003d0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203179 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800003e0000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203186 kernel: ITS [mem 0x1001000c0000-0x1001000dffff] Jul 7 01:07:58.203193 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @80000800000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203200 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @80000810000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203206 kernel: ITS [mem 0x1001000e0000-0x1001000fffff] Jul 7 01:07:58.203213 kernel: ITS@0x00001001000e0000: allocated 
8192 Devices @80000830000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203220 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000840000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203227 kernel: ITS [mem 0x100100100000-0x10010011ffff] Jul 7 01:07:58.203233 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000860000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203240 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000870000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203247 kernel: ITS [mem 0x100100120000-0x10010013ffff] Jul 7 01:07:58.203255 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000890000 (indirect, esz 8, psz 64K, shr 1) Jul 7 01:07:58.203262 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @800008a0000 (flat, esz 2, psz 64K, shr 1) Jul 7 01:07:58.203269 kernel: GICv3: using LPI property table @0x00000800008b0000 Jul 7 01:07:58.203275 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000800008c0000 Jul 7 01:07:58.203282 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 01:07:58.203289 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203296 kernel: ACPI GTDT: found 1 memory-mapped timer block(s). Jul 7 01:07:58.203302 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys). Jul 7 01:07:58.203309 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 7 01:07:58.203316 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 7 01:07:58.203323 kernel: Console: colour dummy device 80x25 Jul 7 01:07:58.203331 kernel: printk: legacy console [tty0] enabled Jul 7 01:07:58.203338 kernel: ACPI: Core revision 20240827 Jul 7 01:07:58.203345 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 7 01:07:58.203352 kernel: pid_max: default: 81920 minimum: 640 Jul 7 01:07:58.203358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 01:07:58.203365 kernel: landlock: Up and running. Jul 7 01:07:58.203372 kernel: SELinux: Initializing. Jul 7 01:07:58.203379 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.203386 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.203394 kernel: rcu: Hierarchical SRCU implementation. Jul 7 01:07:58.203401 kernel: rcu: Max phase no-delay instances is 400. Jul 7 01:07:58.203408 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jul 7 01:07:58.203415 kernel: Remapping and enabling EFI services. Jul 7 01:07:58.203422 kernel: smp: Bringing up secondary CPUs ... 
Jul 7 01:07:58.203429 kernel: Detected PIPT I-cache on CPU1 Jul 7 01:07:58.203435 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000 Jul 7 01:07:58.203442 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000800008d0000 Jul 7 01:07:58.203449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203456 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1] Jul 7 01:07:58.203464 kernel: Detected PIPT I-cache on CPU2 Jul 7 01:07:58.203471 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000 Jul 7 01:07:58.203478 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800008e0000 Jul 7 01:07:58.203485 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203494 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1] Jul 7 01:07:58.203500 kernel: Detected PIPT I-cache on CPU3 Jul 7 01:07:58.203507 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000 Jul 7 01:07:58.203514 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800008f0000 Jul 7 01:07:58.203521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203529 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1] Jul 7 01:07:58.203536 kernel: Detected PIPT I-cache on CPU4 Jul 7 01:07:58.203543 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000 Jul 7 01:07:58.203550 kernel: GICv3: CPU4: using allocated LPI pending table @0x0000080000900000 Jul 7 01:07:58.203557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203563 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1] Jul 7 01:07:58.203570 kernel: Detected PIPT I-cache on CPU5 Jul 7 01:07:58.203577 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000 Jul 7 01:07:58.203584 kernel: GICv3: CPU5: using allocated LPI pending table @0x0000080000910000 Jul 7 01:07:58.203591 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203599 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1] Jul 7 01:07:58.203606 kernel: Detected PIPT I-cache on CPU6 Jul 7 01:07:58.203612 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000 Jul 7 01:07:58.203619 kernel: GICv3: CPU6: using allocated LPI pending table @0x0000080000920000 Jul 7 01:07:58.203626 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203633 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1] Jul 7 01:07:58.203639 kernel: Detected PIPT I-cache on CPU7 Jul 7 01:07:58.203646 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000 Jul 7 01:07:58.203653 kernel: GICv3: CPU7: using allocated LPI pending table @0x0000080000930000 Jul 7 01:07:58.203661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203668 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1] Jul 7 01:07:58.203675 kernel: Detected PIPT I-cache on CPU8 Jul 7 01:07:58.203682 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000 Jul 7 01:07:58.203688 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000940000 Jul 7 01:07:58.203695 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203702 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1] Jul 7 01:07:58.203709 kernel: Detected PIPT I-cache on CPU9 Jul 7 
01:07:58.203715 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000 Jul 7 01:07:58.203722 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000950000 Jul 7 01:07:58.203730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203737 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1] Jul 7 01:07:58.203744 kernel: Detected PIPT I-cache on CPU10 Jul 7 01:07:58.203750 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000 Jul 7 01:07:58.203757 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000960000 Jul 7 01:07:58.203764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203771 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1] Jul 7 01:07:58.203778 kernel: Detected PIPT I-cache on CPU11 Jul 7 01:07:58.203785 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000 Jul 7 01:07:58.203793 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000970000 Jul 7 01:07:58.203800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203806 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1] Jul 7 01:07:58.203813 kernel: Detected PIPT I-cache on CPU12 Jul 7 01:07:58.203820 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000 Jul 7 01:07:58.203827 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000980000 Jul 7 01:07:58.203834 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203840 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1] Jul 7 01:07:58.203847 kernel: Detected PIPT I-cache on CPU13 Jul 7 01:07:58.203854 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000 Jul 7 01:07:58.203862 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000990000 Jul 7 01:07:58.203869 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203876 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1] Jul 7 01:07:58.203883 kernel: Detected PIPT I-cache on CPU14 Jul 7 01:07:58.203890 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000 Jul 7 01:07:58.203896 kernel: GICv3: CPU14: using allocated LPI pending table @0x00000800009a0000 Jul 7 01:07:58.203903 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203910 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1] Jul 7 01:07:58.203917 kernel: Detected PIPT I-cache on CPU15 Jul 7 01:07:58.203925 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000 Jul 7 01:07:58.203932 kernel: GICv3: CPU15: using allocated LPI pending table @0x00000800009b0000 Jul 7 01:07:58.203939 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203946 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1] Jul 7 01:07:58.203953 kernel: Detected PIPT I-cache on CPU16 Jul 7 01:07:58.203960 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000 Jul 7 01:07:58.203967 kernel: GICv3: CPU16: using allocated LPI pending table @0x00000800009c0000 Jul 7 01:07:58.203974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.203981 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1] Jul 7 01:07:58.203987 kernel: Detected PIPT I-cache on CPU17 Jul 7 01:07:58.203995 kernel: GICv3: CPU17: 
found redistributor 40000 region 0:0x0000100100240000 Jul 7 01:07:58.204002 kernel: GICv3: CPU17: using allocated LPI pending table @0x00000800009d0000 Jul 7 01:07:58.204009 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204016 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1] Jul 7 01:07:58.204023 kernel: Detected PIPT I-cache on CPU18 Jul 7 01:07:58.204029 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000 Jul 7 01:07:58.204037 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800009e0000 Jul 7 01:07:58.204052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204060 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1] Jul 7 01:07:58.204069 kernel: Detected PIPT I-cache on CPU19 Jul 7 01:07:58.204076 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000 Jul 7 01:07:58.204083 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800009f0000 Jul 7 01:07:58.204090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204097 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1] Jul 7 01:07:58.204104 kernel: Detected PIPT I-cache on CPU20 Jul 7 01:07:58.204112 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000 Jul 7 01:07:58.204120 kernel: GICv3: CPU20: using allocated LPI pending table @0x0000080000a00000 Jul 7 01:07:58.204127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204134 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1] Jul 7 01:07:58.204143 kernel: Detected PIPT I-cache on CPU21 Jul 7 01:07:58.204150 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000 Jul 7 01:07:58.204157 kernel: GICv3: CPU21: using allocated LPI pending table @0x0000080000a10000 Jul 7 01:07:58.204164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204172 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1] Jul 7 01:07:58.204181 kernel: Detected PIPT I-cache on CPU22 Jul 7 01:07:58.204188 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000 Jul 7 01:07:58.204195 kernel: GICv3: CPU22: using allocated LPI pending table @0x0000080000a20000 Jul 7 01:07:58.204203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204210 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1] Jul 7 01:07:58.204217 kernel: Detected PIPT I-cache on CPU23 Jul 7 01:07:58.204224 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000 Jul 7 01:07:58.204231 kernel: GICv3: CPU23: using allocated LPI pending table @0x0000080000a30000 Jul 7 01:07:58.204238 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204246 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1] Jul 7 01:07:58.204254 kernel: Detected PIPT I-cache on CPU24 Jul 7 01:07:58.204261 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000 Jul 7 01:07:58.204268 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000a40000 Jul 7 01:07:58.204276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204283 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1] Jul 7 01:07:58.204290 kernel: Detected PIPT I-cache on CPU25 Jul 7 01:07:58.204297 kernel: GICv3: CPU25: found redistributor 190000 region 
0:0x0000100100780000 Jul 7 01:07:58.204304 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000a50000 Jul 7 01:07:58.204311 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204319 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1] Jul 7 01:07:58.204327 kernel: Detected PIPT I-cache on CPU26 Jul 7 01:07:58.204334 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000 Jul 7 01:07:58.204341 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000a60000 Jul 7 01:07:58.204348 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204355 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1] Jul 7 01:07:58.204362 kernel: Detected PIPT I-cache on CPU27 Jul 7 01:07:58.204369 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000 Jul 7 01:07:58.204377 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000a70000 Jul 7 01:07:58.204385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204392 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1] Jul 7 01:07:58.204399 kernel: Detected PIPT I-cache on CPU28 Jul 7 01:07:58.204407 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000 Jul 7 01:07:58.204414 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000a80000 Jul 7 01:07:58.204421 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204428 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1] Jul 7 01:07:58.204435 kernel: Detected PIPT I-cache on CPU29 Jul 7 01:07:58.204442 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000 Jul 7 01:07:58.204449 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000a90000 Jul 7 01:07:58.204458 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204465 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1] Jul 7 01:07:58.204472 kernel: Detected PIPT I-cache on CPU30 Jul 7 01:07:58.204479 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000 Jul 7 01:07:58.204489 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000aa0000 Jul 7 01:07:58.204496 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204504 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1] Jul 7 01:07:58.204511 kernel: Detected PIPT I-cache on CPU31 Jul 7 01:07:58.204518 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000 Jul 7 01:07:58.204527 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000ab0000 Jul 7 01:07:58.204534 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204541 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1] Jul 7 01:07:58.204548 kernel: Detected PIPT I-cache on CPU32 Jul 7 01:07:58.204555 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000 Jul 7 01:07:58.204563 kernel: GICv3: CPU32: using allocated LPI pending table @0x0000080000ac0000 Jul 7 01:07:58.204570 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204577 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] Jul 7 01:07:58.204584 kernel: Detected PIPT I-cache on CPU33 Jul 7 01:07:58.204591 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 Jul 7 
01:07:58.204600 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000ad0000 Jul 7 01:07:58.204607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204614 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] Jul 7 01:07:58.204622 kernel: Detected PIPT I-cache on CPU34 Jul 7 01:07:58.204629 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 Jul 7 01:07:58.204636 kernel: GICv3: CPU34: using allocated LPI pending table @0x0000080000ae0000 Jul 7 01:07:58.204643 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204651 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] Jul 7 01:07:58.204658 kernel: Detected PIPT I-cache on CPU35 Jul 7 01:07:58.204666 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 Jul 7 01:07:58.204673 kernel: GICv3: CPU35: using allocated LPI pending table @0x0000080000af0000 Jul 7 01:07:58.204681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204688 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] Jul 7 01:07:58.204695 kernel: Detected PIPT I-cache on CPU36 Jul 7 01:07:58.204702 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 Jul 7 01:07:58.204710 kernel: GICv3: CPU36: using allocated LPI pending table @0x0000080000b00000 Jul 7 01:07:58.204717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204724 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] Jul 7 01:07:58.204731 kernel: Detected PIPT I-cache on CPU37 Jul 7 01:07:58.204740 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 Jul 7 01:07:58.204747 kernel: GICv3: CPU37: using allocated LPI pending table @0x0000080000b10000 Jul 7 01:07:58.204754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204761 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] Jul 7 01:07:58.204768 kernel: Detected PIPT I-cache on CPU38 Jul 7 01:07:58.204776 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 Jul 7 01:07:58.204783 kernel: GICv3: CPU38: using allocated LPI pending table @0x0000080000b20000 Jul 7 01:07:58.204790 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204797 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] Jul 7 01:07:58.204806 kernel: Detected PIPT I-cache on CPU39 Jul 7 01:07:58.204813 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 Jul 7 01:07:58.204820 kernel: GICv3: CPU39: using allocated LPI pending table @0x0000080000b30000 Jul 7 01:07:58.204828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204835 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] Jul 7 01:07:58.204842 kernel: Detected PIPT I-cache on CPU40 Jul 7 01:07:58.204849 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 Jul 7 01:07:58.204856 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000b40000 Jul 7 01:07:58.204865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204872 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] Jul 7 01:07:58.204879 kernel: Detected PIPT I-cache on CPU41 Jul 7 01:07:58.204886 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 Jul 7 01:07:58.204894 kernel: GICv3: CPU41: 
using allocated LPI pending table @0x0000080000b50000 Jul 7 01:07:58.204901 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204908 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] Jul 7 01:07:58.204915 kernel: Detected PIPT I-cache on CPU42 Jul 7 01:07:58.204922 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 Jul 7 01:07:58.204929 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000b60000 Jul 7 01:07:58.204938 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204945 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] Jul 7 01:07:58.204952 kernel: Detected PIPT I-cache on CPU43 Jul 7 01:07:58.204959 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 Jul 7 01:07:58.204966 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000b70000 Jul 7 01:07:58.204974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.204981 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] Jul 7 01:07:58.204988 kernel: Detected PIPT I-cache on CPU44 Jul 7 01:07:58.204995 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 Jul 7 01:07:58.205004 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000b80000 Jul 7 01:07:58.205011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205018 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] Jul 7 01:07:58.205025 kernel: Detected PIPT I-cache on CPU45 Jul 7 01:07:58.205033 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 Jul 7 01:07:58.205041 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000b90000 Jul 7 01:07:58.205049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205057 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] Jul 7 01:07:58.205064 kernel: Detected PIPT I-cache on CPU46 Jul 7 01:07:58.205071 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 Jul 7 01:07:58.205079 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000ba0000 Jul 7 01:07:58.205087 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205094 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] Jul 7 01:07:58.205101 kernel: Detected PIPT I-cache on CPU47 Jul 7 01:07:58.205108 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 Jul 7 01:07:58.205115 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000bb0000 Jul 7 01:07:58.205122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205130 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] Jul 7 01:07:58.205137 kernel: Detected PIPT I-cache on CPU48 Jul 7 01:07:58.205145 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 Jul 7 01:07:58.205152 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000bc0000 Jul 7 01:07:58.205159 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205167 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] Jul 7 01:07:58.205174 kernel: Detected PIPT I-cache on CPU49 Jul 7 01:07:58.205181 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 Jul 7 01:07:58.205188 kernel: GICv3: CPU49: using allocated LPI pending table 
@0x0000080000bd0000 Jul 7 01:07:58.205195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205202 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] Jul 7 01:07:58.205210 kernel: Detected PIPT I-cache on CPU50 Jul 7 01:07:58.205218 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 Jul 7 01:07:58.205225 kernel: GICv3: CPU50: using allocated LPI pending table @0x0000080000be0000 Jul 7 01:07:58.205232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205239 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] Jul 7 01:07:58.205247 kernel: Detected PIPT I-cache on CPU51 Jul 7 01:07:58.205254 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 Jul 7 01:07:58.205261 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000bf0000 Jul 7 01:07:58.205268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205275 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] Jul 7 01:07:58.205284 kernel: Detected PIPT I-cache on CPU52 Jul 7 01:07:58.205291 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 Jul 7 01:07:58.205298 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000c00000 Jul 7 01:07:58.205306 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205313 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] Jul 7 01:07:58.205320 kernel: Detected PIPT I-cache on CPU53 Jul 7 01:07:58.205327 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 Jul 7 01:07:58.205335 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000c10000 Jul 7 01:07:58.205342 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205350 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] Jul 7 01:07:58.205358 kernel: Detected PIPT I-cache on CPU54 Jul 7 01:07:58.205365 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 Jul 7 01:07:58.205372 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000c20000 Jul 7 01:07:58.205379 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205386 kernel: CPU54: Booted secondary processor 0x00000e0100 [0x413fd0c1] Jul 7 01:07:58.205393 kernel: Detected PIPT I-cache on CPU55 Jul 7 01:07:58.205400 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 Jul 7 01:07:58.205408 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000c30000 Jul 7 01:07:58.205415 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205423 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] Jul 7 01:07:58.205430 kernel: Detected PIPT I-cache on CPU56 Jul 7 01:07:58.205438 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 Jul 7 01:07:58.205445 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000c40000 Jul 7 01:07:58.205452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205459 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] Jul 7 01:07:58.205466 kernel: Detected PIPT I-cache on CPU57 Jul 7 01:07:58.205473 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 Jul 7 01:07:58.205480 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000c50000 Jul 7 
01:07:58.205491 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205499 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] Jul 7 01:07:58.205506 kernel: Detected PIPT I-cache on CPU58 Jul 7 01:07:58.205513 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 Jul 7 01:07:58.205520 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000c60000 Jul 7 01:07:58.205528 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205535 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] Jul 7 01:07:58.205542 kernel: Detected PIPT I-cache on CPU59 Jul 7 01:07:58.205549 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 Jul 7 01:07:58.205556 kernel: GICv3: CPU59: using allocated LPI pending table @0x0000080000c70000 Jul 7 01:07:58.205565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205572 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] Jul 7 01:07:58.205579 kernel: Detected PIPT I-cache on CPU60 Jul 7 01:07:58.205586 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 Jul 7 01:07:58.205593 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000c80000 Jul 7 01:07:58.205601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205608 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] Jul 7 01:07:58.205615 kernel: Detected PIPT I-cache on CPU61 Jul 7 01:07:58.205622 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 Jul 7 01:07:58.205630 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000c90000 Jul 7 01:07:58.205638 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205645 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] Jul 7 01:07:58.205652 kernel: Detected PIPT I-cache on CPU62 Jul 7 01:07:58.205659 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 Jul 7 01:07:58.205666 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000ca0000 Jul 7 01:07:58.205674 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205681 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] Jul 7 01:07:58.205688 kernel: Detected PIPT I-cache on CPU63 Jul 7 01:07:58.205695 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 Jul 7 01:07:58.205704 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000cb0000 Jul 7 01:07:58.205711 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205718 kernel: CPU63: Booted secondary processor 0x00001d0100 [0x413fd0c1] Jul 7 01:07:58.205725 kernel: Detected PIPT I-cache on CPU64 Jul 7 01:07:58.205732 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 Jul 7 01:07:58.205739 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000cc0000 Jul 7 01:07:58.205746 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205754 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] Jul 7 01:07:58.205761 kernel: Detected PIPT I-cache on CPU65 Jul 7 01:07:58.205769 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 Jul 7 01:07:58.205776 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000cd0000 Jul 7 01:07:58.205784 kernel: arch_timer: 
Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205791 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] Jul 7 01:07:58.205798 kernel: Detected PIPT I-cache on CPU66 Jul 7 01:07:58.205805 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 Jul 7 01:07:58.205812 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ce0000 Jul 7 01:07:58.205820 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205827 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] Jul 7 01:07:58.205834 kernel: Detected PIPT I-cache on CPU67 Jul 7 01:07:58.205842 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 Jul 7 01:07:58.205849 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000cf0000 Jul 7 01:07:58.205857 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205864 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] Jul 7 01:07:58.205871 kernel: Detected PIPT I-cache on CPU68 Jul 7 01:07:58.205878 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 Jul 7 01:07:58.205885 kernel: GICv3: CPU68: using allocated LPI pending table @0x0000080000d00000 Jul 7 01:07:58.205893 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205900 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] Jul 7 01:07:58.205908 kernel: Detected PIPT I-cache on CPU69 Jul 7 01:07:58.205915 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 Jul 7 01:07:58.205923 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000d10000 Jul 7 01:07:58.205930 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205937 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] Jul 7 01:07:58.205944 kernel: Detected PIPT I-cache on CPU70 Jul 7 01:07:58.205951 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 Jul 7 01:07:58.205959 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000d20000 Jul 7 01:07:58.205966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.205973 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] Jul 7 01:07:58.205981 kernel: Detected PIPT I-cache on CPU71 Jul 7 01:07:58.205989 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 Jul 7 01:07:58.205996 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000d30000 Jul 7 01:07:58.206003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206010 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] Jul 7 01:07:58.206017 kernel: Detected PIPT I-cache on CPU72 Jul 7 01:07:58.206024 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 Jul 7 01:07:58.206032 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000d40000 Jul 7 01:07:58.206039 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206047 kernel: CPU72: Booted secondary processor 0x0000090100 [0x413fd0c1] Jul 7 01:07:58.206054 kernel: Detected PIPT I-cache on CPU73 Jul 7 01:07:58.206062 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 Jul 7 01:07:58.206069 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000d50000 Jul 7 01:07:58.206076 kernel: arch_timer: Enabling local workaround for ARM 
erratum 1418040 Jul 7 01:07:58.206083 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] Jul 7 01:07:58.206090 kernel: Detected PIPT I-cache on CPU74 Jul 7 01:07:58.206098 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 Jul 7 01:07:58.206105 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000d60000 Jul 7 01:07:58.206113 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206120 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] Jul 7 01:07:58.206127 kernel: Detected PIPT I-cache on CPU75 Jul 7 01:07:58.206134 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 Jul 7 01:07:58.206141 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000d70000 Jul 7 01:07:58.206149 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206156 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] Jul 7 01:07:58.206163 kernel: Detected PIPT I-cache on CPU76 Jul 7 01:07:58.206170 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 Jul 7 01:07:58.206177 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000d80000 Jul 7 01:07:58.206186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206193 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] Jul 7 01:07:58.206200 kernel: Detected PIPT I-cache on CPU77 Jul 7 01:07:58.206207 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 Jul 7 01:07:58.206214 kernel: GICv3: CPU77: using allocated LPI pending table @0x0000080000d90000 Jul 7 01:07:58.206221 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206228 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] Jul 7 01:07:58.206236 kernel: Detected PIPT I-cache on CPU78 Jul 7 01:07:58.206243 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 Jul 7 01:07:58.206251 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000da0000 Jul 7 01:07:58.206259 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206266 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] Jul 7 01:07:58.206273 kernel: Detected PIPT I-cache on CPU79 Jul 7 01:07:58.206280 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 Jul 7 01:07:58.206287 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000db0000 Jul 7 01:07:58.206294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 7 01:07:58.206301 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] Jul 7 01:07:58.206308 kernel: smp: Brought up 1 node, 80 CPUs Jul 7 01:07:58.206316 kernel: SMP: Total of 80 processors activated. 
Jul 7 01:07:58.206324 kernel: CPU: All CPU(s) started at EL2 Jul 7 01:07:58.206331 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 01:07:58.206339 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 7 01:07:58.206346 kernel: CPU features: detected: Common not Private translations Jul 7 01:07:58.206353 kernel: CPU features: detected: CRC32 instructions Jul 7 01:07:58.206360 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 7 01:07:58.206368 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 7 01:07:58.206375 kernel: CPU features: detected: LSE atomic instructions Jul 7 01:07:58.206382 kernel: CPU features: detected: Privileged Access Never Jul 7 01:07:58.206391 kernel: CPU features: detected: RAS Extension Support Jul 7 01:07:58.206398 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 7 01:07:58.206405 kernel: alternatives: applying system-wide alternatives Jul 7 01:07:58.206412 kernel: CPU features: detected: Hardware dirty bit management on CPU0-79 Jul 7 01:07:58.206420 kernel: Memory: 262860132K/268174336K available (11072K kernel code, 2428K rwdata, 9032K rodata, 39424K init, 1035K bss, 5254660K reserved, 0K cma-reserved) Jul 7 01:07:58.206427 kernel: devtmpfs: initialized Jul 7 01:07:58.206434 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 01:07:58.206442 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.206450 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 7 01:07:58.206457 kernel: 0 pages in range for non-PLT usage Jul 7 01:07:58.206464 kernel: 508480 pages in range for PLT usage Jul 7 01:07:58.206471 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 01:07:58.206479 kernel: SMBIOS 3.4.0 present. Jul 7 01:07:58.206488 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 Jul 7 01:07:58.206495 kernel: DMI: Memory slots populated: 8/16 Jul 7 01:07:58.206503 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 01:07:58.206510 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations Jul 7 01:07:58.206517 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 01:07:58.206526 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 01:07:58.206533 kernel: audit: initializing netlink subsys (disabled) Jul 7 01:07:58.206540 kernel: audit: type=2000 audit(0.068:1): state=initialized audit_enabled=0 res=1 Jul 7 01:07:58.206547 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 01:07:58.206554 kernel: cpuidle: using governor menu Jul 7 01:07:58.206561 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 01:07:58.206569 kernel: ASID allocator initialised with 32768 entries Jul 7 01:07:58.206576 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 01:07:58.206584 kernel: Serial: AMBA PL011 UART driver Jul 7 01:07:58.206591 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 01:07:58.206599 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 01:07:58.206606 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 01:07:58.206613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 01:07:58.206620 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 01:07:58.206627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 01:07:58.206634 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 01:07:58.206642 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 01:07:58.206650 kernel: ACPI: Added _OSI(Module Device) Jul 7 01:07:58.206657 kernel: ACPI: Added _OSI(Processor Device) Jul 7 01:07:58.206664 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 01:07:58.206672 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded Jul 7 01:07:58.206679 kernel: ACPI: Interpreter enabled Jul 7 01:07:58.206686 kernel: ACPI: Using GIC for interrupt routing Jul 7 01:07:58.206693 kernel: ACPI: MCFG table detected, 8 entries Jul 7 01:07:58.206700 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206707 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206714 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206723 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206730 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206737 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206744 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206751 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 Jul 7 01:07:58.206759 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA Jul 7 01:07:58.206766 kernel: printk: legacy console [ttyAMA0] enabled Jul 7 01:07:58.206773 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA Jul 7 01:07:58.206782 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) Jul 7 01:07:58.206910 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.206974 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.207031 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.207086 kernel: acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.207141 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.207196 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] Jul 7 01:07:58.207208 kernel: PCI host bridge to bus 000d:00 Jul 7 01:07:58.207274 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] Jul 7 01:07:58.207326 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] Jul 7 01:07:58.207377 kernel: pci_bus 000d:00: root bus resource [bus 
00-ff] Jul 7 01:07:58.207526 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.207601 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.207667 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.207728 kernel: pci 000d:00:01.0: enabling Extended Tags Jul 7 01:07:58.207798 kernel: pci 000d:00:01.0: supports D1 D2 Jul 7 01:07:58.207856 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.207922 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.207980 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.208037 kernel: pci 000d:00:02.0: supports D1 D2 Jul 7 01:07:58.208096 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.208161 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.208218 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.208275 kernel: pci 000d:00:03.0: supports D1 D2 Jul 7 01:07:58.208332 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.208398 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.208455 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.208534 kernel: pci 000d:00:04.0: supports D1 D2 Jul 7 01:07:58.208592 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.208602 kernel: acpiphp: Slot [1] registered Jul 7 01:07:58.208609 kernel: acpiphp: Slot [2] registered Jul 7 01:07:58.208616 kernel: acpiphp: Slot [3] registered Jul 7 01:07:58.208624 kernel: acpiphp: Slot [4] registered Jul 7 01:07:58.208675 kernel: pci_bus 000d:00: on NUMA node 0 Jul 7 01:07:58.208734 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.208795 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.208852 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.208911 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.208968 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.209026 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.209083 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.209140 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.209199 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.209257 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.209316 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.209374 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.209432 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff]: assigned Jul 7 01:07:58.209492 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref]: assigned Jul 7 01:07:58.209550 kernel: pci 000d:00:02.0: 
bridge window [mem 0x50200000-0x503fffff]: assigned Jul 7 01:07:58.209610 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref]: assigned Jul 7 01:07:58.209667 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff]: assigned Jul 7 01:07:58.209723 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref]: assigned Jul 7 01:07:58.209780 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff]: assigned Jul 7 01:07:58.209838 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref]: assigned Jul 7 01:07:58.209895 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.209951 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210008 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210067 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210125 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210182 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210239 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210296 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210353 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210411 kernel: pci 000d:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210469 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210529 kernel: pci 000d:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210587 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210644 kernel: pci 000d:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210701 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.210758 kernel: pci 000d:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.210815 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.210872 kernel: pci 000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] Jul 7 01:07:58.210932 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 01:07:58.210990 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.211048 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] Jul 7 01:07:58.211105 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 01:07:58.211163 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.211220 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] Jul 7 01:07:58.211278 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 01:07:58.211336 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.211393 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] Jul 7 01:07:58.211449 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 01:07:58.211505 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] Jul 7 01:07:58.211555 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] Jul 7 01:07:58.211619 kernel: pci_bus 000d:01: resource 1 [mem 
0x50000000-0x501fffff] Jul 7 01:07:58.211675 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] Jul 7 01:07:58.211736 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] Jul 7 01:07:58.211790 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] Jul 7 01:07:58.211858 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] Jul 7 01:07:58.211911 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] Jul 7 01:07:58.211970 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] Jul 7 01:07:58.212025 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] Jul 7 01:07:58.212035 kernel: ACPI: PCI Root Bridge [PCI3] (domain 0000 [bus 00-ff]) Jul 7 01:07:58.212098 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.212154 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.212209 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.212263 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.212319 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.212374 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] Jul 7 01:07:58.212384 kernel: PCI host bridge to bus 0000:00 Jul 7 01:07:58.212442 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] Jul 7 01:07:58.212500 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 01:07:58.212556 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 01:07:58.212626 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.212697 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.212755 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.212813 kernel: pci 0000:00:01.0: enabling Extended Tags Jul 7 01:07:58.212872 kernel: pci 0000:00:01.0: supports D1 D2 Jul 7 01:07:58.212929 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.212994 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.213054 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.213116 kernel: pci 0000:00:02.0: supports D1 D2 Jul 7 01:07:58.213177 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.213246 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.213306 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.213366 kernel: pci 0000:00:03.0: supports D1 D2 Jul 7 01:07:58.213426 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.213500 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.213561 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.213630 kernel: pci 0000:00:04.0: supports D1 D2 Jul 7 01:07:58.213696 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.213706 kernel: acpiphp: Slot [1-1] registered Jul 7 01:07:58.213713 kernel: acpiphp: Slot [2-1] registered Jul 7 01:07:58.213721 kernel: acpiphp: Slot [3-1] registered Jul 7 01:07:58.213728 kernel: acpiphp: Slot [4-1] registered Jul 7 01:07:58.213778 kernel: pci_bus 0000:00: on NUMA node 0 Jul 7 
01:07:58.213836 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.213897 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.213954 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.214012 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.214069 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.214127 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.214185 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.214243 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.214300 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.214358 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.214414 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.214472 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.214534 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff]: assigned Jul 7 01:07:58.214591 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref]: assigned Jul 7 01:07:58.214650 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff]: assigned Jul 7 01:07:58.214707 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref]: assigned Jul 7 01:07:58.214764 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff]: assigned Jul 7 01:07:58.214821 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref]: assigned Jul 7 01:07:58.214878 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff]: assigned Jul 7 01:07:58.214935 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref]: assigned Jul 7 01:07:58.214992 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215050 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215110 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215167 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215225 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215283 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215339 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215397 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215454 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215514 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215574 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215632 kernel: 
pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215690 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215747 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215804 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.215863 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.215920 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.215977 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] Jul 7 01:07:58.216034 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 01:07:58.216091 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.216149 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] Jul 7 01:07:58.216206 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 01:07:58.216265 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.216322 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] Jul 7 01:07:58.216380 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 01:07:58.216437 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.216497 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] Jul 7 01:07:58.216555 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 01:07:58.216609 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] Jul 7 01:07:58.216660 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] Jul 7 01:07:58.216721 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] Jul 7 01:07:58.216775 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] Jul 7 01:07:58.216835 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] Jul 7 01:07:58.216890 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] Jul 7 01:07:58.216956 kernel: pci_bus 0000:03: resource 1 [mem 0x70400000-0x705fffff] Jul 7 01:07:58.217012 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] Jul 7 01:07:58.217072 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] Jul 7 01:07:58.217124 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] Jul 7 01:07:58.217134 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) Jul 7 01:07:58.217195 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.217251 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.217308 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.217362 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.217416 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.217470 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] Jul 7 01:07:58.217480 kernel: PCI host bridge to bus 0005:00 Jul 7 01:07:58.217542 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] Jul 7 01:07:58.217593 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 01:07:58.217645 
kernel: pci_bus 0005:00: root bus resource [bus 00-ff] Jul 7 01:07:58.217709 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.217774 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.217832 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.217889 kernel: pci 0005:00:01.0: supports D1 D2 Jul 7 01:07:58.217946 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.218011 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.218071 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.218128 kernel: pci 0005:00:03.0: supports D1 D2 Jul 7 01:07:58.218186 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.218249 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.218307 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 01:07:58.218363 kernel: pci 0005:00:05.0: bridge window [mem 0x30100000-0x301fffff] Jul 7 01:07:58.218420 kernel: pci 0005:00:05.0: supports D1 D2 Jul 7 01:07:58.218479 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.218545 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.218604 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 01:07:58.218661 kernel: pci 0005:00:07.0: bridge window [mem 0x30000000-0x300fffff] Jul 7 01:07:58.218717 kernel: pci 0005:00:07.0: supports D1 D2 Jul 7 01:07:58.218774 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.218784 kernel: acpiphp: Slot [1-2] registered Jul 7 01:07:58.218793 kernel: acpiphp: Slot [2-2] registered Jul 7 01:07:58.218856 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 01:07:58.218918 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30110000-0x30113fff 64bit] Jul 7 01:07:58.218977 kernel: pci 0005:03:00.0: ROM [mem 0x30100000-0x3010ffff pref] Jul 7 01:07:58.219044 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 PCIe Endpoint Jul 7 01:07:58.219103 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30010000-0x30013fff 64bit] Jul 7 01:07:58.219162 kernel: pci 0005:04:00.0: ROM [mem 0x30000000-0x3000ffff pref] Jul 7 01:07:58.219216 kernel: pci_bus 0005:00: on NUMA node 0 Jul 7 01:07:58.219273 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.219330 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.219388 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.219445 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.219507 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.219564 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.219625 kernel: pci 0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.219683 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.219741 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 01:07:58.219799 kernel: pci 0005:00:07.0: bridge window [io 
0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.219856 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.219914 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 Jul 7 01:07:58.219971 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff]: assigned Jul 7 01:07:58.220030 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref]: assigned Jul 7 01:07:58.220087 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff]: assigned Jul 7 01:07:58.220144 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref]: assigned Jul 7 01:07:58.220202 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff]: assigned Jul 7 01:07:58.220259 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref]: assigned Jul 7 01:07:58.220316 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff]: assigned Jul 7 01:07:58.220373 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref]: assigned Jul 7 01:07:58.220430 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.220492 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.220551 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.220609 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.220666 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.220723 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.220780 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.220837 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.220894 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.220953 kernel: pci 0005:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.221011 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.221068 kernel: pci 0005:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.221125 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.221182 kernel: pci 0005:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.221239 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.221295 kernel: pci 0005:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.221354 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.221411 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] Jul 7 01:07:58.221469 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 01:07:58.221531 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.221589 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] Jul 7 01:07:58.221646 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 01:07:58.221706 kernel: pci 0005:03:00.0: ROM [mem 0x30400000-0x3040ffff pref]: assigned Jul 7 01:07:58.221767 kernel: pci 0005:03:00.0: BAR 0 [mem 0x30410000-0x30413fff 64bit]: assigned Jul 7 01:07:58.221824 
kernel: pci 0005:00:05.0: PCI bridge to [bus 03] Jul 7 01:07:58.221881 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] Jul 7 01:07:58.221937 kernel: pci 0005:00:05.0: bridge window [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 01:07:58.221997 kernel: pci 0005:04:00.0: ROM [mem 0x30600000-0x3060ffff pref]: assigned Jul 7 01:07:58.222055 kernel: pci 0005:04:00.0: BAR 0 [mem 0x30610000-0x30613fff 64bit]: assigned Jul 7 01:07:58.222113 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] Jul 7 01:07:58.222169 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] Jul 7 01:07:58.222228 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 01:07:58.222280 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] Jul 7 01:07:58.222331 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] Jul 7 01:07:58.222391 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] Jul 7 01:07:58.222445 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] Jul 7 01:07:58.222515 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] Jul 7 01:07:58.222569 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] Jul 7 01:07:58.222630 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] Jul 7 01:07:58.222683 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] Jul 7 01:07:58.222744 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] Jul 7 01:07:58.222798 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] Jul 7 01:07:58.222808 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) Jul 7 01:07:58.222870 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.222926 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.222982 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.223037 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.223091 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.223145 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] Jul 7 01:07:58.223155 kernel: PCI host bridge to bus 0003:00 Jul 7 01:07:58.223214 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] Jul 7 01:07:58.223265 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] Jul 7 01:07:58.223315 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] Jul 7 01:07:58.223379 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.223444 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.223507 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.223566 kernel: pci 0003:00:01.0: supports D1 D2 Jul 7 01:07:58.223623 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.223689 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.223746 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.223803 kernel: pci 0003:00:03.0: supports D1 D2 Jul 7 01:07:58.223860 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.223925 kernel: pci 0003:00:05.0: 
[1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.223982 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 7 01:07:58.224041 kernel: pci 0003:00:05.0: bridge window [io 0x0000-0x0fff] Jul 7 01:07:58.224098 kernel: pci 0003:00:05.0: bridge window [mem 0x10000000-0x100fffff] Jul 7 01:07:58.224155 kernel: pci 0003:00:05.0: bridge window [mem 0x240000000000-0x2400000fffff 64bit pref] Jul 7 01:07:58.224212 kernel: pci 0003:00:05.0: supports D1 D2 Jul 7 01:07:58.224270 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.224279 kernel: acpiphp: Slot [1-3] registered Jul 7 01:07:58.224287 kernel: acpiphp: Slot [2-3] registered Jul 7 01:07:58.224353 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 01:07:58.224413 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10020000-0x1003ffff] Jul 7 01:07:58.224471 kernel: pci 0003:03:00.0: BAR 2 [io 0x0020-0x003f] Jul 7 01:07:58.224534 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10044000-0x10047fff] Jul 7 01:07:58.224593 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 01:07:58.224651 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x240000063fff 64bit pref] Jul 7 01:07:58.224710 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000060000-0x24000007ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 01:07:58.224770 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x240000043fff 64bit pref] Jul 7 01:07:58.224828 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000040000-0x24000005ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 01:07:58.224887 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) Jul 7 01:07:58.224952 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 PCIe Endpoint Jul 7 01:07:58.225011 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10000000-0x1001ffff] Jul 7 01:07:58.225070 kernel: pci 0003:03:00.1: BAR 2 [io 0x0000-0x001f] Jul 7 01:07:58.225128 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10040000-0x10043fff] Jul 7 01:07:58.225187 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold Jul 7 01:07:58.225246 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x240000023fff 64bit pref] Jul 7 01:07:58.225307 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000020000-0x24000003ffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 01:07:58.225374 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x240000003fff 64bit pref] Jul 7 01:07:58.225434 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000000000-0x24000001ffff 64bit pref]: contains BAR 3 for 8 VFs Jul 7 01:07:58.225490 kernel: pci_bus 0003:00: on NUMA node 0 Jul 7 01:07:58.225549 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.225607 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.225666 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.225726 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.225784 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.225842 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.225900 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 
64bit pref] to [bus 03-04] add_size 300000 add_align 100000 Jul 7 01:07:58.225957 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 Jul 7 01:07:58.226014 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jul 7 01:07:58.226074 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref]: assigned Jul 7 01:07:58.226131 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff]: assigned Jul 7 01:07:58.226189 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref]: assigned Jul 7 01:07:58.226246 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff]: assigned Jul 7 01:07:58.226303 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref]: assigned Jul 7 01:07:58.226360 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226418 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.226474 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226537 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.226595 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226654 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.226711 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226768 kernel: pci 0003:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.226825 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226882 kernel: pci 0003:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.226940 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.226998 kernel: pci 0003:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.227055 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.227113 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jul 7 01:07:58.227170 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 01:07:58.227227 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.227287 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] Jul 7 01:07:58.227344 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 01:07:58.227403 kernel: pci 0003:03:00.0: BAR 0 [mem 0x10400000-0x1041ffff]: assigned Jul 7 01:07:58.227463 kernel: pci 0003:03:00.1: BAR 0 [mem 0x10420000-0x1043ffff]: assigned Jul 7 01:07:58.227526 kernel: pci 0003:03:00.0: BAR 3 [mem 0x10440000-0x10443fff]: assigned Jul 7 01:07:58.227585 kernel: pci 0003:03:00.0: VF BAR 0 [mem 0x240000400000-0x24000041ffff 64bit pref]: assigned Jul 7 01:07:58.227644 kernel: pci 0003:03:00.0: VF BAR 3 [mem 0x240000420000-0x24000043ffff 64bit pref]: assigned Jul 7 01:07:58.227703 kernel: pci 0003:03:00.1: BAR 3 [mem 0x10444000-0x10447fff]: assigned Jul 7 01:07:58.227764 kernel: pci 0003:03:00.1: VF BAR 0 [mem 0x240000440000-0x24000045ffff 64bit pref]: assigned Jul 7 01:07:58.227823 kernel: pci 0003:03:00.1: VF BAR 3 [mem 0x240000460000-0x24000047ffff 64bit pref]: assigned Jul 7 01:07:58.227881 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 01:07:58.227941 kernel: pci 
0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 01:07:58.228000 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 01:07:58.228059 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 01:07:58.228120 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 01:07:58.228180 kernel: pci 0003:03:00.0: BAR 2 [io size 0x0020]: failed to assign Jul 7 01:07:58.228239 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: can't assign; no space Jul 7 01:07:58.228298 kernel: pci 0003:03:00.1: BAR 2 [io size 0x0020]: failed to assign Jul 7 01:07:58.228355 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] Jul 7 01:07:58.228412 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] Jul 7 01:07:58.228469 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 7 01:07:58.228528 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 01:07:58.228579 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] Jul 7 01:07:58.228632 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] Jul 7 01:07:58.228693 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] Jul 7 01:07:58.228747 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] Jul 7 01:07:58.228815 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] Jul 7 01:07:58.228869 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] Jul 7 01:07:58.228928 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] Jul 7 01:07:58.228983 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] Jul 7 01:07:58.228993 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) Jul 7 01:07:58.229055 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.229112 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.229167 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.229222 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.229277 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.229333 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] Jul 7 01:07:58.229343 kernel: PCI host bridge to bus 000c:00 Jul 7 01:07:58.229402 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] Jul 7 01:07:58.229453 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] Jul 7 01:07:58.229508 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] Jul 7 01:07:58.229574 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.229639 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.229700 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.229757 kernel: pci 000c:00:01.0: enabling Extended Tags Jul 7 01:07:58.229815 kernel: pci 000c:00:01.0: supports D1 D2 Jul 7 01:07:58.229872 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.229935 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.229994 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.230051 
kernel: pci 000c:00:02.0: supports D1 D2 Jul 7 01:07:58.230110 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.230175 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.230233 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.230291 kernel: pci 000c:00:03.0: supports D1 D2 Jul 7 01:07:58.230348 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.230412 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.230471 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.230535 kernel: pci 000c:00:04.0: supports D1 D2 Jul 7 01:07:58.230593 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.230602 kernel: acpiphp: Slot [1-4] registered Jul 7 01:07:58.230610 kernel: acpiphp: Slot [2-4] registered Jul 7 01:07:58.230617 kernel: acpiphp: Slot [3-2] registered Jul 7 01:07:58.230625 kernel: acpiphp: Slot [4-2] registered Jul 7 01:07:58.230675 kernel: pci_bus 000c:00: on NUMA node 0 Jul 7 01:07:58.230733 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.230792 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.230851 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.230909 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.230966 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.231024 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.231082 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.231139 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.231199 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.231256 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.231314 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.231373 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.231430 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff]: assigned Jul 7 01:07:58.231491 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref]: assigned Jul 7 01:07:58.231549 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff]: assigned Jul 7 01:07:58.231607 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref]: assigned Jul 7 01:07:58.231666 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff]: assigned Jul 7 01:07:58.231724 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref]: assigned Jul 7 01:07:58.231783 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff]: assigned Jul 7 01:07:58.231841 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref]: assigned Jul 7 01:07:58.231899 kernel: pci 000c:00:01.0: bridge window [io size 
0x1000]: can't assign; no space Jul 7 01:07:58.231956 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232014 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232072 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232129 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232187 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232244 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232301 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232359 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232416 kernel: pci 000c:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232473 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232533 kernel: pci 000c:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232593 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232651 kernel: pci 000c:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232708 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.232766 kernel: pci 000c:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.232823 kernel: pci 000c:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.232880 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] Jul 7 01:07:58.232939 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 7 01:07:58.232997 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.233056 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] Jul 7 01:07:58.233113 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 01:07:58.233170 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.233230 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] Jul 7 01:07:58.233288 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 01:07:58.233346 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.233404 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] Jul 7 01:07:58.233461 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] Jul 7 01:07:58.233516 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] Jul 7 01:07:58.233568 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] Jul 7 01:07:58.233631 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] Jul 7 01:07:58.233685 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] Jul 7 01:07:58.233745 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] Jul 7 01:07:58.233799 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] Jul 7 01:07:58.233866 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] Jul 7 01:07:58.233920 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] Jul 7 01:07:58.233982 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] Jul 7 01:07:58.234036 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 
64bit pref] Jul 7 01:07:58.234045 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) Jul 7 01:07:58.234109 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.234165 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.234220 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.234275 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.234331 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.234386 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] Jul 7 01:07:58.234395 kernel: PCI host bridge to bus 0002:00 Jul 7 01:07:58.234453 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] Jul 7 01:07:58.234509 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] Jul 7 01:07:58.234561 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] Jul 7 01:07:58.234624 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.234692 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.234750 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.234808 kernel: pci 0002:00:01.0: supports D1 D2 Jul 7 01:07:58.234865 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.234929 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.234988 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.235045 kernel: pci 0002:00:03.0: supports D1 D2 Jul 7 01:07:58.235104 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.235169 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.235229 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 01:07:58.235286 kernel: pci 0002:00:05.0: supports D1 D2 Jul 7 01:07:58.235344 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.235409 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.235467 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 01:07:58.235531 kernel: pci 0002:00:07.0: supports D1 D2 Jul 7 01:07:58.235589 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.235598 kernel: acpiphp: Slot [1-5] registered Jul 7 01:07:58.235606 kernel: acpiphp: Slot [2-5] registered Jul 7 01:07:58.235614 kernel: acpiphp: Slot [3-3] registered Jul 7 01:07:58.235621 kernel: acpiphp: Slot [4-3] registered Jul 7 01:07:58.235671 kernel: pci_bus 0002:00: on NUMA node 0 Jul 7 01:07:58.235729 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.235788 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.235847 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 Jul 7 01:07:58.235905 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.235962 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.236019 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 
Jul 7 01:07:58.236077 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.236134 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.236194 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.236251 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.236309 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.236366 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.236424 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff]: assigned Jul 7 01:07:58.236481 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref]: assigned Jul 7 01:07:58.236542 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff]: assigned Jul 7 01:07:58.236601 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref]: assigned Jul 7 01:07:58.236659 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff]: assigned Jul 7 01:07:58.236716 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref]: assigned Jul 7 01:07:58.236775 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff]: assigned Jul 7 01:07:58.236834 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref]: assigned Jul 7 01:07:58.236891 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.236948 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237006 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237065 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237123 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237181 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237239 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237297 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237354 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237413 kernel: pci 0002:00:07.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237471 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237533 kernel: pci 0002:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237591 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237648 kernel: pci 0002:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237705 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.237762 kernel: pci 0002:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.237821 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.237878 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] Jul 7 01:07:58.237936 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 01:07:58.237996 kernel: 
pci 0002:00:03.0: PCI bridge to [bus 02] Jul 7 01:07:58.238054 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] Jul 7 01:07:58.238111 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 01:07:58.238169 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] Jul 7 01:07:58.238226 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] Jul 7 01:07:58.238284 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 01:07:58.238341 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] Jul 7 01:07:58.238401 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] Jul 7 01:07:58.238459 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 01:07:58.238520 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] Jul 7 01:07:58.238574 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] Jul 7 01:07:58.238637 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] Jul 7 01:07:58.238691 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] Jul 7 01:07:58.238753 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] Jul 7 01:07:58.238807 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] Jul 7 01:07:58.238868 kernel: pci_bus 0002:03: resource 1 [mem 0x00c00000-0x00dfffff] Jul 7 01:07:58.238921 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] Jul 7 01:07:58.238990 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] Jul 7 01:07:58.239044 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] Jul 7 01:07:58.239055 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) Jul 7 01:07:58.239118 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.239174 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.239229 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.239284 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.239339 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.239396 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] Jul 7 01:07:58.239406 kernel: PCI host bridge to bus 0001:00 Jul 7 01:07:58.239462 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] Jul 7 01:07:58.239530 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] Jul 7 01:07:58.239599 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] Jul 7 01:07:58.239665 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.239730 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.239791 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.239849 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 01:07:58.239907 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380004ffffff 64bit pref] Jul 7 01:07:58.239964 kernel: pci 0001:00:01.0: enabling Extended Tags Jul 7 01:07:58.240022 kernel: pci 0001:00:01.0: supports D1 D2 Jul 7 01:07:58.240079 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.240145 
kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.240204 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.240263 kernel: pci 0001:00:02.0: supports D1 D2 Jul 7 01:07:58.240320 kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.240385 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.240443 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.240505 kernel: pci 0001:00:03.0: supports D1 D2 Jul 7 01:07:58.240563 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.240629 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.240687 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.240745 kernel: pci 0001:00:04.0: supports D1 D2 Jul 7 01:07:58.240803 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.240813 kernel: acpiphp: Slot [1-6] registered Jul 7 01:07:58.240877 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 01:07:58.240937 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref] Jul 7 01:07:58.240998 kernel: pci 0001:01:00.0: ROM [mem 0x60100000-0x601fffff pref] Jul 7 01:07:58.241057 kernel: pci 0001:01:00.0: PME# supported from D3cold Jul 7 01:07:58.241116 kernel: pci 0001:01:00.0: VF BAR 0 [mem 0x380004800000-0x3800048fffff 64bit pref] Jul 7 01:07:58.241175 kernel: pci 0001:01:00.0: VF BAR 0 [mem 0x380004800000-0x380004ffffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 01:07:58.241234 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 01:07:58.241302 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 01:07:58.241363 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref] Jul 7 01:07:58.241424 kernel: pci 0001:01:00.1: ROM [mem 0x60000000-0x600fffff pref] Jul 7 01:07:58.241484 kernel: pci 0001:01:00.1: PME# supported from D3cold Jul 7 01:07:58.241547 kernel: pci 0001:01:00.1: VF BAR 0 [mem 0x380004000000-0x3800040fffff 64bit pref] Jul 7 01:07:58.241606 kernel: pci 0001:01:00.1: VF BAR 0 [mem 0x380004000000-0x3800047fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 01:07:58.241616 kernel: acpiphp: Slot [2-6] registered Jul 7 01:07:58.241624 kernel: acpiphp: Slot [3-4] registered Jul 7 01:07:58.241632 kernel: acpiphp: Slot [4-4] registered Jul 7 01:07:58.241681 kernel: pci_bus 0001:00: on NUMA node 0 Jul 7 01:07:58.241742 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jul 7 01:07:58.241800 kernel: pci 0001:00:01.0: bridge window [mem 0x02000000-0x05ffffff 64bit pref] to [bus 01] add_size 2000000 add_align 2000000 Jul 7 01:07:58.241858 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jul 7 01:07:58.241915 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.241974 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 Jul 7 01:07:58.242031 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.242089 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.242148 kernel: pci 0001:00:03.0: bridge window [mem 
0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.242207 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.242264 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.242322 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.242379 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380005ffffff 64bit pref]: assigned Jul 7 01:07:58.242437 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff]: assigned Jul 7 01:07:58.242497 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff]: assigned Jul 7 01:07:58.242558 kernel: pci 0001:00:02.0: bridge window [mem 0x380006000000-0x3800061fffff 64bit pref]: assigned Jul 7 01:07:58.242616 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff]: assigned Jul 7 01:07:58.242674 kernel: pci 0001:00:03.0: bridge window [mem 0x380006200000-0x3800063fffff 64bit pref]: assigned Jul 7 01:07:58.242731 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff]: assigned Jul 7 01:07:58.242788 kernel: pci 0001:00:04.0: bridge window [mem 0x380006400000-0x3800065fffff 64bit pref]: assigned Jul 7 01:07:58.242846 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.242904 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.242963 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243022 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243080 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243138 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243195 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243253 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243311 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243368 kernel: pci 0001:00:04.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243425 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243484 kernel: pci 0001:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243545 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243602 kernel: pci 0001:00:02.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243660 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.243718 kernel: pci 0001:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.243778 kernel: pci 0001:01:00.0: BAR 0 [mem 0x380000000000-0x380001ffffff 64bit pref]: assigned Jul 7 01:07:58.243838 kernel: pci 0001:01:00.1: BAR 0 [mem 0x380002000000-0x380003ffffff 64bit pref]: assigned Jul 7 01:07:58.243897 kernel: pci 0001:01:00.0: ROM [mem 0x60000000-0x600fffff pref]: assigned Jul 7 01:07:58.243960 kernel: pci 0001:01:00.0: VF BAR 0 [mem 0x380004000000-0x3800047fffff 64bit pref]: assigned Jul 7 01:07:58.244019 kernel: pci 0001:01:00.1: ROM [mem 0x60100000-0x601fffff pref]: assigned Jul 7 01:07:58.244079 kernel: pci 0001:01:00.1: VF BAR 0 [mem 0x380004800000-0x380004ffffff 
64bit pref]: assigned Jul 7 01:07:58.244137 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] Jul 7 01:07:58.244194 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] Jul 7 01:07:58.244252 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380005ffffff 64bit pref] Jul 7 01:07:58.244310 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] Jul 7 01:07:58.244368 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] Jul 7 01:07:58.244427 kernel: pci 0001:00:02.0: bridge window [mem 0x380006000000-0x3800061fffff 64bit pref] Jul 7 01:07:58.244485 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.244546 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] Jul 7 01:07:58.244604 kernel: pci 0001:00:03.0: bridge window [mem 0x380006200000-0x3800063fffff 64bit pref] Jul 7 01:07:58.244661 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] Jul 7 01:07:58.244719 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] Jul 7 01:07:58.244778 kernel: pci 0001:00:04.0: bridge window [mem 0x380006400000-0x3800065fffff 64bit pref] Jul 7 01:07:58.244830 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] Jul 7 01:07:58.244881 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] Jul 7 01:07:58.244945 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] Jul 7 01:07:58.244998 kernel: pci_bus 0001:01: resource 2 [mem 0x380000000000-0x380005ffffff 64bit pref] Jul 7 01:07:58.245066 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] Jul 7 01:07:58.245121 kernel: pci_bus 0001:02: resource 2 [mem 0x380006000000-0x3800061fffff 64bit pref] Jul 7 01:07:58.245183 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] Jul 7 01:07:58.245236 kernel: pci_bus 0001:03: resource 2 [mem 0x380006200000-0x3800063fffff 64bit pref] Jul 7 01:07:58.245297 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] Jul 7 01:07:58.245351 kernel: pci_bus 0001:04: resource 2 [mem 0x380006400000-0x3800065fffff 64bit pref] Jul 7 01:07:58.245361 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) Jul 7 01:07:58.245423 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 01:07:58.245481 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] Jul 7 01:07:58.245541 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] Jul 7 01:07:58.245596 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops Jul 7 01:07:58.245651 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 Jul 7 01:07:58.245706 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] Jul 7 01:07:58.245716 kernel: PCI host bridge to bus 0004:00 Jul 7 01:07:58.245773 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] Jul 7 01:07:58.245827 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] Jul 7 01:07:58.245878 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] Jul 7 01:07:58.245942 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 conventional PCI endpoint Jul 7 01:07:58.246007 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.246066 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 7 01:07:58.246123 kernel: pci 0004:00:01.0: bridge window [io 0x0000-0x0fff] Jul 7 01:07:58.246181 
kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 01:07:58.246240 kernel: pci 0004:00:01.0: supports D1 D2 Jul 7 01:07:58.246297 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.246361 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.246421 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.246479 kernel: pci 0004:00:03.0: bridge window [mem 0x22200000-0x222fffff] Jul 7 01:07:58.246540 kernel: pci 0004:00:03.0: supports D1 D2 Jul 7 01:07:58.246598 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.246664 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 PCIe Root Port Jul 7 01:07:58.246722 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 01:07:58.246782 kernel: pci 0004:00:05.0: supports D1 D2 Jul 7 01:07:58.246838 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot Jul 7 01:07:58.246904 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 01:07:58.246964 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 01:07:58.247023 kernel: pci 0004:01:00.0: bridge window [io 0x0000-0x0fff] Jul 7 01:07:58.247085 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x220fffff] Jul 7 01:07:58.247144 kernel: pci 0004:01:00.0: enabling Extended Tags Jul 7 01:07:58.247203 kernel: pci 0004:01:00.0: supports D1 D2 Jul 7 01:07:58.247262 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 01:07:58.247325 kernel: pci_bus 0004:02: extended config space not accessible Jul 7 01:07:58.247395 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 7 01:07:58.247457 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff] Jul 7 01:07:58.247524 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff] Jul 7 01:07:58.247586 kernel: pci 0004:02:00.0: BAR 2 [io 0x0000-0x007f] Jul 7 01:07:58.247647 kernel: pci 0004:02:00.0: supports D1 D2 Jul 7 01:07:58.247708 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 01:07:58.247782 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 PCIe Endpoint Jul 7 01:07:58.247843 kernel: pci 0004:03:00.0: BAR 0 [mem 0x22200000-0x22201fff 64bit] Jul 7 01:07:58.247903 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 01:07:58.247958 kernel: pci_bus 0004:00: on NUMA node 0 Jul 7 01:07:58.248016 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 Jul 7 01:07:58.248075 kernel: pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 01:07:58.248133 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 01:07:58.248191 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jul 7 01:07:58.248249 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 01:07:58.248307 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.248367 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jul 7 01:07:58.248424 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 7 01:07:58.248482 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]: 
assigned Jul 7 01:07:58.248544 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]: assigned Jul 7 01:07:58.248601 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]: assigned Jul 7 01:07:58.248658 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]: assigned Jul 7 01:07:58.248716 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]: assigned Jul 7 01:07:58.248773 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.248833 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.248890 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.248948 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249005 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.249062 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249120 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.249177 kernel: pci 0004:00:01.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249234 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.249293 kernel: pci 0004:00:05.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249350 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.249408 kernel: pci 0004:00:03.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249468 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]: assigned Jul 7 01:07:58.249530 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: can't assign; no space Jul 7 01:07:58.249590 kernel: pci 0004:01:00.0: bridge window [io size 0x1000]: failed to assign Jul 7 01:07:58.249651 kernel: pci 0004:02:00.0: BAR 0 [mem 0x20000000-0x21ffffff]: assigned Jul 7 01:07:58.249715 kernel: pci 0004:02:00.0: BAR 1 [mem 0x22000000-0x2201ffff]: assigned Jul 7 01:07:58.249777 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: can't assign; no space Jul 7 01:07:58.249839 kernel: pci 0004:02:00.0: BAR 2 [io size 0x0080]: failed to assign Jul 7 01:07:58.249898 kernel: pci 0004:01:00.0: PCI bridge to [bus 02] Jul 7 01:07:58.249957 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 01:07:58.250014 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02] Jul 7 01:07:58.250072 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff] Jul 7 01:07:58.250130 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 01:07:58.250192 kernel: pci 0004:03:00.0: BAR 0 [mem 0x23000000-0x23001fff 64bit]: assigned Jul 7 01:07:58.250250 kernel: pci 0004:00:03.0: PCI bridge to [bus 03] Jul 7 01:07:58.250308 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff] Jul 7 01:07:58.250366 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 01:07:58.250426 kernel: pci 0004:00:05.0: PCI bridge to [bus 04] Jul 7 01:07:58.250484 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff] Jul 7 01:07:58.250545 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 01:07:58.250599 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 01:07:58.250651 kernel: pci_bus 0004:00: 
resource 4 [mem 0x20000000-0x2fffffff window] Jul 7 01:07:58.250702 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window] Jul 7 01:07:58.250764 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 01:07:58.250817 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref] Jul 7 01:07:58.250879 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff] Jul 7 01:07:58.250943 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff] Jul 7 01:07:58.250999 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref] Jul 7 01:07:58.251060 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff] Jul 7 01:07:58.251115 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref] Jul 7 01:07:58.251124 kernel: ACPI: CPU18 has been hot-added Jul 7 01:07:58.251132 kernel: ACPI: CPU58 has been hot-added Jul 7 01:07:58.251140 kernel: ACPI: CPU38 has been hot-added Jul 7 01:07:58.251147 kernel: ACPI: CPU78 has been hot-added Jul 7 01:07:58.251156 kernel: ACPI: CPU16 has been hot-added Jul 7 01:07:58.251164 kernel: ACPI: CPU56 has been hot-added Jul 7 01:07:58.251171 kernel: ACPI: CPU36 has been hot-added Jul 7 01:07:58.251179 kernel: ACPI: CPU76 has been hot-added Jul 7 01:07:58.251186 kernel: ACPI: CPU17 has been hot-added Jul 7 01:07:58.251194 kernel: ACPI: CPU57 has been hot-added Jul 7 01:07:58.251201 kernel: ACPI: CPU37 has been hot-added Jul 7 01:07:58.251209 kernel: ACPI: CPU77 has been hot-added Jul 7 01:07:58.251216 kernel: ACPI: CPU19 has been hot-added Jul 7 01:07:58.251224 kernel: ACPI: CPU59 has been hot-added Jul 7 01:07:58.251233 kernel: ACPI: CPU39 has been hot-added Jul 7 01:07:58.251240 kernel: ACPI: CPU79 has been hot-added Jul 7 01:07:58.251248 kernel: ACPI: CPU12 has been hot-added Jul 7 01:07:58.251255 kernel: ACPI: CPU52 has been hot-added Jul 7 01:07:58.251263 kernel: ACPI: CPU32 has been hot-added Jul 7 01:07:58.251270 kernel: ACPI: CPU72 has been hot-added Jul 7 01:07:58.251278 kernel: ACPI: CPU8 has been hot-added Jul 7 01:07:58.251285 kernel: ACPI: CPU48 has been hot-added Jul 7 01:07:58.251293 kernel: ACPI: CPU28 has been hot-added Jul 7 01:07:58.251302 kernel: ACPI: CPU68 has been hot-added Jul 7 01:07:58.251309 kernel: ACPI: CPU10 has been hot-added Jul 7 01:07:58.251317 kernel: ACPI: CPU50 has been hot-added Jul 7 01:07:58.251324 kernel: ACPI: CPU30 has been hot-added Jul 7 01:07:58.251332 kernel: ACPI: CPU70 has been hot-added Jul 7 01:07:58.251339 kernel: ACPI: CPU14 has been hot-added Jul 7 01:07:58.251347 kernel: ACPI: CPU54 has been hot-added Jul 7 01:07:58.251355 kernel: ACPI: CPU34 has been hot-added Jul 7 01:07:58.251362 kernel: ACPI: CPU74 has been hot-added Jul 7 01:07:58.251370 kernel: ACPI: CPU4 has been hot-added Jul 7 01:07:58.251379 kernel: ACPI: CPU44 has been hot-added Jul 7 01:07:58.251386 kernel: ACPI: CPU24 has been hot-added Jul 7 01:07:58.251394 kernel: ACPI: CPU64 has been hot-added Jul 7 01:07:58.251401 kernel: ACPI: CPU0 has been hot-added Jul 7 01:07:58.251409 kernel: ACPI: CPU40 has been hot-added Jul 7 01:07:58.251416 kernel: ACPI: CPU20 has been hot-added Jul 7 01:07:58.251423 kernel: ACPI: CPU60 has been hot-added Jul 7 01:07:58.251431 kernel: ACPI: CPU2 has been hot-added Jul 7 01:07:58.251438 kernel: ACPI: CPU42 has been hot-added Jul 7 01:07:58.251447 kernel: ACPI: CPU22 has been hot-added Jul 7 01:07:58.251455 kernel: ACPI: CPU62 has been hot-added Jul 7 01:07:58.251462 kernel: ACPI: CPU6 has been hot-added Jul 7 
01:07:58.251469 kernel: ACPI: CPU46 has been hot-added Jul 7 01:07:58.251477 kernel: ACPI: CPU26 has been hot-added Jul 7 01:07:58.251484 kernel: ACPI: CPU66 has been hot-added Jul 7 01:07:58.251497 kernel: ACPI: CPU5 has been hot-added Jul 7 01:07:58.251504 kernel: ACPI: CPU45 has been hot-added Jul 7 01:07:58.251512 kernel: ACPI: CPU25 has been hot-added Jul 7 01:07:58.251519 kernel: ACPI: CPU65 has been hot-added Jul 7 01:07:58.251528 kernel: ACPI: CPU1 has been hot-added Jul 7 01:07:58.251535 kernel: ACPI: CPU41 has been hot-added Jul 7 01:07:58.251543 kernel: ACPI: CPU21 has been hot-added Jul 7 01:07:58.251550 kernel: ACPI: CPU61 has been hot-added Jul 7 01:07:58.251558 kernel: ACPI: CPU3 has been hot-added Jul 7 01:07:58.251565 kernel: ACPI: CPU43 has been hot-added Jul 7 01:07:58.251573 kernel: ACPI: CPU23 has been hot-added Jul 7 01:07:58.251580 kernel: ACPI: CPU63 has been hot-added Jul 7 01:07:58.251588 kernel: ACPI: CPU7 has been hot-added Jul 7 01:07:58.251596 kernel: ACPI: CPU47 has been hot-added Jul 7 01:07:58.251604 kernel: ACPI: CPU27 has been hot-added Jul 7 01:07:58.251611 kernel: ACPI: CPU67 has been hot-added Jul 7 01:07:58.251619 kernel: ACPI: CPU13 has been hot-added Jul 7 01:07:58.251626 kernel: ACPI: CPU53 has been hot-added Jul 7 01:07:58.251634 kernel: ACPI: CPU33 has been hot-added Jul 7 01:07:58.251641 kernel: ACPI: CPU73 has been hot-added Jul 7 01:07:58.251649 kernel: ACPI: CPU9 has been hot-added Jul 7 01:07:58.251656 kernel: ACPI: CPU49 has been hot-added Jul 7 01:07:58.251664 kernel: ACPI: CPU29 has been hot-added Jul 7 01:07:58.251673 kernel: ACPI: CPU69 has been hot-added Jul 7 01:07:58.251680 kernel: ACPI: CPU11 has been hot-added Jul 7 01:07:58.251688 kernel: ACPI: CPU51 has been hot-added Jul 7 01:07:58.251695 kernel: ACPI: CPU31 has been hot-added Jul 7 01:07:58.251703 kernel: ACPI: CPU71 has been hot-added Jul 7 01:07:58.251710 kernel: ACPI: CPU15 has been hot-added Jul 7 01:07:58.251717 kernel: ACPI: CPU55 has been hot-added Jul 7 01:07:58.251725 kernel: ACPI: CPU35 has been hot-added Jul 7 01:07:58.251732 kernel: ACPI: CPU75 has been hot-added Jul 7 01:07:58.251741 kernel: iommu: Default domain type: Translated Jul 7 01:07:58.251749 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 01:07:58.251757 kernel: efivars: Registered efivars operations Jul 7 01:07:58.251820 kernel: pci 0004:02:00.0: vgaarb: setting as boot VGA device Jul 7 01:07:58.251882 kernel: pci 0004:02:00.0: vgaarb: bridge control possible Jul 7 01:07:58.251943 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none Jul 7 01:07:58.251953 kernel: vgaarb: loaded Jul 7 01:07:58.251961 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 01:07:58.251968 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 01:07:58.251977 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 01:07:58.251985 kernel: pnp: PnP ACPI init Jul 7 01:07:58.252049 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved Jul 7 01:07:58.252104 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved Jul 7 01:07:58.252156 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved Jul 7 01:07:58.252208 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved Jul 7 01:07:58.252259 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved Jul 7 01:07:58.252313 kernel: system 00:00: [mem 
0x2ffff0000000-0x2fffffffffff window] could not be reserved Jul 7 01:07:58.252366 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved Jul 7 01:07:58.252419 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved Jul 7 01:07:58.252429 kernel: pnp: PnP ACPI: found 1 devices Jul 7 01:07:58.252436 kernel: NET: Registered PF_INET protocol family Jul 7 01:07:58.252444 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.252452 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear) Jul 7 01:07:58.252459 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 01:07:58.252468 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 01:07:58.252476 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.252484 kernel: TCP: Hash tables configured (established 524288 bind 65536) Jul 7 01:07:58.252495 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.252503 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 01:07:58.252513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 01:07:58.252574 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes Jul 7 01:07:58.252585 kernel: kvm [1]: nv: 554 coarse grained trap handlers Jul 7 01:07:58.252592 kernel: kvm [1]: IPA Size Limit: 48 bits Jul 7 01:07:58.252602 kernel: kvm [1]: GICv3: no GICV resource entry Jul 7 01:07:58.252609 kernel: kvm [1]: disabling GICv2 emulation Jul 7 01:07:58.252617 kernel: kvm [1]: GIC system register CPU interface enabled Jul 7 01:07:58.252624 kernel: kvm [1]: vgic interrupt IRQ9 Jul 7 01:07:58.252632 kernel: kvm [1]: VHE mode initialized successfully Jul 7 01:07:58.252639 kernel: Initialise system trusted keyrings Jul 7 01:07:58.252647 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0 Jul 7 01:07:58.252654 kernel: Key type asymmetric registered Jul 7 01:07:58.252662 kernel: Asymmetric key parser 'x509' registered Jul 7 01:07:58.252670 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 7 01:07:58.252678 kernel: io scheduler mq-deadline registered Jul 7 01:07:58.252686 kernel: io scheduler kyber registered Jul 7 01:07:58.252694 kernel: io scheduler bfq registered Jul 7 01:07:58.252702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 01:07:58.252709 kernel: ACPI: button: Power Button [PWRB] Jul 7 01:07:58.252717 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s). 
Jul 7 01:07:58.252725 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 01:07:58.252791 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0 Jul 7 01:07:58.252848 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.252903 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.252956 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.253010 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 131072 entries for evtq Jul 7 01:07:58.253064 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for priq Jul 7 01:07:58.253125 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0 Jul 7 01:07:58.253180 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.253233 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.253286 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.253340 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 131072 entries for evtq Jul 7 01:07:58.253392 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for priq Jul 7 01:07:58.253452 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0 Jul 7 01:07:58.253516 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.253584 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.253640 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.253694 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 131072 entries for evtq Jul 7 01:07:58.253747 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for priq Jul 7 01:07:58.253807 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0 Jul 7 01:07:58.253861 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.253916 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.253969 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.254022 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 131072 entries for evtq Jul 7 01:07:58.254075 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for priq Jul 7 01:07:58.254135 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0 Jul 7 01:07:58.254189 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.254242 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.254296 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.254349 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 131072 entries for evtq Jul 7 01:07:58.254402 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for priq Jul 7 01:07:58.254464 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0 Jul 7 01:07:58.254524 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.254577 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.254633 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.254686 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 131072 entries for evtq Jul 7 01:07:58.254738 kernel: arm-smmu-v3 
arm-smmu-v3.5.auto: allocated 262144 entries for priq Jul 7 01:07:58.254807 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0 Jul 7 01:07:58.254861 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.254914 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.254967 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.255023 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 131072 entries for evtq Jul 7 01:07:58.255082 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for priq Jul 7 01:07:58.255142 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0 Jul 7 01:07:58.255196 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false) Jul 7 01:07:58.255249 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x001c1eff) Jul 7 01:07:58.255302 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq Jul 7 01:07:58.255356 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 131072 entries for evtq Jul 7 01:07:58.255409 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for priq Jul 7 01:07:58.255419 kernel: thunder_xcv, ver 1.0 Jul 7 01:07:58.255427 kernel: thunder_bgx, ver 1.0 Jul 7 01:07:58.255434 kernel: nicpf, ver 1.0 Jul 7 01:07:58.255442 kernel: nicvf, ver 1.0 Jul 7 01:07:58.255504 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 01:07:58.255560 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T01:07:56 UTC (1751850476) Jul 7 01:07:58.255572 kernel: efifb: probing for efifb Jul 7 01:07:58.255579 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k Jul 7 01:07:58.255587 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Jul 7 01:07:58.255595 kernel: efifb: scrolling: redraw Jul 7 01:07:58.255603 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 7 01:07:58.255610 kernel: Console: switching to colour frame buffer device 100x37 Jul 7 01:07:58.255618 kernel: fb0: EFI VGA frame buffer device Jul 7 01:07:58.255625 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1 Jul 7 01:07:58.255633 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 01:07:58.255642 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 7 01:07:58.255649 kernel: watchdog: NMI not fully supported Jul 7 01:07:58.255657 kernel: NET: Registered PF_INET6 protocol family Jul 7 01:07:58.255665 kernel: watchdog: Hard watchdog permanently disabled Jul 7 01:07:58.255672 kernel: Segment Routing with IPv6 Jul 7 01:07:58.255680 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 01:07:58.255687 kernel: NET: Registered PF_PACKET protocol family Jul 7 01:07:58.255695 kernel: Key type dns_resolver registered Jul 7 01:07:58.255702 kernel: registered taskstats version 1 Jul 7 01:07:58.255711 kernel: Loading compiled-in X.509 certificates Jul 7 01:07:58.255719 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 90fb300ebe1fa0773739bb35dad461c5679d8dfb' Jul 7 01:07:58.255726 kernel: Demotion targets for Node 0: null Jul 7 01:07:58.255734 kernel: Key type .fscrypt registered Jul 7 01:07:58.255741 kernel: Key type fscrypt-provisioning registered Jul 7 01:07:58.255749 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 01:07:58.255756 kernel: ima: Allocated hash algorithm: sha1 Jul 7 01:07:58.255764 kernel: ima: No architecture policies found Jul 7 01:07:58.255772 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 01:07:58.255835 kernel: pcieport 000d:00:01.0: Adding to iommu group 0 Jul 7 01:07:58.255893 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91 Jul 7 01:07:58.255953 kernel: pcieport 000d:00:02.0: Adding to iommu group 1 Jul 7 01:07:58.256011 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91 Jul 7 01:07:58.256070 kernel: pcieport 000d:00:03.0: Adding to iommu group 2 Jul 7 01:07:58.256127 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91 Jul 7 01:07:58.256186 kernel: pcieport 000d:00:04.0: Adding to iommu group 3 Jul 7 01:07:58.256245 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91 Jul 7 01:07:58.256309 kernel: pcieport 0000:00:01.0: Adding to iommu group 4 Jul 7 01:07:58.256367 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92 Jul 7 01:07:58.256426 kernel: pcieport 0000:00:02.0: Adding to iommu group 5 Jul 7 01:07:58.256484 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92 Jul 7 01:07:58.256547 kernel: pcieport 0000:00:03.0: Adding to iommu group 6 Jul 7 01:07:58.256605 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92 Jul 7 01:07:58.256664 kernel: pcieport 0000:00:04.0: Adding to iommu group 7 Jul 7 01:07:58.256722 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92 Jul 7 01:07:58.256781 kernel: pcieport 0005:00:01.0: Adding to iommu group 8 Jul 7 01:07:58.256841 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93 Jul 7 01:07:58.256900 kernel: pcieport 0005:00:03.0: Adding to iommu group 9 Jul 7 01:07:58.256958 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93 Jul 7 01:07:58.257019 kernel: pcieport 0005:00:05.0: Adding to iommu group 10 Jul 7 01:07:58.257076 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93 Jul 7 01:07:58.257135 kernel: pcieport 0005:00:07.0: Adding to iommu group 11 Jul 7 01:07:58.257192 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93 Jul 7 01:07:58.257253 kernel: pcieport 0003:00:01.0: Adding to iommu group 12 Jul 7 01:07:58.257315 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94 Jul 7 01:07:58.257375 kernel: pcieport 0003:00:03.0: Adding to iommu group 13 Jul 7 01:07:58.257432 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94 Jul 7 01:07:58.257512 kernel: pcieport 0003:00:05.0: Adding to iommu group 14 Jul 7 01:07:58.257573 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94 Jul 7 01:07:58.257633 kernel: pcieport 000c:00:01.0: Adding to iommu group 15 Jul 7 01:07:58.257692 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95 Jul 7 01:07:58.257751 kernel: pcieport 000c:00:02.0: Adding to iommu group 16 Jul 7 01:07:58.257811 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95 Jul 7 01:07:58.257870 kernel: pcieport 000c:00:03.0: Adding to iommu group 17 Jul 7 01:07:58.257928 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95 Jul 7 01:07:58.257987 kernel: pcieport 000c:00:04.0: Adding to iommu group 18 Jul 7 01:07:58.258044 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95 Jul 7 01:07:58.258104 kernel: pcieport 0002:00:01.0: Adding to iommu group 19 Jul 7 01:07:58.258162 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96 Jul 7 01:07:58.258221 kernel: pcieport 0002:00:03.0: Adding to iommu group 20 Jul 7 01:07:58.258280 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96 Jul 7 01:07:58.258340 kernel: pcieport 0002:00:05.0: Adding to iommu group 
21 Jul 7 01:07:58.258398 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96 Jul 7 01:07:58.258457 kernel: pcieport 0002:00:07.0: Adding to iommu group 22 Jul 7 01:07:58.258518 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96 Jul 7 01:07:58.258578 kernel: pcieport 0001:00:01.0: Adding to iommu group 23 Jul 7 01:07:58.258636 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97 Jul 7 01:07:58.258695 kernel: pcieport 0001:00:02.0: Adding to iommu group 24 Jul 7 01:07:58.258753 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97 Jul 7 01:07:58.258815 kernel: pcieport 0001:00:03.0: Adding to iommu group 25 Jul 7 01:07:58.258873 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97 Jul 7 01:07:58.258933 kernel: pcieport 0001:00:04.0: Adding to iommu group 26 Jul 7 01:07:58.258990 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97 Jul 7 01:07:58.259049 kernel: pcieport 0004:00:01.0: Adding to iommu group 27 Jul 7 01:07:58.259107 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98 Jul 7 01:07:58.259166 kernel: pcieport 0004:00:03.0: Adding to iommu group 28 Jul 7 01:07:58.259223 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98 Jul 7 01:07:58.259284 kernel: pcieport 0004:00:05.0: Adding to iommu group 29 Jul 7 01:07:58.259342 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98 Jul 7 01:07:58.259402 kernel: pcieport 0004:01:00.0: Adding to iommu group 30 Jul 7 01:07:58.259412 kernel: clk: Disabling unused clocks Jul 7 01:07:58.259420 kernel: PM: genpd: Disabling unused power domains Jul 7 01:07:58.259428 kernel: Warning: unable to open an initial console. Jul 7 01:07:58.259436 kernel: Freeing unused kernel memory: 39424K Jul 7 01:07:58.259443 kernel: Run /init as init process Jul 7 01:07:58.259452 kernel: with arguments: Jul 7 01:07:58.259460 kernel: /init Jul 7 01:07:58.259467 kernel: with environment: Jul 7 01:07:58.259475 kernel: HOME=/ Jul 7 01:07:58.259482 kernel: TERM=linux Jul 7 01:07:58.259492 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 01:07:58.259501 systemd[1]: Successfully made /usr/ read-only. Jul 7 01:07:58.259511 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 01:07:58.259521 systemd[1]: Detected architecture arm64. Jul 7 01:07:58.259529 systemd[1]: Running in initrd. Jul 7 01:07:58.259537 systemd[1]: No hostname configured, using default hostname. Jul 7 01:07:58.259545 systemd[1]: Hostname set to <localhost>. Jul 7 01:07:58.259553 systemd[1]: Initializing machine ID from random generator. Jul 7 01:07:58.259561 systemd[1]: Queued start job for default target initrd.target. Jul 7 01:07:58.259569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:07:58.259578 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:07:58.259588 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 01:07:58.259596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:07:58.259605 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jul 7 01:07:58.259613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 01:07:58.259622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 01:07:58.259630 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 01:07:58.259638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:07:58.259647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:07:58.259655 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:07:58.259663 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:07:58.259671 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:07:58.259679 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:07:58.259687 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:07:58.259695 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:07:58.259703 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 01:07:58.259711 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 01:07:58.259720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:07:58.259728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:07:58.259737 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:07:58.259744 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 01:07:58.259752 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 01:07:58.259760 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:07:58.259768 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 01:07:58.259776 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 01:07:58.259786 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 01:07:58.259794 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:07:58.259802 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:07:58.259809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:07:58.259838 systemd-journald[909]: Collecting audit messages is disabled. Jul 7 01:07:58.259859 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 01:07:58.259867 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 01:07:58.259875 kernel: Bridge firewalling registered Jul 7 01:07:58.259884 systemd-journald[909]: Journal started Jul 7 01:07:58.259904 systemd-journald[909]: Runtime Journal (/run/log/journal/05a8b396bf8b4815a7b3806da94c7686) is 8M, max 4G, 3.9G free. Jul 7 01:07:58.196066 systemd-modules-load[911]: Inserted module 'overlay' Jul 7 01:07:58.284170 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:07:58.253744 systemd-modules-load[911]: Inserted module 'br_netfilter' Jul 7 01:07:58.289901 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:07:58.300887 systemd[1]: Finished systemd-fsck-usr.service. 
Jul 7 01:07:58.313508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:07:58.323339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:07:58.338054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 01:07:58.346277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:07:58.380032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:07:58.386782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:07:58.399065 systemd-tmpfiles[939]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 01:07:58.406075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:07:58.421268 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:07:58.437926 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:07:58.449219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:07:58.468978 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 01:07:58.506741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:07:58.520298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:07:58.533432 dracut-cmdline[957]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=dd2d39de40482a23e9bb75390ff5ca85cd9bd34d902b8049121a8373f8cb2ef2 Jul 7 01:07:58.541760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:07:58.545110 systemd-resolved[959]: Positive Trust Anchors: Jul 7 01:07:58.545119 systemd-resolved[959]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:07:58.545155 systemd-resolved[959]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:07:58.560513 systemd-resolved[959]: Defaulting to hostname 'linux'. Jul 7 01:07:58.579466 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:07:58.599940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:07:58.705498 kernel: SCSI subsystem initialized Jul 7 01:07:58.721497 kernel: Loading iSCSI transport class v2.0-870. 
Jul 7 01:07:58.740496 kernel: iscsi: registered transport (tcp) Jul 7 01:07:58.768332 kernel: iscsi: registered transport (qla4xxx) Jul 7 01:07:58.768353 kernel: QLogic iSCSI HBA Driver Jul 7 01:07:58.787467 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 01:07:58.820556 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:07:58.837215 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 01:07:58.888458 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 01:07:58.900222 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 01:07:58.976501 kernel: raid6: neonx8 gen() 15868 MB/s Jul 7 01:07:59.002493 kernel: raid6: neonx4 gen() 15883 MB/s Jul 7 01:07:59.028497 kernel: raid6: neonx2 gen() 13262 MB/s Jul 7 01:07:59.053496 kernel: raid6: neonx1 gen() 10475 MB/s Jul 7 01:07:59.078498 kernel: raid6: int64x8 gen() 6928 MB/s Jul 7 01:07:59.103496 kernel: raid6: int64x4 gen() 7390 MB/s Jul 7 01:07:59.128496 kernel: raid6: int64x2 gen() 6131 MB/s Jul 7 01:07:59.156986 kernel: raid6: int64x1 gen() 5075 MB/s Jul 7 01:07:59.157011 kernel: raid6: using algorithm neonx4 gen() 15883 MB/s Jul 7 01:07:59.191372 kernel: raid6: .... xor() 12420 MB/s, rmw enabled Jul 7 01:07:59.191393 kernel: raid6: using neon recovery algorithm Jul 7 01:07:59.216365 kernel: xor: measuring software checksum speed Jul 7 01:07:59.216386 kernel: 8regs : 21636 MB/sec Jul 7 01:07:59.224760 kernel: 32regs : 21699 MB/sec Jul 7 01:07:59.233020 kernel: arm64_neon : 28244 MB/sec Jul 7 01:07:59.241104 kernel: xor: using function: arm64_neon (28244 MB/sec) Jul 7 01:07:59.306499 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 01:07:59.312435 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:07:59.319244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:07:59.362006 systemd-udevd[1186]: Using default interface naming scheme 'v255'. Jul 7 01:07:59.365928 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:07:59.372293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 01:07:59.410342 dracut-pre-trigger[1198]: rd.md=0: removing MD RAID activation Jul 7 01:07:59.432589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:07:59.442210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:07:59.742518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:07:59.877865 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 01:07:59.877884 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 01:07:59.877894 kernel: nvme 0005:03:00.0: Adding to iommu group 31 Jul 7 01:07:59.878044 kernel: ACPI: bus type USB registered Jul 7 01:07:59.878054 kernel: nvme 0005:04:00.0: Adding to iommu group 32 Jul 7 01:07:59.878139 kernel: usbcore: registered new interface driver usbfs Jul 7 01:07:59.878149 kernel: usbcore: registered new interface driver hub Jul 7 01:07:59.878158 kernel: usbcore: registered new device driver usb Jul 7 01:07:59.878167 kernel: nvme nvme0: pci function 0005:03:00.0 Jul 7 01:07:59.878251 kernel: nvme nvme1: pci function 0005:04:00.0 Jul 7 01:07:59.878328 kernel: PTP clock support registered Jul 7 01:07:59.878338 kernel: nvme nvme0: D3 entry latency set to 8 seconds Jul 7 01:07:59.878399 kernel: nvme nvme1: D3 entry latency set to 8 seconds Jul 7 01:07:59.868114 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 01:07:59.907979 kernel: nvme nvme1: 32/0/0 default/read/poll queues Jul 7 01:07:59.908096 kernel: nvme nvme0: 32/0/0 default/read/poll queues Jul 7 01:07:59.883284 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:07:59.976722 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 01:07:59.976739 kernel: GPT:9289727 != 1875385007 Jul 7 01:07:59.976749 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 01:07:59.976758 kernel: GPT:9289727 != 1875385007 Jul 7 01:07:59.976766 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 01:07:59.976775 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 01:07:59.883439 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:08:00.065898 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 33 Jul 7 01:08:00.066039 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 7 01:08:00.066121 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 01:08:00.066195 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault Jul 7 01:08:00.066267 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 7 01:08:00.066277 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 7 01:08:00.066286 kernel: igb 0003:03:00.0: Adding to iommu group 34 Jul 7 01:07:59.913467 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:08:00.086464 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 35 Jul 7 01:08:00.061599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:08:00.082046 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 01:08:00.113062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - SAMSUNG MZ1LB960HAJQ-00007 ROOT. Jul 7 01:08:00.119525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:08:00.144952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - SAMSUNG MZ1LB960HAJQ-00007 EFI-SYSTEM. Jul 7 01:08:00.176858 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 7 01:08:00.201073 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. Jul 7 01:08:00.206857 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - SAMSUNG MZ1LB960HAJQ-00007 USR-A. 
Jul 7 01:08:00.358540 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000000100000010 Jul 7 01:08:00.358675 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller Jul 7 01:08:00.358751 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 01:08:00.358821 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 01:08:00.358890 kernel: hub 1-0:1.0: USB hub found Jul 7 01:08:00.358985 kernel: hub 1-0:1.0: 4 ports detected Jul 7 01:08:00.359062 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 01:08:00.359184 kernel: hub 2-0:1.0: USB hub found Jul 7 01:08:00.359276 kernel: hub 2-0:1.0: 4 ports detected Jul 7 01:08:00.359352 kernel: mlx5_core 0001:01:00.0: PTM is not supported by PCIe Jul 7 01:08:00.359428 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 Jul 7 01:08:00.359504 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 01:08:00.338088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 01:08:00.562987 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 01:08:00.563002 kernel: igb 0003:03:00.0: added PHC on eth0 Jul 7 01:08:00.563119 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 01:08:00.563193 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0f:f6:ac Jul 7 01:08:00.563263 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 Jul 7 01:08:00.563332 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 7 01:08:00.563401 kernel: igb 0003:03:00.1: Adding to iommu group 36 Jul 7 01:08:00.563475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 01:08:00.563484 kernel: igb 0003:03:00.1: added PHC on eth1 Jul 7 01:08:00.563569 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection Jul 7 01:08:00.563640 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:0f:f6:ad Jul 7 01:08:00.563708 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 Jul 7 01:08:00.563777 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Jul 7 01:08:00.563844 kernel: igb 0003:03:00.0 eno1: renamed from eth0 Jul 7 01:08:00.563913 kernel: igb 0003:03:00.1 eno2: renamed from eth1 Jul 7 01:08:00.564060 disk-uuid[1332]: Primary Header is updated. Jul 7 01:08:00.564060 disk-uuid[1332]: Secondary Entries is updated. Jul 7 01:08:00.564060 disk-uuid[1332]: Secondary Header is updated. 
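The mlx5 bandwidth line above is plain PCIe arithmetic: a gen3 link signals at 8 GT/s per lane with 128b/130b encoding, scaled by the negotiated width. A back-of-the-envelope sketch; the kernel's internal per-speed lookup table rounds slightly differently, which is why the log prints 31.504 and 63.008:

```python
# Rough sketch of the arithmetic behind the mlx5_core bandwidth message:
# 8 GT/s per lane, 128b/130b encoding, times the negotiated lane count.
def pcie_gen3_gbps(lanes: int) -> float:
    per_lane = 8.0 * 128 / 130      # usable Gb/s per lane after encoding
    return per_lane * lanes

print(f"x4 link: {pcie_gen3_gbps(4):.3f} Gb/s")   # ~31.5, log reports 31.504
print(f"x8 link: {pcie_gen3_gbps(8):.3f} Gb/s")   # ~63.0, log reports 63.008
```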
Jul 7 01:08:00.588501 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd Jul 7 01:08:00.617498 kernel: hub 2-3:1.0: USB hub found Jul 7 01:08:00.626491 kernel: hub 2-3:1.0: 4 ports detected Jul 7 01:08:00.667500 kernel: mlx5_core 0001:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 7 01:08:00.683424 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged Jul 7 01:08:00.728499 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd Jul 7 01:08:00.871498 kernel: hub 1-3:1.0: USB hub found Jul 7 01:08:00.871672 kernel: hub 1-3:1.0: 4 ports detected Jul 7 01:08:01.007499 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 01:08:01.019499 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 Jul 7 01:08:01.035369 kernel: mlx5_core 0001:01:00.1: PTM is not supported by PCIe Jul 7 01:08:01.035533 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 Jul 7 01:08:01.049607 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 7 01:08:01.400498 kernel: mlx5_core 0001:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Jul 7 01:08:01.417839 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable unplugged Jul 7 01:08:01.478232 disk-uuid[1333]: The operation has completed successfully. Jul 7 01:08:01.483050 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 01:08:01.756500 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 01:08:01.771499 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 Jul 7 01:08:01.771665 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 Jul 7 01:08:01.802932 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 01:08:01.803027 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 01:08:01.813374 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 01:08:01.823173 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:08:01.830366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:08:01.845445 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:08:01.856700 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 01:08:01.880992 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 01:08:01.892726 sh[1534]: Success Jul 7 01:08:01.898504 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:08:01.907493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 01:08:01.907516 kernel: device-mapper: uevent: version 1.0.3 Jul 7 01:08:01.907526 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 01:08:01.973500 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 7 01:08:02.003759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 01:08:02.016210 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 01:08:02.044435 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
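verity-setup.service above finishes assembling /dev/mapper/usr, the dm-verity device backing the read-only /usr image; the 'sha256 using shash "sha256-ce"' line is the kernel picking the ARMv8 crypto-extension SHA-256 implementation. A deliberately simplified sketch of the idea only, not of the real on-disk hash format:

```python
# Simplified shape of the dm-verity check: split the image into fixed-size
# blocks, hash each block, hash the hashes up to one root hash, and verify
# reads against that tree. Real dm-verity stores the tree on disk and salts
# the hashes; this is just the concept.
import hashlib

BLOCK = 4096

def verity_root(data: bytes) -> bytes:
    leaves = [hashlib.sha256(data[i:i + BLOCK]).digest()
              for i in range(0, len(data), BLOCK)]
    while len(leaves) > 1:
        leaves = [hashlib.sha256(b"".join(leaves[i:i + 128])).digest()
                  for i in range(0, len(leaves), 128)]
    return leaves[0]

image = b"\x00" * (BLOCK * 8)      # stand-in for the /usr image
print(verity_root(image).hex())
```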
Jul 7 01:08:02.052492 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 01:08:02.052506 kernel: BTRFS: device fsid aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (1556) Jul 7 01:08:02.052516 kernel: BTRFS info (device dm-0): first mount of filesystem aa7ffdf7-f152-4ceb-bd0e-b3b3f8f8b296 Jul 7 01:08:02.052525 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 7 01:08:02.052534 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 01:08:02.144186 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 01:08:02.155774 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 01:08:02.167863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 01:08:02.168853 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 01:08:02.195082 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 01:08:02.214492 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:6) scanned by mount (1585) Jul 7 01:08:02.263480 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 01:08:02.263504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 01:08:02.277911 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 01:08:02.311496 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 01:08:02.313567 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 01:08:02.325053 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:08:02.343335 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 01:08:02.369794 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:08:02.403387 systemd-networkd[1736]: lo: Link UP Jul 7 01:08:02.403393 systemd-networkd[1736]: lo: Gained carrier Jul 7 01:08:02.406835 systemd-networkd[1736]: Enumeration completed Jul 7 01:08:02.406975 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:08:02.408078 systemd-networkd[1736]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:08:02.415206 systemd[1]: Reached target network.target - Network. Jul 7 01:08:02.460055 systemd-networkd[1736]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:08:02.481470 ignition[1733]: Ignition 2.21.0 Jul 7 01:08:02.481478 ignition[1733]: Stage: fetch-offline Jul 7 01:08:02.481510 ignition[1733]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:02.490302 unknown[1733]: fetched base config from "system" Jul 7 01:08:02.481517 ignition[1733]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:02.490308 unknown[1733]: fetched user config from "system" Jul 7 01:08:02.481719 ignition[1733]: parsed url from cmdline: "" Jul 7 01:08:02.493519 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:08:02.481722 ignition[1733]: no config URL provided Jul 7 01:08:02.503844 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
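The fetch-offline entries here show Ignition's config discovery order on this box: nothing was passed on the kernel command line ("parsed url from cmdline", "no config URL provided"), so the very next entry falls back to the baked-in /usr/lib/ignition/user.ign. A loose sketch of that fallback, simplified for illustration rather than taken from Ignition's implementation:

```python
# Loose sketch of the fallback order visible in the fetch-offline entries.
import os

def find_offline_config(cmdline_url: str) -> str | None:
    if cmdline_url:
        return cmdline_url                        # "parsed url from cmdline"
    # otherwise: "no config URL provided" -> fall back to the system config
    user_ign = "/usr/lib/ignition/user.ign"       # "reading system config file"
    return user_ign if os.path.exists(user_ign) else None

print(find_offline_config(""))                    # the cmdline URL was empty here
```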
Jul 7 01:08:02.481727 ignition[1733]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 01:08:02.505028 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 01:08:02.481778 ignition[1733]: parsing config with SHA512: adf2800c1a1c545074e881c72bb91dfa52741eedeaa6200855ee3934248c66ec6f722c3198c7b63b271f6e3232b1f65b1559d1bc94184c59edb93002798c32a8 Jul 7 01:08:02.512400 systemd-networkd[1736]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 01:08:02.490983 ignition[1733]: fetch-offline: fetch-offline passed Jul 7 01:08:02.490988 ignition[1733]: POST message to Packet Timeline Jul 7 01:08:02.491017 ignition[1733]: POST Status error: resource requires networking Jul 7 01:08:02.491076 ignition[1733]: Ignition finished successfully Jul 7 01:08:02.557636 ignition[1779]: Ignition 2.21.0 Jul 7 01:08:02.557642 ignition[1779]: Stage: kargs Jul 7 01:08:02.557821 ignition[1779]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:02.557828 ignition[1779]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:02.558640 ignition[1779]: kargs: kargs passed Jul 7 01:08:02.558644 ignition[1779]: POST message to Packet Timeline Jul 7 01:08:02.558866 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 01:08:02.562673 ignition[1779]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36377->[::1]:53: read: connection refused Jul 7 01:08:02.762794 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #2 Jul 7 01:08:02.763410 ignition[1779]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38103->[::1]:53: read: connection refused Jul 7 01:08:03.105497 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 7 01:08:03.108752 systemd-networkd[1736]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
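The kargs stage cannot reach metadata.packet.net yet: DNS points at [::1]:53 and nothing is listening, because the interfaces are still coming up. The attempts above and the further attempts below are spaced roughly twice as far apart each time. A hedged sketch of that retry shape; the backoff constants are inferred from the timestamps, not taken from Ignition's source:

```python
# Retry-with-doubling-backoff, roughly matching the attempt timestamps in
# the log. The URL is the one Ignition is fetching; everything else is an
# illustrative simplification.
import time
import urllib.request

def fetch_with_backoff(url: str, first_delay: float = 0.2, attempts: int = 6):
    delay = first_delay
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:                 # DNS failures surface here too
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2
    return None

# fetch_with_backoff("https://metadata.packet.net/metadata")
```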
Jul 7 01:08:03.163944 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #3 Jul 7 01:08:03.164308 ignition[1779]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51854->[::1]:53: read: connection refused Jul 7 01:08:03.740504 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link down Jul 7 01:08:03.744034 systemd-networkd[1736]: eno1: Link UP Jul 7 01:08:03.744156 systemd-networkd[1736]: eno2: Link UP Jul 7 01:08:03.744267 systemd-networkd[1736]: enP1p1s0f0np0: Link UP Jul 7 01:08:03.744391 systemd-networkd[1736]: enP1p1s0f0np0: Gained carrier Jul 7 01:08:03.753703 systemd-networkd[1736]: enP1p1s0f1np1: Link UP Jul 7 01:08:03.790532 systemd-networkd[1736]: enP1p1s0f0np0: DHCPv4 address 147.28.143.214/30, gateway 147.28.143.213 acquired from 145.40.76.140 Jul 7 01:08:03.964589 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #4 Jul 7 01:08:03.965388 ignition[1779]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35940->[::1]:53: read: connection refused Jul 7 01:08:04.768568 systemd-networkd[1736]: enP1p1s0f0np0: Gained IPv6LL Jul 7 01:08:05.566118 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #5 Jul 7 01:08:05.566529 ignition[1779]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40177->[::1]:53: read: connection refused Jul 7 01:08:08.766964 ignition[1779]: GET https://metadata.packet.net/metadata: attempt #6 Jul 7 01:08:10.016728 ignition[1779]: GET result: OK Jul 7 01:08:10.341374 ignition[1779]: Ignition finished successfully Jul 7 01:08:10.344194 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 01:08:10.352052 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 01:08:10.376723 ignition[1809]: Ignition 2.21.0 Jul 7 01:08:10.376731 ignition[1809]: Stage: disks Jul 7 01:08:10.378630 ignition[1809]: no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:10.378642 ignition[1809]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:10.379599 ignition[1809]: disks: disks passed Jul 7 01:08:10.379603 ignition[1809]: POST message to Packet Timeline Jul 7 01:08:10.379623 ignition[1809]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 01:08:11.205240 ignition[1809]: GET result: OK Jul 7 01:08:11.495035 ignition[1809]: Ignition finished successfully Jul 7 01:08:11.498599 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 01:08:11.503954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 01:08:11.511797 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 01:08:11.520039 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:08:11.528424 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:08:11.537243 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:08:11.547548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 01:08:11.585667 systemd-fsck[1832]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 01:08:11.589594 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 01:08:11.597305 systemd[1]: Mounting sysroot.mount - /sysroot... 
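The DHCP lease recorded above is a point-to-point style /30: the subnet holds exactly two usable addresses, the gateway and the host itself. The standard library makes that easy to verify:

```python
# Quick check of the /30 lease from the log: it contains exactly the
# gateway (147.28.143.213) and the host address (147.28.143.214).
import ipaddress

iface = ipaddress.ip_interface("147.28.143.214/30")
print(iface.network)                              # 147.28.143.212/30
print([str(h) for h in iface.network.hosts()])    # ['147.28.143.213', '147.28.143.214']
```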
Jul 7 01:08:11.693495 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a6b10247-fbe6-4a25-95d9-ddd4b58604ec r/w with ordered data mode. Quota mode: none. Jul 7 01:08:11.693984 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 01:08:11.704395 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 01:08:11.715524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:08:11.737103 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 01:08:11.745498 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (1844) Jul 7 01:08:11.746494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 01:08:11.746505 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 01:08:11.746515 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 01:08:11.814316 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 01:08:11.840124 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 7 01:08:11.852062 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 01:08:11.852093 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:08:11.872353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 01:08:11.880610 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 01:08:11.895116 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 01:08:11.908928 coreos-metadata[1864]: Jul 07 01:08:11.891 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 01:08:11.919961 coreos-metadata[1865]: Jul 07 01:08:11.891 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 01:08:11.952072 initrd-setup-root[1881]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 01:08:11.958498 initrd-setup-root[1889]: cut: /sysroot/etc/group: No such file or directory Jul 7 01:08:11.964987 initrd-setup-root[1897]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 01:08:11.971536 initrd-setup-root[1904]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 01:08:12.041024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 01:08:12.053219 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 01:08:12.081030 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 01:08:12.090493 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 01:08:12.114178 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 01:08:12.124101 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 7 01:08:12.138447 ignition[1982]: INFO : Ignition 2.21.0 Jul 7 01:08:12.138447 ignition[1982]: INFO : Stage: mount Jul 7 01:08:12.149612 ignition[1982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:12.149612 ignition[1982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:12.149612 ignition[1982]: INFO : mount: mount passed Jul 7 01:08:12.149612 ignition[1982]: INFO : POST message to Packet Timeline Jul 7 01:08:12.149612 ignition[1982]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 01:08:12.727652 coreos-metadata[1864]: Jul 07 01:08:12.727 INFO Fetch successful Jul 7 01:08:12.772380 coreos-metadata[1864]: Jul 07 01:08:12.772 INFO wrote hostname ci-4344.1.1-a-a5852c4667 to /sysroot/etc/hostname Jul 7 01:08:12.776623 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 01:08:12.814176 coreos-metadata[1865]: Jul 07 01:08:12.814 INFO Fetch successful Jul 7 01:08:12.859275 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 7 01:08:12.859445 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 7 01:08:12.896403 ignition[1982]: INFO : GET result: OK Jul 7 01:08:13.211777 ignition[1982]: INFO : Ignition finished successfully Jul 7 01:08:13.214484 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 01:08:13.223064 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 01:08:13.260687 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 01:08:13.302865 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 (259:6) scanned by mount (2011) Jul 7 01:08:13.302900 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 492b2e2a-5dd7-445f-b930-e9dd6acadf93 Jul 7 01:08:13.317404 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 7 01:08:13.330623 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 01:08:13.339777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 01:08:13.382761 ignition[2028]: INFO : Ignition 2.21.0 Jul 7 01:08:13.382761 ignition[2028]: INFO : Stage: files Jul 7 01:08:13.392931 ignition[2028]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:13.392931 ignition[2028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:13.392931 ignition[2028]: DEBUG : files: compiled without relabeling support, skipping Jul 7 01:08:13.392931 ignition[2028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 01:08:13.392931 ignition[2028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 01:08:13.392931 ignition[2028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 01:08:13.392931 ignition[2028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 01:08:13.392931 ignition[2028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 01:08:13.392931 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 01:08:13.392931 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jul 7 01:08:13.389994 unknown[2028]: wrote ssh authorized keys file for user: core Jul 7 01:08:14.048541 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 01:08:16.044897 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 01:08:16.056066 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 7 01:08:16.598448 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 01:08:17.712248 ignition[2028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 01:08:17.724934 ignition[2028]: INFO : files: files passed Jul 7 01:08:17.724934 ignition[2028]: INFO : POST message to Packet Timeline Jul 7 01:08:17.724934 ignition[2028]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 01:08:18.487122 ignition[2028]: INFO : GET result: OK Jul 7 01:08:18.795992 ignition[2028]: INFO : Ignition finished successfully Jul 7 01:08:18.798603 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 01:08:18.809768 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 01:08:18.825040 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 01:08:18.844171 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 01:08:18.844371 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 01:08:18.862629 initrd-setup-root-after-ignition[2073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:08:18.862629 initrd-setup-root-after-ignition[2073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:08:18.856989 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:08:18.914897 initrd-setup-root-after-ignition[2077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 01:08:18.870169 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 01:08:18.887203 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 01:08:18.946691 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 7 01:08:18.948565 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 01:08:18.958798 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 01:08:18.975256 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 01:08:18.986666 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 01:08:18.987733 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 01:08:19.024526 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:08:19.037394 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 01:08:19.060784 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:08:19.072612 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 01:08:19.078586 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 01:08:19.090149 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 01:08:19.090257 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 01:08:19.101863 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 01:08:19.113117 systemd[1]: Stopped target basic.target - Basic System. Jul 7 01:08:19.124580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 01:08:19.135983 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 01:08:19.147355 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 01:08:19.158535 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 01:08:19.169778 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 01:08:19.181115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 01:08:19.192366 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 01:08:19.203613 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 01:08:19.220466 systemd[1]: Stopped target swap.target - Swaps. Jul 7 01:08:19.231690 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 01:08:19.231807 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 01:08:19.243274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:08:19.254429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:08:19.265405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 01:08:19.265494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:08:19.276526 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 01:08:19.276625 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 01:08:19.288013 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 01:08:19.288111 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 01:08:19.299412 systemd[1]: Stopped target paths.target - Path Units. Jul 7 01:08:19.310692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 01:08:19.316512 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:08:19.328284 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 7 01:08:19.339854 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 01:08:19.351591 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 01:08:19.351678 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 01:08:19.363292 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 01:08:19.363384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 01:08:19.469451 ignition[2098]: INFO : Ignition 2.21.0 Jul 7 01:08:19.469451 ignition[2098]: INFO : Stage: umount Jul 7 01:08:19.469451 ignition[2098]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 01:08:19.469451 ignition[2098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 01:08:19.469451 ignition[2098]: INFO : umount: umount passed Jul 7 01:08:19.469451 ignition[2098]: INFO : POST message to Packet Timeline Jul 7 01:08:19.469451 ignition[2098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 01:08:19.374977 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 01:08:19.375074 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 01:08:19.386652 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 01:08:19.386744 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 01:08:19.398371 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 01:08:19.398464 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 01:08:19.416616 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 01:08:19.442078 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 01:08:19.450984 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 01:08:19.451099 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:08:19.463551 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 01:08:19.463643 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 01:08:19.477672 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 01:08:19.478493 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 01:08:19.479604 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 01:08:19.490149 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 01:08:19.490247 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 01:08:20.281981 ignition[2098]: INFO : GET result: OK Jul 7 01:08:20.565149 ignition[2098]: INFO : Ignition finished successfully Jul 7 01:08:20.567500 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 01:08:20.567763 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 01:08:20.575494 systemd[1]: Stopped target network.target - Network. Jul 7 01:08:20.584351 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 01:08:20.584431 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 01:08:20.593734 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 01:08:20.593781 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 01:08:20.603213 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 01:08:20.603260 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 01:08:20.612708 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jul 7 01:08:20.612740 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 01:08:20.622278 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 01:08:20.622330 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 01:08:20.631971 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 01:08:20.641636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 01:08:20.651762 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 01:08:20.651879 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 01:08:20.665960 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 01:08:20.666246 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 01:08:20.667543 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 01:08:20.676998 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 01:08:20.678976 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 01:08:20.686003 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 01:08:20.686099 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:08:20.698007 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 01:08:20.706138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 01:08:20.706194 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 01:08:20.716870 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 01:08:20.716916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:08:20.727454 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 01:08:20.727517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 01:08:20.737761 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 01:08:20.737808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:08:20.753704 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 01:08:20.765853 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 01:08:20.765922 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 01:08:20.772856 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 01:08:20.773153 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:08:20.787416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 01:08:20.787463 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 01:08:20.798348 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 01:08:20.798412 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:08:20.809363 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 01:08:20.809402 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 01:08:20.820872 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 01:08:20.820913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 01:08:20.832015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 7 01:08:20.832074 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 01:08:20.849984 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 01:08:20.860705 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 01:08:20.860787 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:08:20.872597 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 01:08:20.872636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:08:20.884568 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 01:08:20.884611 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:08:20.896491 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 01:08:20.896530 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:08:20.908332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 01:08:20.908365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:08:20.922268 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 01:08:20.922328 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 01:08:20.922356 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 01:08:20.922382 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 01:08:20.922703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 01:08:20.922774 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 01:08:21.443725 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 01:08:21.444593 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 01:08:21.455259 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 01:08:21.466513 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 01:08:21.497794 systemd[1]: Switching root. Jul 7 01:08:21.559195 systemd-journald[909]: Journal stopped Jul 7 01:08:23.749266 systemd-journald[909]: Received SIGTERM from PID 1 (systemd). Jul 7 01:08:23.749294 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 01:08:23.749304 kernel: SELinux: policy capability open_perms=1 Jul 7 01:08:23.749311 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 01:08:23.749318 kernel: SELinux: policy capability always_check_network=0 Jul 7 01:08:23.749325 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 01:08:23.749333 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 01:08:23.749342 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 01:08:23.749349 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 01:08:23.749357 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 01:08:23.749364 kernel: audit: type=1403 audit(1751850501.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 01:08:23.749372 systemd[1]: Successfully loaded SELinux policy in 141.830ms. Jul 7 01:08:23.749382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.535ms. 
Jul 7 01:08:23.749392 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 01:08:23.749402 systemd[1]: Detected architecture arm64. Jul 7 01:08:23.749410 systemd[1]: Detected first boot. Jul 7 01:08:23.749419 systemd[1]: Hostname set to . Jul 7 01:08:23.749427 systemd[1]: Initializing machine ID from random generator. Jul 7 01:08:23.749436 zram_generator::config[2166]: No configuration found. Jul 7 01:08:23.749446 systemd[1]: Populated /etc with preset unit settings. Jul 7 01:08:23.749455 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 01:08:23.749464 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 01:08:23.749472 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 01:08:23.749481 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 01:08:23.749497 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 01:08:23.749506 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 01:08:23.749516 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 01:08:23.749525 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 01:08:23.749534 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 01:08:23.749542 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 01:08:23.749551 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 01:08:23.749560 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 01:08:23.749568 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 01:08:23.749577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 01:08:23.749587 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 01:08:23.749596 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 01:08:23.749604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 01:08:23.749613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 01:08:23.749622 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 01:08:23.749630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 01:08:23.749641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 01:08:23.749650 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 01:08:23.749660 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 01:08:23.749669 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 01:08:23.749678 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 01:08:23.749686 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
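"Initializing machine ID from random generator" on first boot means /etc/machine-id is populated with 128 fresh random bits rendered as 32 lowercase hex characters. A rough sketch of the equivalent; systemd additionally coerces the bits into a v4-UUID layout, which this deliberately skips:

```python
# Rough equivalent of first-boot machine ID generation: 128 random bits as
# 32 lowercase hex characters. systemd also forces the UUIDv4 version and
# variant bits, which this sketch omits.
import secrets

machine_id = secrets.token_hex(16)
print(machine_id)          # shape of what lands in /etc/machine-id
```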
Jul 7 01:08:23.749695 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 01:08:23.749704 systemd[1]: Reached target slices.target - Slice Units. Jul 7 01:08:23.749713 systemd[1]: Reached target swap.target - Swaps. Jul 7 01:08:23.749722 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 01:08:23.749731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 01:08:23.749741 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 01:08:23.749750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 01:08:23.749759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 01:08:23.749769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 01:08:23.749778 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 01:08:23.749787 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 01:08:23.749795 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 01:08:23.749804 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 01:08:23.749813 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 01:08:23.749822 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 01:08:23.749831 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 01:08:23.749841 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 01:08:23.749850 systemd[1]: Reached target machines.target - Containers. Jul 7 01:08:23.749859 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 01:08:23.749869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:08:23.749879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 01:08:23.749888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 01:08:23.749896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:08:23.749905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:08:23.749914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:08:23.749924 kernel: ACPI: bus type drm_connector registered Jul 7 01:08:23.749932 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 01:08:23.749941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 01:08:23.749950 kernel: fuse: init (API version 7.41) Jul 7 01:08:23.749958 kernel: loop: module loaded Jul 7 01:08:23.749966 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 01:08:23.749975 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 01:08:23.749984 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 01:08:23.749995 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 01:08:23.750003 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 7 01:08:23.750013 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 01:08:23.750022 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 01:08:23.750031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 01:08:23.750040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 01:08:23.750049 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 01:08:23.750074 systemd-journald[2272]: Collecting audit messages is disabled. Jul 7 01:08:23.750098 systemd-journald[2272]: Journal started Jul 7 01:08:23.750117 systemd-journald[2272]: Runtime Journal (/run/log/journal/95c3a0ead81141a890d3c73324468eb5) is 8M, max 4G, 3.9G free. Jul 7 01:08:22.315810 systemd[1]: Queued start job for default target multi-user.target. Jul 7 01:08:22.345058 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 7 01:08:22.345393 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 01:08:22.345697 systemd[1]: systemd-journald.service: Consumed 3.475s CPU time. Jul 7 01:08:23.785502 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 01:08:23.806500 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 01:08:23.829697 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 01:08:23.829733 systemd[1]: Stopped verity-setup.service. Jul 7 01:08:23.855502 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 01:08:23.860866 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 01:08:23.866437 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 01:08:23.871959 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 01:08:23.877415 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 01:08:23.882937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 01:08:23.888332 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 01:08:23.893891 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 01:08:23.899458 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 01:08:23.905090 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 01:08:23.905274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 01:08:23.910670 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:08:23.910826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:08:23.917312 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:08:23.917503 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:08:23.922772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:08:23.922947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:08:23.928240 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 01:08:23.928403 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 01:08:23.933590 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 7 01:08:23.933766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:08:23.939671 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 01:08:23.944943 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 01:08:23.950099 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 01:08:23.955246 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 01:08:23.970215 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 01:08:23.976480 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 01:08:23.999166 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 01:08:24.004263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 01:08:24.004291 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 01:08:24.009865 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 01:08:24.015765 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 01:08:24.020705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:08:24.022085 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 01:08:24.027923 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 01:08:24.032800 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:08:24.033884 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 01:08:24.038750 systemd-journald[2272]: Time spent on flushing to /var/log/journal/95c3a0ead81141a890d3c73324468eb5 is 24.582ms for 2484 entries. Jul 7 01:08:24.038750 systemd-journald[2272]: System Journal (/var/log/journal/95c3a0ead81141a890d3c73324468eb5) is 8M, max 195.6M, 187.6M free. Jul 7 01:08:24.080815 systemd-journald[2272]: Received client request to flush runtime journal. Jul 7 01:08:24.080858 kernel: loop0: detected capacity change from 0 to 8 Jul 7 01:08:24.038732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:08:24.039818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 01:08:24.056671 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 01:08:24.062394 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 01:08:24.068508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 01:08:24.083725 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 01:08:24.088090 systemd-tmpfiles[2310]: ACLs are not supported, ignoring. Jul 7 01:08:24.088102 systemd-tmpfiles[2310]: ACLs are not supported, ignoring. Jul 7 01:08:24.094494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 01:08:24.098715 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
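The journal flush figure above is easy to turn into a per-entry cost:

```python
# Per-entry cost of the runtime-to-persistent journal flush reported above.
flush_ms, entries = 24.582, 2484
print(f"{flush_ms / entries * 1000:.1f} µs per entry on average")   # ~9.9 µs
```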
Jul 7 01:08:24.103549 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 01:08:24.108329 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 01:08:24.113041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 01:08:24.117742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 01:08:24.126198 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 01:08:24.132346 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 01:08:24.152501 kernel: loop1: detected capacity change from 0 to 107312 Jul 7 01:08:24.154379 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 01:08:24.161996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 01:08:24.162640 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 01:08:24.178610 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 01:08:24.184749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 01:08:24.216495 kernel: loop2: detected capacity change from 0 to 138376 Jul 7 01:08:24.223725 systemd-tmpfiles[2341]: ACLs are not supported, ignoring. Jul 7 01:08:24.223737 systemd-tmpfiles[2341]: ACLs are not supported, ignoring. Jul 7 01:08:24.227409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 01:08:24.260503 kernel: loop3: detected capacity change from 0 to 207008 Jul 7 01:08:24.278185 ldconfig[2302]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 01:08:24.281529 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 01:08:24.326503 kernel: loop4: detected capacity change from 0 to 8 Jul 7 01:08:24.338505 kernel: loop5: detected capacity change from 0 to 107312 Jul 7 01:08:24.354500 kernel: loop6: detected capacity change from 0 to 138376 Jul 7 01:08:24.373500 kernel: loop7: detected capacity change from 0 to 207008 Jul 7 01:08:24.380214 (sd-merge)[2356]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 7 01:08:24.380652 (sd-merge)[2356]: Merged extensions into '/usr'. Jul 7 01:08:24.383851 systemd[1]: Reload requested from client PID 2308 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 01:08:24.383862 systemd[1]: Reloading... Jul 7 01:08:24.425493 zram_generator::config[2381]: No configuration found. Jul 7 01:08:24.502984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:08:24.576505 systemd[1]: Reloading finished in 192 ms. Jul 7 01:08:24.605852 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 01:08:24.611130 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 01:08:24.633809 systemd[1]: Starting ensure-sysext.service... Jul 7 01:08:24.640001 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 01:08:24.646988 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
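systemd-sysext's "Merged extensions into '/usr'" above is what combines the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-packet) with the base image into a single /usr view; the real mechanism is an overlayfs mount rather than a copy. A toy sketch of the layering idea, with invented paths:

```python
# Toy model of sysext layering: each extension contributes a /usr tree and
# later layers shadow earlier ones. Paths are made up for illustration.
base_usr = {"bin/bash": "base", "lib/libc.so": "base"}
extensions = {
    "containerd-flatcar": {"bin/containerd": "ext"},
    "kubernetes": {"bin/kubelet": "ext", "bin/kubectl": "ext"},
}
merged = dict(base_usr)
for name, tree in extensions.items():
    merged.update(tree)          # later layers shadow earlier ones
print(sorted(merged))
```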
Jul 7 01:08:24.658134 systemd[1]: Reload requested from client PID 2436 ('systemctl') (unit ensure-sysext.service)... Jul 7 01:08:24.658146 systemd[1]: Reloading... Jul 7 01:08:24.659372 systemd-tmpfiles[2437]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 01:08:24.659406 systemd-tmpfiles[2437]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 01:08:24.659647 systemd-tmpfiles[2437]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 01:08:24.659824 systemd-tmpfiles[2437]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 01:08:24.660387 systemd-tmpfiles[2437]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 01:08:24.660596 systemd-tmpfiles[2437]: ACLs are not supported, ignoring. Jul 7 01:08:24.660639 systemd-tmpfiles[2437]: ACLs are not supported, ignoring. Jul 7 01:08:24.663602 systemd-tmpfiles[2437]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:08:24.663608 systemd-tmpfiles[2437]: Skipping /boot Jul 7 01:08:24.672176 systemd-tmpfiles[2437]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 01:08:24.672183 systemd-tmpfiles[2437]: Skipping /boot Jul 7 01:08:24.676085 systemd-udevd[2438]: Using default interface naming scheme 'v255'. Jul 7 01:08:24.704495 zram_generator::config[2468]: No configuration found. Jul 7 01:08:24.752500 kernel: IPMI message handler: version 39.2 Jul 7 01:08:24.763500 kernel: ipmi device interface Jul 7 01:08:24.763544 kernel: MACsec IEEE 802.1AE Jul 7 01:08:24.776496 kernel: ipmi_si: IPMI System Interface driver Jul 7 01:08:24.776525 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 7 01:08:24.776546 kernel: ipmi_si: Unable to find any System Interface(s) Jul 7 01:08:24.787131 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:08:24.877884 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 01:08:24.877959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - SAMSUNG MZ1LB960HAJQ-00007 OEM. Jul 7 01:08:24.882830 systemd[1]: Reloading finished in 224 ms. Jul 7 01:08:24.896707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 01:08:24.916005 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 01:08:24.937752 systemd[1]: Finished ensure-sysext.service. Jul 7 01:08:24.959904 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 01:08:24.978223 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 01:08:24.983260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 01:08:24.984140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 01:08:24.990107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 01:08:24.995981 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 01:08:25.001765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
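The systemd-tmpfiles warnings above mean the same path is declared by more than one tmpfiles.d line; only the first line encountered takes effect and later ones are ignored. A purely hypothetical illustration of the pattern (paths and modes invented, not taken from the fragments named in the log):
    # /usr/lib/tmpfiles.d/a.conf  (read first)
    d /var/lib/example 0755 root root -
    # /usr/lib/tmpfiles.d/b.conf  (read later; triggers "Duplicate line for path ..., ignoring")
    d /var/lib/example 0700 root root -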
Jul 7 01:08:25.006659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 01:08:25.007512 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 01:08:25.012315 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 01:08:25.013399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 01:08:25.019293 augenrules[2689]: No rules Jul 7 01:08:25.019974 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 01:08:25.026549 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 01:08:25.032765 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 01:08:25.038280 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 01:08:25.043674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 01:08:25.048879 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 01:08:25.049092 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 01:08:25.053778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 01:08:25.059465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 01:08:25.059628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 01:08:25.064066 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 01:08:25.064211 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 01:08:25.069214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 01:08:25.069392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 01:08:25.073907 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 01:08:25.074070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 01:08:25.078704 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 01:08:25.083449 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 01:08:25.090663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 01:08:25.102431 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 01:08:25.107445 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 01:08:25.107532 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 01:08:25.108878 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 01:08:25.138732 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 01:08:25.143216 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 01:08:25.145492 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
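The modprobe@dm_mod/drm/efi_pstore/loop units being started here are instances of systemd's modprobe@.service template, which simply loads the kernel module named by the instance. Roughly equivalent by hand (module name taken from the log, the template's exact modprobe flags omitted):
    sudo modprobe loop                           # what modprobe@loop.service effectively does
    sudo systemctl start modprobe@loop.service   # the same thing driven through systemd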
Jul 7 01:08:25.175670 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 01:08:25.234581 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 01:08:25.239304 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 01:08:25.241605 systemd-resolved[2696]: Positive Trust Anchors: Jul 7 01:08:25.241617 systemd-resolved[2696]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 01:08:25.241650 systemd-resolved[2696]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 01:08:25.245464 systemd-resolved[2696]: Using system hostname 'ci-4344.1.1-a-a5852c4667'. Jul 7 01:08:25.246789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 01:08:25.249020 systemd-networkd[2695]: lo: Link UP Jul 7 01:08:25.249026 systemd-networkd[2695]: lo: Gained carrier Jul 7 01:08:25.251171 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 01:08:25.252290 systemd-networkd[2695]: bond0: netdev ready Jul 7 01:08:25.255509 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 01:08:25.259894 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 01:08:25.261150 systemd-networkd[2695]: Enumeration completed Jul 7 01:08:25.264247 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 01:08:25.268731 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 01:08:25.273129 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 01:08:25.274152 systemd-networkd[2695]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:c6:a8.network. Jul 7 01:08:25.277425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 01:08:25.281861 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 01:08:25.281883 systemd[1]: Reached target paths.target - Path Units. Jul 7 01:08:25.286141 systemd[1]: Reached target timers.target - Timer Units. Jul 7 01:08:25.291154 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 01:08:25.296827 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 01:08:25.302933 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 01:08:25.311359 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 01:08:25.316300 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 01:08:25.321385 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 01:08:25.325936 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 01:08:25.330419 systemd[1]: Reached target network.target - Network. 
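systemd-resolved logs its built-in DNSSEC positive trust anchor (the root-zone DS record) plus the default negative anchors for private and reverse zones. If an anchor ever had to be supplied manually, resolved reads dnssec-trust-anchors.d fragments; a sketch reusing the DS record printed above (the file name is an example, the record is from the log):
    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d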
Jul 7 01:08:25.334839 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 01:08:25.339149 systemd[1]: Reached target basic.target - Basic System. Jul 7 01:08:25.343426 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:08:25.343446 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 01:08:25.344460 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 01:08:25.366158 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 01:08:25.371748 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 01:08:25.377374 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 01:08:25.382847 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 01:08:25.387921 coreos-metadata[2738]: Jul 07 01:08:25.387 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 01:08:25.388529 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 01:08:25.390734 coreos-metadata[2738]: Jul 07 01:08:25.390 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 01:08:25.393033 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 01:08:25.393149 jq[2743]: false Jul 7 01:08:25.394075 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 01:08:25.399666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 01:08:25.404588 extend-filesystems[2745]: Found /dev/nvme0n1p6 Jul 7 01:08:25.409523 extend-filesystems[2745]: Found /dev/nvme0n1p9 Jul 7 01:08:25.405297 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 01:08:25.418763 extend-filesystems[2745]: Checking size of /dev/nvme0n1p9 Jul 7 01:08:25.415229 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 01:08:25.427984 extend-filesystems[2745]: Resized partition /dev/nvme0n1p9 Jul 7 01:08:25.449881 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks Jul 7 01:08:25.427519 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 01:08:25.450094 extend-filesystems[2765]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 01:08:25.446293 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 01:08:25.455786 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 01:08:25.464631 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 01:08:25.465164 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 01:08:25.465822 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 01:08:25.471762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 01:08:25.478113 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 01:08:25.479394 jq[2779]: true Jul 7 01:08:25.483329 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
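extend-filesystems has located /dev/nvme0n1p6 and /dev/nvme0n1p9 and handed p9 to resize2fs 1.47.2; the kernel line confirms an online ext4 grow from 553472 to 233815889 blocks. The manual equivalent is a single command, since ext4 supports growing while mounted (device name taken from the log):
    sudo resize2fs /dev/nvme0n1p9   # grow the mounted ext4 filesystem to fill its partition
    df -h /                         # confirm the new size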
Jul 7 01:08:25.483514 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 01:08:25.483784 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 01:08:25.483973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 01:08:25.486358 systemd-logind[2767]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 01:08:25.488903 systemd-logind[2767]: New seat seat0. Jul 7 01:08:25.489705 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 01:08:25.495068 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 01:08:25.495260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 01:08:25.503053 update_engine[2778]: I20250707 01:08:25.502928 2778 main.cc:92] Flatcar Update Engine starting Jul 7 01:08:25.515462 jq[2782]: true Jul 7 01:08:25.516364 (ntainerd)[2783]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 01:08:25.524797 tar[2781]: linux-arm64/LICENSE Jul 7 01:08:25.524967 tar[2781]: linux-arm64/helm Jul 7 01:08:25.531906 dbus-daemon[2739]: [system] SELinux support is enabled Jul 7 01:08:25.532429 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 01:08:25.535622 update_engine[2778]: I20250707 01:08:25.535589 2778 update_check_scheduler.cc:74] Next update check in 6m54s Jul 7 01:08:25.542675 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 01:08:25.542701 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 01:08:25.543154 dbus-daemon[2739]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 01:08:25.544333 bash[2809]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:08:25.547513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 01:08:25.547529 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 01:08:25.553190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 01:08:25.558821 systemd[1]: Started update-engine.service - Update Engine. Jul 7 01:08:25.565608 systemd[1]: Starting sshkeys.service... Jul 7 01:08:25.588928 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 01:08:25.598505 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 01:08:25.604499 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 7 01:08:25.621299 locksmithd[2812]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 01:08:25.624212 coreos-metadata[2821]: Jul 07 01:08:25.624 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 01:08:25.625330 coreos-metadata[2821]: Jul 07 01:08:25.625 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 01:08:25.672611 containerd[2783]: time="2025-07-07T01:08:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 01:08:25.674592 containerd[2783]: time="2025-07-07T01:08:25.674566560Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 01:08:25.682436 containerd[2783]: time="2025-07-07T01:08:25.682408400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.64µs" Jul 7 01:08:25.682457 containerd[2783]: time="2025-07-07T01:08:25.682437240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 01:08:25.682457 containerd[2783]: time="2025-07-07T01:08:25.682453200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 01:08:25.682634 containerd[2783]: time="2025-07-07T01:08:25.682620320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 01:08:25.682657 containerd[2783]: time="2025-07-07T01:08:25.682638080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 01:08:25.682679 containerd[2783]: time="2025-07-07T01:08:25.682659560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 01:08:25.682717 containerd[2783]: time="2025-07-07T01:08:25.682704560Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 01:08:25.682739 containerd[2783]: time="2025-07-07T01:08:25.682718640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 01:08:25.682958 containerd[2783]: time="2025-07-07T01:08:25.682942600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 01:08:25.682976 containerd[2783]: time="2025-07-07T01:08:25.682957920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 01:08:25.682976 containerd[2783]: time="2025-07-07T01:08:25.682968600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 01:08:25.683011 containerd[2783]: time="2025-07-07T01:08:25.682977080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 01:08:25.683058 containerd[2783]: time="2025-07-07T01:08:25.683047920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 01:08:25.683240 containerd[2783]: 
time="2025-07-07T01:08:25.683226920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 01:08:25.683277 containerd[2783]: time="2025-07-07T01:08:25.683265920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 01:08:25.683295 containerd[2783]: time="2025-07-07T01:08:25.683277080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 01:08:25.683312 containerd[2783]: time="2025-07-07T01:08:25.683300560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 01:08:25.683527 containerd[2783]: time="2025-07-07T01:08:25.683515080Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 01:08:25.683592 containerd[2783]: time="2025-07-07T01:08:25.683581360Z" level=info msg="metadata content store policy set" policy=shared Jul 7 01:08:25.691409 containerd[2783]: time="2025-07-07T01:08:25.691387440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 01:08:25.691447 containerd[2783]: time="2025-07-07T01:08:25.691425240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 01:08:25.691447 containerd[2783]: time="2025-07-07T01:08:25.691437520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 01:08:25.691513 containerd[2783]: time="2025-07-07T01:08:25.691449480Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 01:08:25.691513 containerd[2783]: time="2025-07-07T01:08:25.691461240Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 01:08:25.691513 containerd[2783]: time="2025-07-07T01:08:25.691479360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 01:08:25.691513 containerd[2783]: time="2025-07-07T01:08:25.691497680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 01:08:25.691513 containerd[2783]: time="2025-07-07T01:08:25.691509040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691523200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691533200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691542040Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691554680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691662960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691682720Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691696920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 01:08:25.691710 containerd[2783]: time="2025-07-07T01:08:25.691707600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691718760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691729240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691739320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691749520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691761320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691771520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 01:08:25.691834 containerd[2783]: time="2025-07-07T01:08:25.691781640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 01:08:25.691976 containerd[2783]: time="2025-07-07T01:08:25.691965160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 01:08:25.691999 containerd[2783]: time="2025-07-07T01:08:25.691981520Z" level=info msg="Start snapshots syncer" Jul 7 01:08:25.692016 containerd[2783]: time="2025-07-07T01:08:25.692005200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 01:08:25.692223 containerd[2783]: time="2025-07-07T01:08:25.692195720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 01:08:25.692310 containerd[2783]: time="2025-07-07T01:08:25.692241760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 01:08:25.692310 containerd[2783]: time="2025-07-07T01:08:25.692300000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 01:08:25.692411 containerd[2783]: time="2025-07-07T01:08:25.692399040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 01:08:25.692430 containerd[2783]: time="2025-07-07T01:08:25.692422200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 01:08:25.692448 containerd[2783]: time="2025-07-07T01:08:25.692433240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 01:08:25.692448 containerd[2783]: time="2025-07-07T01:08:25.692444960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 01:08:25.692484 containerd[2783]: time="2025-07-07T01:08:25.692455360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 01:08:25.692484 containerd[2783]: time="2025-07-07T01:08:25.692465840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 01:08:25.692484 containerd[2783]: time="2025-07-07T01:08:25.692475560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 01:08:25.692542 containerd[2783]: time="2025-07-07T01:08:25.692523120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 01:08:25.692542 containerd[2783]: 
time="2025-07-07T01:08:25.692535600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 01:08:25.692574 containerd[2783]: time="2025-07-07T01:08:25.692545880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 01:08:25.692594 containerd[2783]: time="2025-07-07T01:08:25.692582200Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 01:08:25.692612 containerd[2783]: time="2025-07-07T01:08:25.692594480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 01:08:25.692612 containerd[2783]: time="2025-07-07T01:08:25.692602560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 01:08:25.692644 containerd[2783]: time="2025-07-07T01:08:25.692612280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 01:08:25.692644 containerd[2783]: time="2025-07-07T01:08:25.692620200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 01:08:25.692644 containerd[2783]: time="2025-07-07T01:08:25.692629200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 01:08:25.692644 containerd[2783]: time="2025-07-07T01:08:25.692638920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 01:08:25.692724 containerd[2783]: time="2025-07-07T01:08:25.692716720Z" level=info msg="runtime interface created" Jul 7 01:08:25.692724 containerd[2783]: time="2025-07-07T01:08:25.692722600Z" level=info msg="created NRI interface" Jul 7 01:08:25.692763 containerd[2783]: time="2025-07-07T01:08:25.692730600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 01:08:25.692763 containerd[2783]: time="2025-07-07T01:08:25.692742040Z" level=info msg="Connect containerd service" Jul 7 01:08:25.692795 containerd[2783]: time="2025-07-07T01:08:25.692767120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 01:08:25.693409 containerd[2783]: time="2025-07-07T01:08:25.693391760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:08:25.772565 containerd[2783]: time="2025-07-07T01:08:25.772508560Z" level=info msg="Start subscribing containerd event" Jul 7 01:08:25.772594 containerd[2783]: time="2025-07-07T01:08:25.772579480Z" level=info msg="Start recovering state" Jul 7 01:08:25.772676 containerd[2783]: time="2025-07-07T01:08:25.772664200Z" level=info msg="Start event monitor" Jul 7 01:08:25.772705 containerd[2783]: time="2025-07-07T01:08:25.772678520Z" level=info msg="Start cni network conf syncer for default" Jul 7 01:08:25.772705 containerd[2783]: time="2025-07-07T01:08:25.772685760Z" level=info msg="Start streaming server" Jul 7 01:08:25.772705 containerd[2783]: time="2025-07-07T01:08:25.772697280Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 01:08:25.772705 containerd[2783]: time="2025-07-07T01:08:25.772705120Z" level=info 
msg="runtime interface starting up..." Jul 7 01:08:25.772771 containerd[2783]: time="2025-07-07T01:08:25.772711360Z" level=info msg="starting plugins..." Jul 7 01:08:25.772771 containerd[2783]: time="2025-07-07T01:08:25.772723280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 01:08:25.772875 containerd[2783]: time="2025-07-07T01:08:25.772855200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 01:08:25.772916 containerd[2783]: time="2025-07-07T01:08:25.772906600Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 01:08:25.772964 containerd[2783]: time="2025-07-07T01:08:25.772955520Z" level=info msg="containerd successfully booted in 0.100729s" Jul 7 01:08:25.773013 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 01:08:25.858063 tar[2781]: linux-arm64/README.md Jul 7 01:08:25.884533 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 01:08:25.969503 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889 Jul 7 01:08:25.985961 extend-filesystems[2765]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 01:08:25.985961 extend-filesystems[2765]: old_desc_blocks = 1, new_desc_blocks = 112 Jul 7 01:08:25.985961 extend-filesystems[2765]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long. Jul 7 01:08:26.016783 extend-filesystems[2745]: Resized filesystem in /dev/nvme0n1p9 Jul 7 01:08:25.988376 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 01:08:25.988737 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 01:08:26.001866 systemd[1]: extend-filesystems.service: Consumed 213ms CPU time, 69.2M memory peak. Jul 7 01:08:26.239162 sshd_keygen[2770]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 01:08:26.257644 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 01:08:26.264932 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 01:08:26.292117 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 01:08:26.292324 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 01:08:26.299305 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 01:08:26.329196 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 01:08:26.335989 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 01:08:26.342582 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 01:08:26.348304 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 01:08:26.390867 coreos-metadata[2738]: Jul 07 01:08:26.390 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 01:08:26.391333 coreos-metadata[2738]: Jul 07 01:08:26.391 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 01:08:26.604505 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up Jul 7 01:08:26.621504 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link Jul 7 01:08:26.622964 systemd-networkd[2695]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:c6:a9.network. 
Jul 7 01:08:26.625434 coreos-metadata[2821]: Jul 07 01:08:26.625 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 01:08:26.625884 coreos-metadata[2821]: Jul 07 01:08:26.625 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 01:08:27.247502 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link down Jul 7 01:08:27.264503 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with a down link Jul 7 01:08:27.264959 systemd-networkd[2695]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 7 01:08:27.265549 systemd-networkd[2695]: enP1p1s0f0np0: Link UP Jul 7 01:08:27.265798 systemd-networkd[2695]: enP1p1s0f0np0: Gained carrier Jul 7 01:08:27.267145 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 01:08:27.284493 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 01:08:27.303823 systemd-networkd[2695]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:c6:a8.network. Jul 7 01:08:27.304104 systemd-networkd[2695]: enP1p1s0f1np1: Link UP Jul 7 01:08:27.304288 systemd-networkd[2695]: bond0: Link UP Jul 7 01:08:27.304547 systemd-networkd[2695]: bond0: Gained carrier Jul 7 01:08:27.304716 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:27.305271 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:27.305537 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:27.305681 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:27.388775 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 7 01:08:27.388807 kernel: bond0: active interface up! Jul 7 01:08:28.391433 coreos-metadata[2738]: Jul 07 01:08:28.391 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 01:08:28.448531 systemd-networkd[2695]: bond0: Gained IPv6LL Jul 7 01:08:28.449042 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:28.577734 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:28.577845 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:28.579552 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 01:08:28.585389 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 01:08:28.592397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:08:28.621905 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 01:08:28.625878 coreos-metadata[2821]: Jul 07 01:08:28.625 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 Jul 7 01:08:28.643585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 01:08:29.241208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
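The networkd and mlx5 messages show both 25G ports being matched by MAC (the 10-0c:42:a1:49:c6:a8 / ...:a9 .network units) and enslaved to bond0, which itself is configured by 05-bond0.network; the 802.3ad warning indicates an LACP bond still waiting on the switch side. The unit contents are not in the log, so the following is only a plausible minimal shape for that kind of setup (file names from the log where given, every option value an assumption):
    # /etc/systemd/network/25-bond0.netdev   (hypothetical; something must define the bond device)
    [NetDev]
    Name=bond0
    Kind=bond
    [Bond]
    Mode=802.3ad

    # /etc/systemd/network/10-0c:42:a1:49:c6:a8.network   (name from the log, body assumed)
    [Match]
    MACAddress=0c:42:a1:49:c6:a8
    [Network]
    Bond=bond0

    # /etc/systemd/network/05-bond0.network   (name from the log, body assumed)
    [Match]
    Name=bond0
    [Network]
    DHCP=yes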
Jul 7 01:08:29.247383 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:08:29.606080 kubelet[2909]: E0707 01:08:29.606003 2909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:08:29.608657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:08:29.608784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:08:29.609111 systemd[1]: kubelet.service: Consumed 723ms CPU time, 263.5M memory peak. Jul 7 01:08:30.495843 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 01:08:30.502105 systemd[1]: Started sshd@0-147.28.143.214:22-147.75.109.163:46938.service - OpenSSH per-connection server daemon (147.75.109.163:46938). Jul 7 01:08:30.813859 sshd[2932]: Accepted publickey for core from 147.75.109.163 port 46938 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:30.815757 sshd-session[2932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:30.825477 systemd-logind[2767]: New session 1 of user core. Jul 7 01:08:30.826924 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 01:08:30.830495 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:2 Jul 7 01:08:30.830746 kernel: mlx5_core 0001:01:00.0: shared_fdb:0 mode:queue_affinity Jul 7 01:08:30.847855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 01:08:30.884787 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 01:08:30.892045 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 01:08:30.921503 kernel: mlx5_core 0001:01:00.0: lag map: port 1:1 port 2:1 Jul 7 01:08:30.925370 (systemd)[2939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 01:08:30.927267 systemd-logind[2767]: New session c1 of user core. Jul 7 01:08:31.054558 systemd[2939]: Queued start job for default target default.target. Jul 7 01:08:31.074518 systemd[2939]: Created slice app.slice - User Application Slice. Jul 7 01:08:31.074542 systemd[2939]: Reached target paths.target - Paths. Jul 7 01:08:31.074575 systemd[2939]: Reached target timers.target - Timers. Jul 7 01:08:31.075757 systemd[2939]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 01:08:31.083863 systemd[2939]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 01:08:31.083915 systemd[2939]: Reached target sockets.target - Sockets. Jul 7 01:08:31.083958 systemd[2939]: Reached target basic.target - Basic System. Jul 7 01:08:31.083984 systemd[2939]: Reached target default.target - Main User Target. Jul 7 01:08:31.084006 systemd[2939]: Startup finished in 152ms. Jul 7 01:08:31.084302 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 01:08:31.090659 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 01:08:31.234179 coreos-metadata[2738]: Jul 07 01:08:31.234 INFO Fetch successful Jul 7 01:08:31.314926 systemd[1]: Started sshd@1-147.28.143.214:22-147.75.109.163:46940.service - OpenSSH per-connection server daemon (147.75.109.163:46940). 
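kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init or kubeadm join, so the unit is expected to keep failing until provisioning reaches that step. For orientation only, the file it is looking for is a KubeletConfiguration document shaped roughly like this (all values illustrative, none taken from this machine):
    # /var/lib/kubelet/config.yaml   (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false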
Jul 7 01:08:31.320556 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 01:08:31.328014 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 7 01:08:31.398630 login[2889]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:31.399566 login[2890]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:31.401864 systemd-logind[2767]: New session 2 of user core. Jul 7 01:08:31.403270 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 01:08:31.405212 systemd-logind[2767]: New session 3 of user core. Jul 7 01:08:31.406449 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 01:08:31.610059 sshd[2956]: Accepted publickey for core from 147.75.109.163 port 46940 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:31.611286 sshd-session[2956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:31.614033 systemd-logind[2767]: New session 4 of user core. Jul 7 01:08:31.639624 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 01:08:31.684405 coreos-metadata[2821]: Jul 07 01:08:31.684 INFO Fetch successful Jul 7 01:08:31.698306 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 7 01:08:31.742627 unknown[2821]: wrote ssh authorized keys file for user: core Jul 7 01:08:31.775604 update-ssh-keys[2989]: Updated "/home/core/.ssh/authorized_keys" Jul 7 01:08:31.776770 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 01:08:31.778261 systemd[1]: Finished sshkeys.service. Jul 7 01:08:31.779151 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 01:08:31.782461 systemd[1]: Startup finished in 4.909s (kernel) + 24.335s (initrd) + 10.152s (userspace) = 39.397s. Jul 7 01:08:31.828901 sshd[2986]: Connection closed by 147.75.109.163 port 46940 Jul 7 01:08:31.829209 sshd-session[2956]: pam_unix(sshd:session): session closed for user core Jul 7 01:08:31.832164 systemd[1]: sshd@1-147.28.143.214:22-147.75.109.163:46940.service: Deactivated successfully. Jul 7 01:08:31.833618 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 01:08:31.834133 systemd-logind[2767]: Session 4 logged out. Waiting for processes to exit. Jul 7 01:08:31.834957 systemd-logind[2767]: Removed session 4. Jul 7 01:08:31.882989 systemd[1]: Started sshd@2-147.28.143.214:22-147.75.109.163:46946.service - OpenSSH per-connection server daemon (147.75.109.163:46946). Jul 7 01:08:32.160129 sshd[2998]: Accepted publickey for core from 147.75.109.163 port 46946 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:32.161119 sshd-session[2998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:32.164193 systemd-logind[2767]: New session 5 of user core. Jul 7 01:08:32.179604 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 01:08:32.362312 sshd[3000]: Connection closed by 147.75.109.163 port 46946 Jul 7 01:08:32.362611 sshd-session[2998]: pam_unix(sshd:session): session closed for user core Jul 7 01:08:32.365246 systemd[1]: sshd@2-147.28.143.214:22-147.75.109.163:46946.service: Deactivated successfully. Jul 7 01:08:32.366648 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 01:08:32.367158 systemd-logind[2767]: Session 5 logged out. Waiting for processes to exit. Jul 7 01:08:32.367983 systemd-logind[2767]: Removed session 5. 
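With the bond up, coreos-metadata finally fetches the Equinix Metal endpoint successfully and the sshkeys variant writes /home/core/.ssh/authorized_keys. The same endpoint can be queried by hand from the node if needed; a hedged one-liner (URL straight from the log, jq used only for readability):
    curl -fsSL https://metadata.packet.net/metadata | jq .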
Jul 7 01:08:32.415933 systemd[1]: Started sshd@3-147.28.143.214:22-147.75.109.163:46948.service - OpenSSH per-connection server daemon (147.75.109.163:46948). Jul 7 01:08:32.694631 sshd[3006]: Accepted publickey for core from 147.75.109.163 port 46948 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:32.695722 sshd-session[3006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:32.698727 systemd-logind[2767]: New session 6 of user core. Jul 7 01:08:32.722650 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 01:08:32.897742 sshd[3008]: Connection closed by 147.75.109.163 port 46948 Jul 7 01:08:32.898093 sshd-session[3006]: pam_unix(sshd:session): session closed for user core Jul 7 01:08:32.900739 systemd[1]: sshd@3-147.28.143.214:22-147.75.109.163:46948.service: Deactivated successfully. Jul 7 01:08:32.902842 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 01:08:32.903379 systemd-logind[2767]: Session 6 logged out. Waiting for processes to exit. Jul 7 01:08:32.904165 systemd-logind[2767]: Removed session 6. Jul 7 01:08:32.959013 systemd[1]: Started sshd@4-147.28.143.214:22-147.75.109.163:46956.service - OpenSSH per-connection server daemon (147.75.109.163:46956). Jul 7 01:08:33.263045 sshd[3014]: Accepted publickey for core from 147.75.109.163 port 46956 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:33.263977 sshd-session[3014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:33.266844 systemd-logind[2767]: New session 7 of user core. Jul 7 01:08:33.290597 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 01:08:33.455196 sudo[3017]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 01:08:33.455439 sudo[3017]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:08:33.479009 sudo[3017]: pam_unix(sudo:session): session closed for user root Jul 7 01:08:33.522010 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:33.522794 sshd[3016]: Connection closed by 147.75.109.163 port 46956 Jul 7 01:08:33.523076 sshd-session[3014]: pam_unix(sshd:session): session closed for user core Jul 7 01:08:33.525938 systemd[1]: sshd@4-147.28.143.214:22-147.75.109.163:46956.service: Deactivated successfully. Jul 7 01:08:33.527652 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 01:08:33.528206 systemd-logind[2767]: Session 7 logged out. Waiting for processes to exit. Jul 7 01:08:33.529031 systemd-logind[2767]: Removed session 7. Jul 7 01:08:33.586102 systemd[1]: Started sshd@5-147.28.143.214:22-147.75.109.163:46962.service - OpenSSH per-connection server daemon (147.75.109.163:46962). Jul 7 01:08:33.892029 sshd[3023]: Accepted publickey for core from 147.75.109.163 port 46962 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:33.893209 sshd-session[3023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:33.896198 systemd-logind[2767]: New session 8 of user core. Jul 7 01:08:33.920649 systemd[1]: Started session-8.scope - Session 8 of User core. 
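The sudo command in session 7 runs /usr/sbin/setenforce 1, i.e. switches SELinux to enforcing for the running system only. Generic SELinux tooling for checking and persisting that state, nothing machine-specific:
    getenforce           # prints Enforcing, Permissive or Disabled
    sudo setenforce 1    # enforcing until reboot; the persistent mode is normally set in /etc/selinux/config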
Jul 7 01:08:34.067997 sudo[3028]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 01:08:34.068252 sudo[3028]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:08:34.071108 sudo[3028]: pam_unix(sudo:session): session closed for user root Jul 7 01:08:34.075446 sudo[3027]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 01:08:34.075700 sudo[3027]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:08:34.082929 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 01:08:34.125282 augenrules[3050]: No rules Jul 7 01:08:34.126382 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 01:08:34.126615 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 01:08:34.127319 sudo[3027]: pam_unix(sudo:session): session closed for user root Jul 7 01:08:34.170616 sshd[3026]: Connection closed by 147.75.109.163 port 46962 Jul 7 01:08:34.170898 sshd-session[3023]: pam_unix(sshd:session): session closed for user core Jul 7 01:08:34.173911 systemd[1]: sshd@5-147.28.143.214:22-147.75.109.163:46962.service: Deactivated successfully. Jul 7 01:08:34.175369 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 01:08:34.175942 systemd-logind[2767]: Session 8 logged out. Waiting for processes to exit. Jul 7 01:08:34.176772 systemd-logind[2767]: Removed session 8. Jul 7 01:08:34.232996 systemd[1]: Started sshd@6-147.28.143.214:22-147.75.109.163:46978.service - OpenSSH per-connection server daemon (147.75.109.163:46978). Jul 7 01:08:34.536503 sshd[3060]: Accepted publickey for core from 147.75.109.163 port 46978 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:08:34.537627 sshd-session[3060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:08:34.540699 systemd-logind[2767]: New session 9 of user core. Jul 7 01:08:34.563601 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 01:08:34.710455 sudo[3063]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 01:08:34.710724 sudo[3063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 01:08:35.016338 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 01:08:35.040780 (dockerd)[3098]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 01:08:35.268864 dockerd[3098]: time="2025-07-07T01:08:35.268776040Z" level=info msg="Starting up" Jul 7 01:08:35.269961 dockerd[3098]: time="2025-07-07T01:08:35.269940400Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 01:08:35.296859 dockerd[3098]: time="2025-07-07T01:08:35.296834520Z" level=info msg="Loading containers: start." Jul 7 01:08:35.309497 kernel: Initializing XFRM netlink socket Jul 7 01:08:35.483512 systemd-timesyncd[2697]: Network configuration changed, trying to establish connection. Jul 7 01:08:35.514132 systemd-networkd[2695]: docker0: Link UP Jul 7 01:08:35.525271 dockerd[3098]: time="2025-07-07T01:08:35.525208280Z" level=info msg="Loading containers: done." 
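dockerd is brought up here and, as the reload warnings earlier noted, the shipped /usr/lib/systemd/system/docker.socket still points ListenStream= at the legacy /var/run/docker.sock path, which systemd rewrites to /run/docker.sock on the fly. The conventional way to silence that warning without editing the vendor unit is a drop-in; a sketch (drop-in path chosen for the example, not from the log):
    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
followed by sudo systemctl daemon-reload to pick it up.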
Jul 7 01:08:35.534186 dockerd[3098]: time="2025-07-07T01:08:35.534152680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 01:08:35.534253 dockerd[3098]: time="2025-07-07T01:08:35.534219960Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 01:08:35.534329 dockerd[3098]: time="2025-07-07T01:08:35.534314920Z" level=info msg="Initializing buildkit" Jul 7 01:08:35.548800 dockerd[3098]: time="2025-07-07T01:08:35.548774080Z" level=info msg="Completed buildkit initialization" Jul 7 01:08:35.554216 dockerd[3098]: time="2025-07-07T01:08:35.554181080Z" level=info msg="Daemon has completed initialization" Jul 7 01:08:35.554275 dockerd[3098]: time="2025-07-07T01:08:35.554235640Z" level=info msg="API listen on /run/docker.sock" Jul 7 01:08:35.554380 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 01:08:36.041621 systemd-timesyncd[2697]: Contacted time server [2606:82c0:23::e]:123 (2.flatcar.pool.ntp.org). Jul 7 01:08:36.041677 systemd-timesyncd[2697]: Initial clock synchronization to Mon 2025-07-07 01:08:35.831815 UTC. Jul 7 01:08:36.085976 containerd[2783]: time="2025-07-07T01:08:36.085940040Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 01:08:36.285845 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4046386217-merged.mount: Deactivated successfully. Jul 7 01:08:36.677150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761346184.mount: Deactivated successfully. Jul 7 01:08:37.822948 containerd[2783]: time="2025-07-07T01:08:37.822906504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:37.823307 containerd[2783]: time="2025-07-07T01:08:37.822920371Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 7 01:08:37.823853 containerd[2783]: time="2025-07-07T01:08:37.823830412Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:37.826256 containerd[2783]: time="2025-07-07T01:08:37.826232556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:37.827214 containerd[2783]: time="2025-07-07T01:08:37.827190625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.741207633s" Jul 7 01:08:37.827238 containerd[2783]: time="2025-07-07T01:08:37.827224084Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 01:08:37.827779 containerd[2783]: time="2025-07-07T01:08:37.827759327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 01:08:38.942923 containerd[2783]: 
time="2025-07-07T01:08:38.942870042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:38.942923 containerd[2783]: time="2025-07-07T01:08:38.942913776Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 7 01:08:38.943813 containerd[2783]: time="2025-07-07T01:08:38.943765321Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:38.946108 containerd[2783]: time="2025-07-07T01:08:38.946059074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:38.947006 containerd[2783]: time="2025-07-07T01:08:38.946972800Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.119182751s" Jul 7 01:08:38.947199 containerd[2783]: time="2025-07-07T01:08:38.947109552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 01:08:38.947503 containerd[2783]: time="2025-07-07T01:08:38.947453093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 01:08:39.859139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 01:08:39.860569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 01:08:39.999020 containerd[2783]: time="2025-07-07T01:08:39.998992728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:39.999253 containerd[2783]: time="2025-07-07T01:08:39.999012484Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 7 01:08:39.999857 containerd[2783]: time="2025-07-07T01:08:39.999833324Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:40.002073 containerd[2783]: time="2025-07-07T01:08:40.002052738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:40.004656 containerd[2783]: time="2025-07-07T01:08:40.004623303Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.057128762s" Jul 7 01:08:40.004698 containerd[2783]: time="2025-07-07T01:08:40.004664487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 01:08:40.004707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:40.004984 containerd[2783]: time="2025-07-07T01:08:40.004958201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 01:08:40.008146 (kubelet)[3439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 01:08:40.039423 kubelet[3439]: E0707 01:08:40.039391 3439 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 01:08:40.042451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 01:08:40.042583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 01:08:40.043618 systemd[1]: kubelet.service: Consumed 148ms CPU time, 114.9M memory peak. Jul 7 01:08:40.931870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056459694.mount: Deactivated successfully. 
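The kubelet exit above (status=1) is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, and it is normally written during `kubeadm init`/`kubeadm join` rather than by hand. The sketch below is illustrative only; it checks for the file and prints the minimal shape of its contents, using the two settings this same log later confirms (systemd cgroup driver, /etc/kubernetes/manifests static-pod path):

```python
from pathlib import Path

# Minimal illustration of the file the kubelet is looking for.  On a kubeadm
# node the real /var/lib/kubelet/config.yaml is generated by kubeadm; this
# sketch only prints the expected shape, it does not write anything.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches the CRI-reported driver later in the log
staticPodPath: /etc/kubernetes/manifests  # static-pod path the kubelet adds later in the log
"""

config_path = Path("/var/lib/kubelet/config.yaml")
if not config_path.exists():
    print(f"{config_path} is missing -- kubelet will keep restarting until it appears")
    print(MINIMAL_KUBELET_CONFIG)
```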
Jul 7 01:08:41.264505 containerd[2783]: time="2025-07-07T01:08:41.264394881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:41.264505 containerd[2783]: time="2025-07-07T01:08:41.264440530Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 7 01:08:41.265067 containerd[2783]: time="2025-07-07T01:08:41.265050458Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:41.266436 containerd[2783]: time="2025-07-07T01:08:41.266413050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:41.267050 containerd[2783]: time="2025-07-07T01:08:41.267029280Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.262033369s" Jul 7 01:08:41.267073 containerd[2783]: time="2025-07-07T01:08:41.267056653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 01:08:41.267415 containerd[2783]: time="2025-07-07T01:08:41.267392181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 01:08:41.804113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877702244.mount: Deactivated successfully. 
Jul 7 01:08:42.492773 containerd[2783]: time="2025-07-07T01:08:42.492704368Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 01:08:42.492773 containerd[2783]: time="2025-07-07T01:08:42.492709538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:42.493698 containerd[2783]: time="2025-07-07T01:08:42.493666308Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:42.496232 containerd[2783]: time="2025-07-07T01:08:42.496179049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:42.497132 containerd[2783]: time="2025-07-07T01:08:42.497100342Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.22967151s" Jul 7 01:08:42.497328 containerd[2783]: time="2025-07-07T01:08:42.497238974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 01:08:42.497745 containerd[2783]: time="2025-07-07T01:08:42.497669708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 01:08:42.911097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536678749.mount: Deactivated successfully. 
Jul 7 01:08:42.911513 containerd[2783]: time="2025-07-07T01:08:42.911484171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:08:42.911586 containerd[2783]: time="2025-07-07T01:08:42.911556348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 01:08:42.912184 containerd[2783]: time="2025-07-07T01:08:42.912166755Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:08:42.913752 containerd[2783]: time="2025-07-07T01:08:42.913733380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 01:08:42.914399 containerd[2783]: time="2025-07-07T01:08:42.914376186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 416.670488ms" Jul 7 01:08:42.914422 containerd[2783]: time="2025-07-07T01:08:42.914403258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 01:08:42.914728 containerd[2783]: time="2025-07-07T01:08:42.914711579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 01:08:43.372378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802987450.mount: Deactivated successfully. 
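The pull sequence above (kube-apiserver through pause, with etcd finishing just below) is the full control-plane image set for this release, fetched through containerd's CRI. If the same set ever needs to be pre-pulled by hand, for example when rebuilding the node offline, it can be requested through the CRI as well. A sketch assuming `crictl` is installed and pointed at containerd's CRI socket:

```python
import subprocess

# Image set taken from the pull entries in this log; tags for coredns, pause and
# etcd come from the same entries.  `crictl pull` is idempotent: images that are
# already present return immediately.
IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.32.6",
    "registry.k8s.io/kube-controller-manager:v1.32.6",
    "registry.k8s.io/kube-scheduler:v1.32.6",
    "registry.k8s.io/kube-proxy:v1.32.6",
    "registry.k8s.io/coredns/coredns:v1.11.3",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/etcd:3.5.16-0",
]

for image in IMAGES:
    subprocess.run(["crictl", "pull", image], check=True)
```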
Jul 7 01:08:45.205274 containerd[2783]: time="2025-07-07T01:08:45.205231870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:45.205662 containerd[2783]: time="2025-07-07T01:08:45.205206580Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 7 01:08:45.206275 containerd[2783]: time="2025-07-07T01:08:45.206255810Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:45.208800 containerd[2783]: time="2025-07-07T01:08:45.208784325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:08:45.209836 containerd[2783]: time="2025-07-07T01:08:45.209808384Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.295072973s" Jul 7 01:08:45.209857 containerd[2783]: time="2025-07-07T01:08:45.209842276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 01:08:49.659099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:49.659229 systemd[1]: kubelet.service: Consumed 148ms CPU time, 114.9M memory peak. Jul 7 01:08:49.661328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:08:49.677924 systemd[1]: Reload requested from client PID 3659 ('systemctl') (unit session-9.scope)... Jul 7 01:08:49.677934 systemd[1]: Reloading... Jul 7 01:08:49.734494 zram_generator::config[3706]: No configuration found. Jul 7 01:08:49.809703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:08:49.910781 systemd[1]: Reloading finished in 232 ms. Jul 7 01:08:49.968311 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 01:08:49.968531 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 01:08:49.968891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:49.971149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:08:50.092317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:50.095833 (kubelet)[3766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:08:50.128227 kubelet[3766]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:08:50.128227 kubelet[3766]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 7 01:08:50.128227 kubelet[3766]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:08:50.129407 kubelet[3766]: I0707 01:08:50.129365 3766 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:08:50.690230 kubelet[3766]: I0707 01:08:50.690139 3766 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 01:08:50.690230 kubelet[3766]: I0707 01:08:50.690224 3766 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:08:50.691273 kubelet[3766]: I0707 01:08:50.691256 3766 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 01:08:50.714966 kubelet[3766]: E0707 01:08:50.714934 3766 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.28.143.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.28.143.214:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:08:50.715658 kubelet[3766]: I0707 01:08:50.715632 3766 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:08:50.720209 kubelet[3766]: I0707 01:08:50.720195 3766 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 01:08:50.740506 kubelet[3766]: I0707 01:08:50.740473 3766 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 01:08:50.740707 kubelet[3766]: I0707 01:08:50.740676 3766 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:08:50.740844 kubelet[3766]: I0707 01:08:50.740703 3766 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-a5852c4667","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 01:08:50.740925 kubelet[3766]: I0707 01:08:50.740917 3766 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:08:50.740948 kubelet[3766]: I0707 01:08:50.740927 3766 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 01:08:50.741132 kubelet[3766]: I0707 01:08:50.741121 3766 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:08:50.744207 kubelet[3766]: I0707 01:08:50.744191 3766 kubelet.go:446] "Attempting to sync node with API server" Jul 7 01:08:50.744238 kubelet[3766]: I0707 01:08:50.744216 3766 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:08:50.744238 kubelet[3766]: I0707 01:08:50.744235 3766 kubelet.go:352] "Adding apiserver pod source" Jul 7 01:08:50.744272 kubelet[3766]: I0707 01:08:50.744244 3766 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:08:50.745395 kubelet[3766]: W0707 01:08:50.745355 3766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.28.143.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-a5852c4667&limit=500&resourceVersion=0": dial tcp 147.28.143.214:6443: connect: connection refused Jul 7 01:08:50.745432 kubelet[3766]: E0707 01:08:50.745420 3766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.28.143.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-a-a5852c4667&limit=500&resourceVersion=0\": dial tcp 147.28.143.214:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:08:50.745769 
kubelet[3766]: W0707 01:08:50.745734 3766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.28.143.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.28.143.214:6443: connect: connection refused Jul 7 01:08:50.745795 kubelet[3766]: E0707 01:08:50.745782 3766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.28.143.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.28.143.214:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:08:50.746795 kubelet[3766]: I0707 01:08:50.746781 3766 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 01:08:50.747379 kubelet[3766]: I0707 01:08:50.747367 3766 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:08:50.747491 kubelet[3766]: W0707 01:08:50.747480 3766 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 01:08:50.748252 kubelet[3766]: I0707 01:08:50.748240 3766 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 01:08:50.748276 kubelet[3766]: I0707 01:08:50.748271 3766 server.go:1287] "Started kubelet" Jul 7 01:08:50.748362 kubelet[3766]: I0707 01:08:50.748310 3766 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:08:50.748791 kubelet[3766]: I0707 01:08:50.748749 3766 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:08:50.749028 kubelet[3766]: I0707 01:08:50.749016 3766 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:08:50.749575 kubelet[3766]: I0707 01:08:50.749559 3766 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:08:50.752433 kubelet[3766]: I0707 01:08:50.752408 3766 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:08:50.752566 kubelet[3766]: I0707 01:08:50.752531 3766 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 01:08:50.752683 kubelet[3766]: I0707 01:08:50.752645 3766 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 01:08:50.752683 kubelet[3766]: E0707 01:08:50.752645 3766 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-a5852c4667\" not found" Jul 7 01:08:50.752743 kubelet[3766]: I0707 01:08:50.752692 3766 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:08:50.754153 kubelet[3766]: E0707 01:08:50.754124 3766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-a5852c4667?timeout=10s\": dial tcp 147.28.143.214:6443: connect: connection refused" interval="200ms" Jul 7 01:08:50.754341 kubelet[3766]: W0707 01:08:50.754304 3766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.28.143.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.143.214:6443: connect: connection refused Jul 7 01:08:50.754369 kubelet[3766]: E0707 
01:08:50.754354 3766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.28.143.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.28.143.214:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:08:50.754458 kubelet[3766]: I0707 01:08:50.754443 3766 factory.go:221] Registration of the systemd container factory successfully Jul 7 01:08:50.754566 kubelet[3766]: I0707 01:08:50.754555 3766 server.go:479] "Adding debug handlers to kubelet server" Jul 7 01:08:50.754634 kubelet[3766]: E0707 01:08:50.754614 3766 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:08:50.754660 kubelet[3766]: I0707 01:08:50.754555 3766 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:08:50.754724 kubelet[3766]: E0707 01:08:50.754481 3766 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.143.214:6443/api/v1/namespaces/default/events\": dial tcp 147.28.143.214:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-a-a5852c4667.184fd2c6a8d4430f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-a-a5852c4667,UID:ci-4344.1.1-a-a5852c4667,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-a-a5852c4667,},FirstTimestamp:2025-07-07 01:08:50.748252943 +0000 UTC m=+0.649551892,LastTimestamp:2025-07-07 01:08:50.748252943 +0000 UTC m=+0.649551892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-a-a5852c4667,}" Jul 7 01:08:50.755381 kubelet[3766]: I0707 01:08:50.755365 3766 factory.go:221] Registration of the containerd container factory successfully Jul 7 01:08:50.766165 kubelet[3766]: I0707 01:08:50.766131 3766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 01:08:50.767167 kubelet[3766]: I0707 01:08:50.767155 3766 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 01:08:50.767197 kubelet[3766]: I0707 01:08:50.767171 3766 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 01:08:50.767197 kubelet[3766]: I0707 01:08:50.767186 3766 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
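The repeated `connection refused` errors against https://147.28.143.214:6443 above are expected while the kube-apiserver static pod has not started; the kubelet's reflectors retry on their own. To watch for the moment the apiserver comes up, the same address can be polled directly. A sketch that polls /readyz, which kube-apiserver serves and which is readable without credentials under the default system:public-info-viewer RBAC binding; TLS verification is skipped because the cluster CA is not in the host trust store:

```python
import ssl
import time
import urllib.error
import urllib.request

# Address taken from the reflector errors in this log.
APISERVER_READYZ = "https://147.28.143.214:6443/readyz"

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for attempt in range(60):                       # roughly two minutes of polling
    try:
        with urllib.request.urlopen(APISERVER_READYZ, context=ctx, timeout=2) as resp:
            print(f"apiserver ready after {attempt} retries: {resp.read().decode()!r}")
            break
    except (urllib.error.URLError, OSError) as exc:
        print("not ready yet:", exc)
        time.sleep(2)
else:
    raise SystemExit("apiserver never became ready")
```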
Jul 7 01:08:50.767197 kubelet[3766]: I0707 01:08:50.767194 3766 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 01:08:50.767446 kubelet[3766]: E0707 01:08:50.767228 3766 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:08:50.769803 kubelet[3766]: W0707 01:08:50.769265 3766 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.28.143.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.143.214:6443: connect: connection refused Jul 7 01:08:50.769803 kubelet[3766]: I0707 01:08:50.769788 3766 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 01:08:50.769890 kubelet[3766]: I0707 01:08:50.769817 3766 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 01:08:50.769890 kubelet[3766]: I0707 01:08:50.769836 3766 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:08:50.769890 kubelet[3766]: E0707 01:08:50.769833 3766 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.28.143.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.28.143.214:6443: connect: connection refused" logger="UnhandledError" Jul 7 01:08:50.770473 kubelet[3766]: I0707 01:08:50.770459 3766 policy_none.go:49] "None policy: Start" Jul 7 01:08:50.770514 kubelet[3766]: I0707 01:08:50.770477 3766 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 01:08:50.770514 kubelet[3766]: I0707 01:08:50.770493 3766 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:08:50.774413 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 01:08:50.800837 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 01:08:50.803449 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 01:08:50.822298 kubelet[3766]: I0707 01:08:50.822277 3766 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 01:08:50.822473 kubelet[3766]: I0707 01:08:50.822457 3766 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:08:50.822527 kubelet[3766]: I0707 01:08:50.822468 3766 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:08:50.822635 kubelet[3766]: I0707 01:08:50.822609 3766 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:08:50.823092 kubelet[3766]: E0707 01:08:50.823077 3766 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 01:08:50.823135 kubelet[3766]: E0707 01:08:50.823113 3766 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-a-a5852c4667\" not found" Jul 7 01:08:50.874788 systemd[1]: Created slice kubepods-burstable-pod0ab2f780f3161b562f527407e486e0a4.slice - libcontainer container kubepods-burstable-pod0ab2f780f3161b562f527407e486e0a4.slice. 
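The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above are the QoS-tier cgroups the kubelet manages. With the systemd driver on cgroup v2 (CgroupVersion:2 in the NodeConfig earlier), the burstable and besteffort slices are nested inside kubepods.slice under the unified hierarchy; a small sketch that lists them with their cpu/memory limits:

```python
from pathlib import Path

# cpu.max / memory.max read "max" when no limit is set; files may be absent if
# the corresponding controller is not enabled for a slice.
CGROUP_ROOT = Path("/sys/fs/cgroup")

def read_or_na(path: Path) -> str:
    return path.read_text().strip() if path.exists() else "n/a"

for slice_dir in sorted(CGROUP_ROOT.rglob("kubepods*.slice")):
    cpu = read_or_na(slice_dir / "cpu.max")
    mem = read_or_na(slice_dir / "memory.max")
    print(f"{slice_dir.relative_to(CGROUP_ROOT)}  cpu.max={cpu}  memory.max={mem}")
```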
Jul 7 01:08:50.898236 kubelet[3766]: E0707 01:08:50.898211 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:50.900321 systemd[1]: Created slice kubepods-burstable-podc624717eb17db3fd1995e72e8b82c16a.slice - libcontainer container kubepods-burstable-podc624717eb17db3fd1995e72e8b82c16a.slice. Jul 7 01:08:50.922383 kubelet[3766]: E0707 01:08:50.922360 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:50.923753 kubelet[3766]: I0707 01:08:50.923736 3766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:50.924105 kubelet[3766]: E0707 01:08:50.924082 3766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.143.214:6443/api/v1/nodes\": dial tcp 147.28.143.214:6443: connect: connection refused" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:50.924542 systemd[1]: Created slice kubepods-burstable-podcdc8b4a0239b35389a546e218d20a00c.slice - libcontainer container kubepods-burstable-podcdc8b4a0239b35389a546e218d20a00c.slice. Jul 7 01:08:50.925788 kubelet[3766]: E0707 01:08:50.925766 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:50.955222 kubelet[3766]: E0707 01:08:50.955148 3766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-a5852c4667?timeout=10s\": dial tcp 147.28.143.214:6443: connect: connection refused" interval="400ms" Jul 7 01:08:51.053402 kubelet[3766]: I0707 01:08:51.053378 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053461 kubelet[3766]: I0707 01:08:51.053408 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053461 kubelet[3766]: I0707 01:08:51.053426 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053461 kubelet[3766]: I0707 01:08:51.053442 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: 
\"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053543 kubelet[3766]: I0707 01:08:51.053473 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053543 kubelet[3766]: I0707 01:08:51.053500 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053588 kubelet[3766]: I0707 01:08:51.053554 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053607 kubelet[3766]: I0707 01:08:51.053583 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.053607 kubelet[3766]: I0707 01:08:51.053600 3766 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdc8b4a0239b35389a546e218d20a00c-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-a5852c4667\" (UID: \"cdc8b4a0239b35389a546e218d20a00c\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.126175 kubelet[3766]: I0707 01:08:51.126145 3766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.126392 kubelet[3766]: E0707 01:08:51.126372 3766 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.28.143.214:6443/api/v1/nodes\": dial tcp 147.28.143.214:6443: connect: connection refused" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.199233 containerd[2783]: time="2025-07-07T01:08:51.199199404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-a5852c4667,Uid:0ab2f780f3161b562f527407e486e0a4,Namespace:kube-system,Attempt:0,}" Jul 7 01:08:51.209339 containerd[2783]: time="2025-07-07T01:08:51.209271592Z" level=info msg="connecting to shim e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61" address="unix:///run/containerd/s/ddf3e3d717fc725f0bc9cc4f6937d711750d63cd2d9483adfae058a4aeeb7f44" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:08:51.223167 containerd[2783]: time="2025-07-07T01:08:51.223131538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-a5852c4667,Uid:c624717eb17db3fd1995e72e8b82c16a,Namespace:kube-system,Attempt:0,}" Jul 7 01:08:51.227296 containerd[2783]: 
time="2025-07-07T01:08:51.227185576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-a5852c4667,Uid:cdc8b4a0239b35389a546e218d20a00c,Namespace:kube-system,Attempt:0,}" Jul 7 01:08:51.240187 containerd[2783]: time="2025-07-07T01:08:51.240157523Z" level=info msg="connecting to shim deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76" address="unix:///run/containerd/s/1c217029391ad4ef828b2a2d0c5ea1506361c4ff73d6cc3e0cf006458baadd6d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:08:51.240324 containerd[2783]: time="2025-07-07T01:08:51.240300982Z" level=info msg="connecting to shim c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c" address="unix:///run/containerd/s/a91b44c9cb828765f5c1c1996f03ac147bffb3a92e36423a35ef576f6fdf7b6b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:08:51.245636 systemd[1]: Started cri-containerd-e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61.scope - libcontainer container e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61. Jul 7 01:08:51.255979 systemd[1]: Started cri-containerd-c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c.scope - libcontainer container c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c. Jul 7 01:08:51.257249 systemd[1]: Started cri-containerd-deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76.scope - libcontainer container deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76. Jul 7 01:08:51.271545 containerd[2783]: time="2025-07-07T01:08:51.271515341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-a-a5852c4667,Uid:0ab2f780f3161b562f527407e486e0a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61\"" Jul 7 01:08:51.273775 containerd[2783]: time="2025-07-07T01:08:51.273756650Z" level=info msg="CreateContainer within sandbox \"e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 01:08:51.278061 containerd[2783]: time="2025-07-07T01:08:51.278030237Z" level=info msg="Container 57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:08:51.281322 containerd[2783]: time="2025-07-07T01:08:51.281272049Z" level=info msg="CreateContainer within sandbox \"e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c\"" Jul 7 01:08:51.281402 containerd[2783]: time="2025-07-07T01:08:51.281377302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-a-a5852c4667,Uid:cdc8b4a0239b35389a546e218d20a00c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c\"" Jul 7 01:08:51.281666 containerd[2783]: time="2025-07-07T01:08:51.281642905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-a-a5852c4667,Uid:c624717eb17db3fd1995e72e8b82c16a,Namespace:kube-system,Attempt:0,} returns sandbox id \"deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76\"" Jul 7 01:08:51.282099 containerd[2783]: time="2025-07-07T01:08:51.282078857Z" level=info msg="StartContainer for \"57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c\"" Jul 7 01:08:51.283117 
containerd[2783]: time="2025-07-07T01:08:51.283094298Z" level=info msg="connecting to shim 57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c" address="unix:///run/containerd/s/ddf3e3d717fc725f0bc9cc4f6937d711750d63cd2d9483adfae058a4aeeb7f44" protocol=ttrpc version=3 Jul 7 01:08:51.283420 containerd[2783]: time="2025-07-07T01:08:51.283401333Z" level=info msg="CreateContainer within sandbox \"deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 01:08:51.283453 containerd[2783]: time="2025-07-07T01:08:51.283419340Z" level=info msg="CreateContainer within sandbox \"c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 01:08:51.287567 containerd[2783]: time="2025-07-07T01:08:51.287542975Z" level=info msg="Container c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:08:51.288015 containerd[2783]: time="2025-07-07T01:08:51.287991755Z" level=info msg="Container 813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:08:51.290509 containerd[2783]: time="2025-07-07T01:08:51.290478509Z" level=info msg="CreateContainer within sandbox \"c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd\"" Jul 7 01:08:51.290774 containerd[2783]: time="2025-07-07T01:08:51.290755067Z" level=info msg="StartContainer for \"c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd\"" Jul 7 01:08:51.290859 containerd[2783]: time="2025-07-07T01:08:51.290837134Z" level=info msg="CreateContainer within sandbox \"deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178\"" Jul 7 01:08:51.291143 containerd[2783]: time="2025-07-07T01:08:51.291123493Z" level=info msg="StartContainer for \"813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178\"" Jul 7 01:08:51.291652 containerd[2783]: time="2025-07-07T01:08:51.291632190Z" level=info msg="connecting to shim c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd" address="unix:///run/containerd/s/a91b44c9cb828765f5c1c1996f03ac147bffb3a92e36423a35ef576f6fdf7b6b" protocol=ttrpc version=3 Jul 7 01:08:51.292074 containerd[2783]: time="2025-07-07T01:08:51.292054437Z" level=info msg="connecting to shim 813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178" address="unix:///run/containerd/s/1c217029391ad4ef828b2a2d0c5ea1506361c4ff73d6cc3e0cf006458baadd6d" protocol=ttrpc version=3 Jul 7 01:08:51.310618 systemd[1]: Started cri-containerd-57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c.scope - libcontainer container 57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c. Jul 7 01:08:51.313807 systemd[1]: Started cri-containerd-813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178.scope - libcontainer container 813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178. 
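The RunPodSandbox, connecting-to-shim and CreateContainer entries above, together with the StartContainer returns just below, are the three control-plane static pods (kube-apiserver, kube-controller-manager, kube-scheduler) coming up through containerd. Once running they are visible through the CRI; a sketch that lists sandboxes and containers, again assuming `crictl` is configured for the same containerd socket the kubelet uses:

```python
import json
import subprocess

def crictl_json(*args: str) -> dict:
    """Run a crictl subcommand with JSON output and parse it."""
    proc = subprocess.run(["crictl", *args, "-o", "json"],
                          check=True, capture_output=True, text=True)
    return json.loads(proc.stdout)

# Pod sandboxes (the three control-plane static pods at this point).
for sandbox in crictl_json("pods").get("items", []):
    meta = sandbox["metadata"]
    print(f'{meta["namespace"]}/{meta["name"]}: {sandbox["state"]}')

# Containers started inside those sandboxes.
for ctr in crictl_json("ps").get("containers", []):
    print(f'{ctr["metadata"]["name"]}: {ctr["state"]} ({ctr["id"][:13]})')
```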
Jul 7 01:08:51.314856 systemd[1]: Started cri-containerd-c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd.scope - libcontainer container c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd. Jul 7 01:08:51.338801 containerd[2783]: time="2025-07-07T01:08:51.338767137Z" level=info msg="StartContainer for \"57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c\" returns successfully" Jul 7 01:08:51.341215 containerd[2783]: time="2025-07-07T01:08:51.341196763Z" level=info msg="StartContainer for \"c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd\" returns successfully" Jul 7 01:08:51.342903 containerd[2783]: time="2025-07-07T01:08:51.342883801Z" level=info msg="StartContainer for \"813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178\" returns successfully" Jul 7 01:08:51.355686 kubelet[3766]: E0707 01:08:51.355632 3766 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.143.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-a-a5852c4667?timeout=10s\": dial tcp 147.28.143.214:6443: connect: connection refused" interval="800ms" Jul 7 01:08:51.529072 kubelet[3766]: I0707 01:08:51.528964 3766 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.773903 kubelet[3766]: E0707 01:08:51.773880 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.774728 kubelet[3766]: E0707 01:08:51.774711 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:51.775805 kubelet[3766]: E0707 01:08:51.775789 3766 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.398687 kubelet[3766]: E0707 01:08:52.398650 3766 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-a-a5852c4667\" not found" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.505500 kubelet[3766]: I0707 01:08:52.505120 3766 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.553072 kubelet[3766]: I0707 01:08:52.553049 3766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.557715 kubelet[3766]: E0707 01:08:52.557696 3766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-a5852c4667\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.557737 kubelet[3766]: I0707 01:08:52.557717 3766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.559119 kubelet[3766]: E0707 01:08:52.559100 3766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.559151 kubelet[3766]: I0707 01:08:52.559120 3766 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.560668 kubelet[3766]: E0707 01:08:52.560649 3766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.745044 kubelet[3766]: I0707 01:08:52.744958 3766 apiserver.go:52] "Watching apiserver" Jul 7 01:08:52.753163 kubelet[3766]: I0707 01:08:52.753139 3766 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 01:08:52.775997 kubelet[3766]: I0707 01:08:52.775981 3766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.776093 kubelet[3766]: I0707 01:08:52.776075 3766 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.777326 kubelet[3766]: E0707 01:08:52.777308 3766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-a5852c4667\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:52.778157 kubelet[3766]: E0707 01:08:52.778123 3766 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:54.526985 systemd[1]: Reload requested from client PID 4187 ('systemctl') (unit session-9.scope)... Jul 7 01:08:54.526995 systemd[1]: Reloading... Jul 7 01:08:54.595497 zram_generator::config[4236]: No configuration found. Jul 7 01:08:54.670361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 01:08:54.782493 systemd[1]: Reloading finished in 255 ms. Jul 7 01:08:54.811439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:08:54.835278 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 01:08:54.836578 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:54.836638 systemd[1]: kubelet.service: Consumed 1.112s CPU time, 148.7M memory peak. Jul 7 01:08:54.838408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 01:08:54.990120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 01:08:54.993580 (kubelet)[4295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 01:08:55.036288 kubelet[4295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:08:55.036288 kubelet[4295]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 01:08:55.036288 kubelet[4295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 01:08:55.036589 kubelet[4295]: I0707 01:08:55.036284 4295 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 01:08:55.042007 kubelet[4295]: I0707 01:08:55.041985 4295 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 01:08:55.042034 kubelet[4295]: I0707 01:08:55.042008 4295 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 01:08:55.042241 kubelet[4295]: I0707 01:08:55.042230 4295 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 01:08:55.043379 kubelet[4295]: I0707 01:08:55.043366 4295 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 01:08:55.046054 kubelet[4295]: I0707 01:08:55.046035 4295 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 01:08:55.048747 kubelet[4295]: I0707 01:08:55.048735 4295 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 01:08:55.066659 kubelet[4295]: I0707 01:08:55.066640 4295 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 01:08:55.066832 kubelet[4295]: I0707 01:08:55.066805 4295 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 01:08:55.066976 kubelet[4295]: I0707 01:08:55.066832 4295 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-a-a5852c4667","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 01:08:55.067047 kubelet[4295]: I0707 01:08:55.066988 4295 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 01:08:55.067047 kubelet[4295]: I0707 01:08:55.066996 4295 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 01:08:55.067085 kubelet[4295]: I0707 01:08:55.067053 4295 
state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:08:55.067331 kubelet[4295]: I0707 01:08:55.067323 4295 kubelet.go:446] "Attempting to sync node with API server" Jul 7 01:08:55.067354 kubelet[4295]: I0707 01:08:55.067337 4295 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 01:08:55.067374 kubelet[4295]: I0707 01:08:55.067357 4295 kubelet.go:352] "Adding apiserver pod source" Jul 7 01:08:55.067374 kubelet[4295]: I0707 01:08:55.067369 4295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 01:08:55.067974 kubelet[4295]: I0707 01:08:55.067958 4295 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 01:08:55.068409 kubelet[4295]: I0707 01:08:55.068400 4295 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 01:08:55.068799 kubelet[4295]: I0707 01:08:55.068789 4295 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 01:08:55.068826 kubelet[4295]: I0707 01:08:55.068820 4295 server.go:1287] "Started kubelet" Jul 7 01:08:55.068902 kubelet[4295]: I0707 01:08:55.068859 4295 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 01:08:55.069013 kubelet[4295]: I0707 01:08:55.068873 4295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 01:08:55.069220 kubelet[4295]: I0707 01:08:55.069204 4295 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 01:08:55.071073 kubelet[4295]: I0707 01:08:55.071055 4295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 01:08:55.071118 kubelet[4295]: I0707 01:08:55.071080 4295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 01:08:55.071145 kubelet[4295]: I0707 01:08:55.071123 4295 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 01:08:55.071176 kubelet[4295]: I0707 01:08:55.071160 4295 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 01:08:55.071196 kubelet[4295]: E0707 01:08:55.071166 4295 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344.1.1-a-a5852c4667\" not found" Jul 7 01:08:55.071314 kubelet[4295]: I0707 01:08:55.071297 4295 reconciler.go:26] "Reconciler: start to sync state" Jul 7 01:08:55.071602 kubelet[4295]: I0707 01:08:55.071587 4295 factory.go:221] Registration of the systemd container factory successfully Jul 7 01:08:55.071649 kubelet[4295]: E0707 01:08:55.071634 4295 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 01:08:55.071696 kubelet[4295]: I0707 01:08:55.071680 4295 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 01:08:55.072407 kubelet[4295]: I0707 01:08:55.072391 4295 factory.go:221] Registration of the containerd container factory successfully Jul 7 01:08:55.072473 kubelet[4295]: I0707 01:08:55.072459 4295 server.go:479] "Adding debug handlers to kubelet server" Jul 7 01:08:55.078202 kubelet[4295]: I0707 01:08:55.078164 4295 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 7 01:08:55.079218 kubelet[4295]: I0707 01:08:55.079204 4295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 01:08:55.079242 kubelet[4295]: I0707 01:08:55.079226 4295 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 01:08:55.079263 kubelet[4295]: I0707 01:08:55.079244 4295 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 01:08:55.079263 kubelet[4295]: I0707 01:08:55.079251 4295 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 01:08:55.079313 kubelet[4295]: E0707 01:08:55.079297 4295 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 01:08:55.101754 kubelet[4295]: I0707 01:08:55.101736 4295 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 01:08:55.101754 kubelet[4295]: I0707 01:08:55.101753 4295 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 01:08:55.101821 kubelet[4295]: I0707 01:08:55.101770 4295 state_mem.go:36] "Initialized new in-memory state store" Jul 7 01:08:55.101917 kubelet[4295]: I0707 01:08:55.101906 4295 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 01:08:55.101940 kubelet[4295]: I0707 01:08:55.101917 4295 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 01:08:55.101940 kubelet[4295]: I0707 01:08:55.101936 4295 policy_none.go:49] "None policy: Start" Jul 7 01:08:55.102030 kubelet[4295]: I0707 01:08:55.101944 4295 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 01:08:55.102030 kubelet[4295]: I0707 01:08:55.101953 4295 state_mem.go:35] "Initializing new in-memory state store" Jul 7 01:08:55.102069 kubelet[4295]: I0707 01:08:55.102046 4295 state_mem.go:75] "Updated machine memory state" Jul 7 01:08:55.105013 kubelet[4295]: I0707 01:08:55.104998 4295 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 01:08:55.105175 kubelet[4295]: I0707 01:08:55.105167 4295 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 01:08:55.105204 kubelet[4295]: I0707 01:08:55.105177 4295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 01:08:55.105327 kubelet[4295]: I0707 01:08:55.105314 4295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 01:08:55.105770 kubelet[4295]: E0707 01:08:55.105754 4295 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 01:08:55.180038 kubelet[4295]: I0707 01:08:55.180008 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.180038 kubelet[4295]: I0707 01:08:55.180031 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.180187 kubelet[4295]: I0707 01:08:55.180073 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.184478 kubelet[4295]: W0707 01:08:55.184455 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:55.184614 kubelet[4295]: W0707 01:08:55.184600 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:55.184692 kubelet[4295]: W0707 01:08:55.184681 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:55.208179 kubelet[4295]: I0707 01:08:55.208165 4295 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.212078 kubelet[4295]: I0707 01:08:55.212059 4295 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.212124 kubelet[4295]: I0707 01:08:55.212115 4295 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373138 kubelet[4295]: I0707 01:08:55.373110 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373138 kubelet[4295]: I0707 01:08:55.373136 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373268 kubelet[4295]: I0707 01:08:55.373157 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373268 kubelet[4295]: I0707 01:08:55.373175 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdc8b4a0239b35389a546e218d20a00c-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-a-a5852c4667\" (UID: \"cdc8b4a0239b35389a546e218d20a00c\") " pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373268 kubelet[4295]: I0707 01:08:55.373240 4295 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373398 kubelet[4295]: I0707 01:08:55.373289 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373398 kubelet[4295]: I0707 01:08:55.373321 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373398 kubelet[4295]: I0707 01:08:55.373353 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c624717eb17db3fd1995e72e8b82c16a-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" (UID: \"c624717eb17db3fd1995e72e8b82c16a\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:55.373398 kubelet[4295]: I0707 01:08:55.373380 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ab2f780f3161b562f527407e486e0a4-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" (UID: \"0ab2f780f3161b562f527407e486e0a4\") " pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.068606 kubelet[4295]: I0707 01:08:56.068570 4295 apiserver.go:52] "Watching apiserver" Jul 7 01:08:56.071813 kubelet[4295]: I0707 01:08:56.071795 4295 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 01:08:56.084935 kubelet[4295]: I0707 01:08:56.084913 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.085003 kubelet[4295]: I0707 01:08:56.084992 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.085080 kubelet[4295]: I0707 01:08:56.085059 4295 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.087620 kubelet[4295]: W0707 01:08:56.087596 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:56.087671 kubelet[4295]: E0707 01:08:56.087655 4295 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344.1.1-a-a5852c4667\" already exists" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.087768 kubelet[4295]: W0707 01:08:56.087745 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:56.087827 kubelet[4295]: 
E0707 01:08:56.087805 4295 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344.1.1-a-a5852c4667\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.087866 kubelet[4295]: W0707 01:08:56.087835 4295 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 01:08:56.087866 kubelet[4295]: E0707 01:08:56.087862 4295 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344.1.1-a-a5852c4667\" already exists" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" Jul 7 01:08:56.112815 kubelet[4295]: I0707 01:08:56.112763 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-a-a5852c4667" podStartSLOduration=1.112732838 podStartE2EDuration="1.112732838s" podCreationTimestamp="2025-07-07 01:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:08:56.112641069 +0000 UTC m=+1.116281425" watchObservedRunningTime="2025-07-07 01:08:56.112732838 +0000 UTC m=+1.116373194" Jul 7 01:08:56.118143 kubelet[4295]: I0707 01:08:56.118109 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-a-a5852c4667" podStartSLOduration=1.118095984 podStartE2EDuration="1.118095984s" podCreationTimestamp="2025-07-07 01:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:08:56.118071514 +0000 UTC m=+1.121711870" watchObservedRunningTime="2025-07-07 01:08:56.118095984 +0000 UTC m=+1.121736419" Jul 7 01:08:56.135716 kubelet[4295]: I0707 01:08:56.135673 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-a-a5852c4667" podStartSLOduration=1.135660477 podStartE2EDuration="1.135660477s" podCreationTimestamp="2025-07-07 01:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:08:56.135571621 +0000 UTC m=+1.139211977" watchObservedRunningTime="2025-07-07 01:08:56.135660477 +0000 UTC m=+1.139300832" Jul 7 01:09:00.426419 kubelet[4295]: I0707 01:09:00.426384 4295 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 01:09:00.426842 kubelet[4295]: I0707 01:09:00.426784 4295 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 01:09:00.426870 containerd[2783]: time="2025-07-07T01:09:00.426665757Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 01:09:01.116012 systemd[1]: Created slice kubepods-besteffort-pod5d10ea62_80f5_4adb_b7ea_e64eafc28bf2.slice - libcontainer container kubepods-besteffort-pod5d10ea62_80f5_4adb_b7ea_e64eafc28bf2.slice. 
Jul 7 01:09:01.213457 kubelet[4295]: I0707 01:09:01.213427 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d10ea62-80f5-4adb-b7ea-e64eafc28bf2-lib-modules\") pod \"kube-proxy-9zs5x\" (UID: \"5d10ea62-80f5-4adb-b7ea-e64eafc28bf2\") " pod="kube-system/kube-proxy-9zs5x" Jul 7 01:09:01.213457 kubelet[4295]: I0707 01:09:01.213459 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsrs\" (UniqueName: \"kubernetes.io/projected/5d10ea62-80f5-4adb-b7ea-e64eafc28bf2-kube-api-access-vbsrs\") pod \"kube-proxy-9zs5x\" (UID: \"5d10ea62-80f5-4adb-b7ea-e64eafc28bf2\") " pod="kube-system/kube-proxy-9zs5x" Jul 7 01:09:01.213643 kubelet[4295]: I0707 01:09:01.213477 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d10ea62-80f5-4adb-b7ea-e64eafc28bf2-xtables-lock\") pod \"kube-proxy-9zs5x\" (UID: \"5d10ea62-80f5-4adb-b7ea-e64eafc28bf2\") " pod="kube-system/kube-proxy-9zs5x" Jul 7 01:09:01.213643 kubelet[4295]: I0707 01:09:01.213502 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d10ea62-80f5-4adb-b7ea-e64eafc28bf2-kube-proxy\") pod \"kube-proxy-9zs5x\" (UID: \"5d10ea62-80f5-4adb-b7ea-e64eafc28bf2\") " pod="kube-system/kube-proxy-9zs5x" Jul 7 01:09:01.426847 containerd[2783]: time="2025-07-07T01:09:01.426749219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zs5x,Uid:5d10ea62-80f5-4adb-b7ea-e64eafc28bf2,Namespace:kube-system,Attempt:0,}" Jul 7 01:09:01.434143 containerd[2783]: time="2025-07-07T01:09:01.434119099Z" level=info msg="connecting to shim 9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235" address="unix:///run/containerd/s/d7695b060f4f30b7f1552206ef412a918690f540977438890abfffdc7cb92b96" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:01.454613 systemd[1]: Started cri-containerd-9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235.scope - libcontainer container 9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235. 
Jul 7 01:09:01.471359 containerd[2783]: time="2025-07-07T01:09:01.471325638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zs5x,Uid:5d10ea62-80f5-4adb-b7ea-e64eafc28bf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235\"" Jul 7 01:09:01.473297 containerd[2783]: time="2025-07-07T01:09:01.473238044Z" level=info msg="CreateContainer within sandbox \"9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 01:09:01.478448 containerd[2783]: time="2025-07-07T01:09:01.478425485Z" level=info msg="Container 574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:01.482232 containerd[2783]: time="2025-07-07T01:09:01.482206783Z" level=info msg="CreateContainer within sandbox \"9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71\"" Jul 7 01:09:01.482568 containerd[2783]: time="2025-07-07T01:09:01.482548579Z" level=info msg="StartContainer for \"574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71\"" Jul 7 01:09:01.483806 containerd[2783]: time="2025-07-07T01:09:01.483785663Z" level=info msg="connecting to shim 574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71" address="unix:///run/containerd/s/d7695b060f4f30b7f1552206ef412a918690f540977438890abfffdc7cb92b96" protocol=ttrpc version=3 Jul 7 01:09:01.504662 systemd[1]: Started cri-containerd-574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71.scope - libcontainer container 574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71. Jul 7 01:09:01.538532 systemd[1]: Created slice kubepods-besteffort-pod4f606b7e_b811_4419_afe0_ac2564177bc9.slice - libcontainer container kubepods-besteffort-pod4f606b7e_b811_4419_afe0_ac2564177bc9.slice. 
Jul 7 01:09:01.552215 containerd[2783]: time="2025-07-07T01:09:01.552190493Z" level=info msg="StartContainer for \"574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71\" returns successfully" Jul 7 01:09:01.615091 kubelet[4295]: I0707 01:09:01.615050 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f606b7e-b811-4419-afe0-ac2564177bc9-var-lib-calico\") pod \"tigera-operator-747864d56d-ngfbp\" (UID: \"4f606b7e-b811-4419-afe0-ac2564177bc9\") " pod="tigera-operator/tigera-operator-747864d56d-ngfbp" Jul 7 01:09:01.615091 kubelet[4295]: I0707 01:09:01.615087 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrlvh\" (UniqueName: \"kubernetes.io/projected/4f606b7e-b811-4419-afe0-ac2564177bc9-kube-api-access-rrlvh\") pod \"tigera-operator-747864d56d-ngfbp\" (UID: \"4f606b7e-b811-4419-afe0-ac2564177bc9\") " pod="tigera-operator/tigera-operator-747864d56d-ngfbp" Jul 7 01:09:01.840451 containerd[2783]: time="2025-07-07T01:09:01.840368884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ngfbp,Uid:4f606b7e-b811-4419-afe0-ac2564177bc9,Namespace:tigera-operator,Attempt:0,}" Jul 7 01:09:01.848763 containerd[2783]: time="2025-07-07T01:09:01.848739259Z" level=info msg="connecting to shim 1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83" address="unix:///run/containerd/s/cfa913746a2344ba461eccdb24285684be51ba6888a8482247a250abeeb0d506" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:01.877617 systemd[1]: Started cri-containerd-1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83.scope - libcontainer container 1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83. Jul 7 01:09:01.901928 containerd[2783]: time="2025-07-07T01:09:01.901902184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-ngfbp,Uid:4f606b7e-b811-4419-afe0-ac2564177bc9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83\"" Jul 7 01:09:01.903022 containerd[2783]: time="2025-07-07T01:09:01.903006409Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 01:09:02.625097 kubelet[4295]: I0707 01:09:02.625051 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zs5x" podStartSLOduration=1.625034898 podStartE2EDuration="1.625034898s" podCreationTimestamp="2025-07-07 01:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:09:02.109696331 +0000 UTC m=+7.113336687" watchObservedRunningTime="2025-07-07 01:09:02.625034898 +0000 UTC m=+7.628675254" Jul 7 01:09:02.853392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073350242.mount: Deactivated successfully. 
Jul 7 01:09:03.065481 containerd[2783]: time="2025-07-07T01:09:03.065387371Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 01:09:03.065481 containerd[2783]: time="2025-07-07T01:09:03.065406076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:03.066138 containerd[2783]: time="2025-07-07T01:09:03.066112141Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:03.067740 containerd[2783]: time="2025-07-07T01:09:03.067722030Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:03.068400 containerd[2783]: time="2025-07-07T01:09:03.068375298Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.165340836s" Jul 7 01:09:03.068428 containerd[2783]: time="2025-07-07T01:09:03.068406073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 01:09:03.069945 containerd[2783]: time="2025-07-07T01:09:03.069923078Z" level=info msg="CreateContainer within sandbox \"1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 01:09:03.073640 containerd[2783]: time="2025-07-07T01:09:03.073613393Z" level=info msg="Container 9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:03.076233 containerd[2783]: time="2025-07-07T01:09:03.076211797Z" level=info msg="CreateContainer within sandbox \"1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595\"" Jul 7 01:09:03.076375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593121810.mount: Deactivated successfully. Jul 7 01:09:03.076526 containerd[2783]: time="2025-07-07T01:09:03.076509035Z" level=info msg="StartContainer for \"9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595\"" Jul 7 01:09:03.077183 containerd[2783]: time="2025-07-07T01:09:03.077161783Z" level=info msg="connecting to shim 9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595" address="unix:///run/containerd/s/cfa913746a2344ba461eccdb24285684be51ba6888a8482247a250abeeb0d506" protocol=ttrpc version=3 Jul 7 01:09:03.110657 systemd[1]: Started cri-containerd-9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595.scope - libcontainer container 9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595. 
Jul 7 01:09:03.130547 containerd[2783]: time="2025-07-07T01:09:03.130525490Z" level=info msg="StartContainer for \"9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595\" returns successfully" Jul 7 01:09:04.106690 kubelet[4295]: I0707 01:09:04.106644 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-ngfbp" podStartSLOduration=1.940335143 podStartE2EDuration="3.106629678s" podCreationTimestamp="2025-07-07 01:09:01 +0000 UTC" firstStartedPulling="2025-07-07 01:09:01.902686909 +0000 UTC m=+6.906327265" lastFinishedPulling="2025-07-07 01:09:03.068981444 +0000 UTC m=+8.072621800" observedRunningTime="2025-07-07 01:09:04.106450485 +0000 UTC m=+9.110090841" watchObservedRunningTime="2025-07-07 01:09:04.106629678 +0000 UTC m=+9.110270034" Jul 7 01:09:07.830131 sudo[3063]: pam_unix(sudo:session): session closed for user root Jul 7 01:09:07.873180 sshd[3062]: Connection closed by 147.75.109.163 port 46978 Jul 7 01:09:07.873483 sshd-session[3060]: pam_unix(sshd:session): session closed for user core Jul 7 01:09:07.876513 systemd[1]: sshd@6-147.28.143.214:22-147.75.109.163:46978.service: Deactivated successfully. Jul 7 01:09:07.878152 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 01:09:07.878359 systemd[1]: session-9.scope: Consumed 6.744s CPU time, 253.6M memory peak. Jul 7 01:09:07.879442 systemd-logind[2767]: Session 9 logged out. Waiting for processes to exit. Jul 7 01:09:07.880270 systemd-logind[2767]: Removed session 9. Jul 7 01:09:10.387573 update_engine[2778]: I20250707 01:09:10.387029 2778 update_attempter.cc:509] Updating boot flags... Jul 7 01:09:13.076996 systemd[1]: Created slice kubepods-besteffort-pod1c8f11b4_fd5b_427f_9821_1efb0c3c63cf.slice - libcontainer container kubepods-besteffort-pod1c8f11b4_fd5b_427f_9821_1efb0c3c63cf.slice. Jul 7 01:09:13.084399 kubelet[4295]: I0707 01:09:13.084367 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c8f11b4-fd5b-427f-9821-1efb0c3c63cf-tigera-ca-bundle\") pod \"calico-typha-7988b9b775-p7l59\" (UID: \"1c8f11b4-fd5b-427f-9821-1efb0c3c63cf\") " pod="calico-system/calico-typha-7988b9b775-p7l59" Jul 7 01:09:13.084399 kubelet[4295]: I0707 01:09:13.084402 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c8f11b4-fd5b-427f-9821-1efb0c3c63cf-typha-certs\") pod \"calico-typha-7988b9b775-p7l59\" (UID: \"1c8f11b4-fd5b-427f-9821-1efb0c3c63cf\") " pod="calico-system/calico-typha-7988b9b775-p7l59" Jul 7 01:09:13.084729 kubelet[4295]: I0707 01:09:13.084421 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kcn\" (UniqueName: \"kubernetes.io/projected/1c8f11b4-fd5b-427f-9821-1efb0c3c63cf-kube-api-access-w6kcn\") pod \"calico-typha-7988b9b775-p7l59\" (UID: \"1c8f11b4-fd5b-427f-9821-1efb0c3c63cf\") " pod="calico-system/calico-typha-7988b9b775-p7l59" Jul 7 01:09:13.327802 systemd[1]: Created slice kubepods-besteffort-podbc12cff9_bf1c_47a2_bd76_51f3ea987e34.slice - libcontainer container kubepods-besteffort-podbc12cff9_bf1c_47a2_bd76_51f3ea987e34.slice. 
Jul 7 01:09:13.379745 containerd[2783]: time="2025-07-07T01:09:13.379699145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7988b9b775-p7l59,Uid:1c8f11b4-fd5b-427f-9821-1efb0c3c63cf,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:13.385925 kubelet[4295]: I0707 01:09:13.385891 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85tfw\" (UniqueName: \"kubernetes.io/projected/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-kube-api-access-85tfw\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386033 kubelet[4295]: I0707 01:09:13.385932 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-cni-log-dir\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386033 kubelet[4295]: I0707 01:09:13.385949 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-var-lib-calico\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386033 kubelet[4295]: I0707 01:09:13.385964 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-var-run-calico\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386033 kubelet[4295]: I0707 01:09:13.385981 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-flexvol-driver-host\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386033 kubelet[4295]: I0707 01:09:13.385997 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-lib-modules\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386145 kubelet[4295]: I0707 01:09:13.386034 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-policysync\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386145 kubelet[4295]: I0707 01:09:13.386049 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-xtables-lock\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386145 kubelet[4295]: I0707 01:09:13.386066 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-cni-bin-dir\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386145 kubelet[4295]: I0707 01:09:13.386081 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-cni-net-dir\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386145 kubelet[4295]: I0707 01:09:13.386128 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-tigera-ca-bundle\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.386245 kubelet[4295]: I0707 01:09:13.386181 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc12cff9-bf1c-47a2-bd76-51f3ea987e34-node-certs\") pod \"calico-node-2rkcm\" (UID: \"bc12cff9-bf1c-47a2-bd76-51f3ea987e34\") " pod="calico-system/calico-node-2rkcm" Jul 7 01:09:13.388188 containerd[2783]: time="2025-07-07T01:09:13.388161730Z" level=info msg="connecting to shim d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8" address="unix:///run/containerd/s/8619ea70683f989ddac15aef4f5357b0d066763be62f981331be21efe5ea5045" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:13.421627 systemd[1]: Started cri-containerd-d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8.scope - libcontainer container d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8. Jul 7 01:09:13.446881 containerd[2783]: time="2025-07-07T01:09:13.446849264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7988b9b775-p7l59,Uid:1c8f11b4-fd5b-427f-9821-1efb0c3c63cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8\"" Jul 7 01:09:13.447774 containerd[2783]: time="2025-07-07T01:09:13.447754987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 01:09:13.488118 kubelet[4295]: E0707 01:09:13.488095 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.488118 kubelet[4295]: W0707 01:09:13.488115 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.488224 kubelet[4295]: E0707 01:09:13.488136 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.491650 kubelet[4295]: E0707 01:09:13.489609 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.491650 kubelet[4295]: W0707 01:09:13.489626 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.491650 kubelet[4295]: E0707 01:09:13.489641 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.495602 kubelet[4295]: E0707 01:09:13.495584 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.495602 kubelet[4295]: W0707 01:09:13.495598 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.495715 kubelet[4295]: E0707 01:09:13.495610 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.569533 kubelet[4295]: E0707 01:09:13.569500 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-268kj" podUID="850a1808-16bf-4143-8ff8-9afb8ec843dc" Jul 7 01:09:13.580137 kubelet[4295]: E0707 01:09:13.580044 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.580137 kubelet[4295]: W0707 01:09:13.580062 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.580137 kubelet[4295]: E0707 01:09:13.580079 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.580266 kubelet[4295]: E0707 01:09:13.580254 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.580301 kubelet[4295]: W0707 01:09:13.580263 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.580301 kubelet[4295]: E0707 01:09:13.580300 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.580485 kubelet[4295]: E0707 01:09:13.580471 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.580485 kubelet[4295]: W0707 01:09:13.580480 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.580542 kubelet[4295]: E0707 01:09:13.580492 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.580669 kubelet[4295]: E0707 01:09:13.580658 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.580669 kubelet[4295]: W0707 01:09:13.580665 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.580722 kubelet[4295]: E0707 01:09:13.580673 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.580858 kubelet[4295]: E0707 01:09:13.580845 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.580858 kubelet[4295]: W0707 01:09:13.580854 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.580901 kubelet[4295]: E0707 01:09:13.580863 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.581034 kubelet[4295]: E0707 01:09:13.581025 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.581056 kubelet[4295]: W0707 01:09:13.581033 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.581056 kubelet[4295]: E0707 01:09:13.581041 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.581237 kubelet[4295]: E0707 01:09:13.581229 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.581267 kubelet[4295]: W0707 01:09:13.581237 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.581267 kubelet[4295]: E0707 01:09:13.581244 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.581423 kubelet[4295]: E0707 01:09:13.581414 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.581443 kubelet[4295]: W0707 01:09:13.581422 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.581443 kubelet[4295]: E0707 01:09:13.581430 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.581614 kubelet[4295]: E0707 01:09:13.581606 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.581641 kubelet[4295]: W0707 01:09:13.581614 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.581641 kubelet[4295]: E0707 01:09:13.581622 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.581809 kubelet[4295]: E0707 01:09:13.581801 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.581830 kubelet[4295]: W0707 01:09:13.581809 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.581830 kubelet[4295]: E0707 01:09:13.581816 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.581985 kubelet[4295]: E0707 01:09:13.581976 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582007 kubelet[4295]: W0707 01:09:13.581984 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582007 kubelet[4295]: E0707 01:09:13.581992 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.582152 kubelet[4295]: E0707 01:09:13.582144 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582176 kubelet[4295]: W0707 01:09:13.582151 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582176 kubelet[4295]: E0707 01:09:13.582159 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.582350 kubelet[4295]: E0707 01:09:13.582342 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582372 kubelet[4295]: W0707 01:09:13.582350 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582372 kubelet[4295]: E0707 01:09:13.582358 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.582529 kubelet[4295]: E0707 01:09:13.582521 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582549 kubelet[4295]: W0707 01:09:13.582528 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582549 kubelet[4295]: E0707 01:09:13.582536 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.582707 kubelet[4295]: E0707 01:09:13.582699 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582707 kubelet[4295]: W0707 01:09:13.582707 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582748 kubelet[4295]: E0707 01:09:13.582713 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.582914 kubelet[4295]: E0707 01:09:13.582906 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.582934 kubelet[4295]: W0707 01:09:13.582914 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.582934 kubelet[4295]: E0707 01:09:13.582922 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.583123 kubelet[4295]: E0707 01:09:13.583115 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.583145 kubelet[4295]: W0707 01:09:13.583123 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.583145 kubelet[4295]: E0707 01:09:13.583131 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.583317 kubelet[4295]: E0707 01:09:13.583309 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.583337 kubelet[4295]: W0707 01:09:13.583317 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.583337 kubelet[4295]: E0707 01:09:13.583324 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.583472 kubelet[4295]: E0707 01:09:13.583464 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.583497 kubelet[4295]: W0707 01:09:13.583472 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.583497 kubelet[4295]: E0707 01:09:13.583481 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.583682 kubelet[4295]: E0707 01:09:13.583674 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.583705 kubelet[4295]: W0707 01:09:13.583681 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.583705 kubelet[4295]: E0707 01:09:13.583688 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.587961 kubelet[4295]: E0707 01:09:13.587946 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.587985 kubelet[4295]: W0707 01:09:13.587962 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.587985 kubelet[4295]: E0707 01:09:13.587976 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.588026 kubelet[4295]: I0707 01:09:13.587998 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tch\" (UniqueName: \"kubernetes.io/projected/850a1808-16bf-4143-8ff8-9afb8ec843dc-kube-api-access-24tch\") pod \"csi-node-driver-268kj\" (UID: \"850a1808-16bf-4143-8ff8-9afb8ec843dc\") " pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:13.588212 kubelet[4295]: E0707 01:09:13.588202 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.588234 kubelet[4295]: W0707 01:09:13.588212 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.588234 kubelet[4295]: E0707 01:09:13.588225 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.588273 kubelet[4295]: I0707 01:09:13.588239 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/850a1808-16bf-4143-8ff8-9afb8ec843dc-socket-dir\") pod \"csi-node-driver-268kj\" (UID: \"850a1808-16bf-4143-8ff8-9afb8ec843dc\") " pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:13.588464 kubelet[4295]: E0707 01:09:13.588455 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.588490 kubelet[4295]: W0707 01:09:13.588464 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.588490 kubelet[4295]: E0707 01:09:13.588475 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.588532 kubelet[4295]: I0707 01:09:13.588494 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/850a1808-16bf-4143-8ff8-9afb8ec843dc-registration-dir\") pod \"csi-node-driver-268kj\" (UID: \"850a1808-16bf-4143-8ff8-9afb8ec843dc\") " pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:13.588695 kubelet[4295]: E0707 01:09:13.588680 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.588717 kubelet[4295]: W0707 01:09:13.588695 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.588717 kubelet[4295]: E0707 01:09:13.588712 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.588887 kubelet[4295]: E0707 01:09:13.588879 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.588909 kubelet[4295]: W0707 01:09:13.588886 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.588909 kubelet[4295]: E0707 01:09:13.588897 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.589070 kubelet[4295]: E0707 01:09:13.589062 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.589090 kubelet[4295]: W0707 01:09:13.589070 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.589090 kubelet[4295]: E0707 01:09:13.589080 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.589234 kubelet[4295]: E0707 01:09:13.589226 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.589254 kubelet[4295]: W0707 01:09:13.589234 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.589254 kubelet[4295]: E0707 01:09:13.589244 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.589467 kubelet[4295]: E0707 01:09:13.589451 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.589491 kubelet[4295]: W0707 01:09:13.589468 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.589519 kubelet[4295]: E0707 01:09:13.589484 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.589519 kubelet[4295]: I0707 01:09:13.589512 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/850a1808-16bf-4143-8ff8-9afb8ec843dc-kubelet-dir\") pod \"csi-node-driver-268kj\" (UID: \"850a1808-16bf-4143-8ff8-9afb8ec843dc\") " pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:13.589750 kubelet[4295]: E0707 01:09:13.589739 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.589772 kubelet[4295]: W0707 01:09:13.589750 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.589772 kubelet[4295]: E0707 01:09:13.589767 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.589807 kubelet[4295]: I0707 01:09:13.589791 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/850a1808-16bf-4143-8ff8-9afb8ec843dc-varrun\") pod \"csi-node-driver-268kj\" (UID: \"850a1808-16bf-4143-8ff8-9afb8ec843dc\") " pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:13.589938 kubelet[4295]: E0707 01:09:13.589929 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.589960 kubelet[4295]: W0707 01:09:13.589937 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.589960 kubelet[4295]: E0707 01:09:13.589953 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.590120 kubelet[4295]: E0707 01:09:13.590112 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.590140 kubelet[4295]: W0707 01:09:13.590120 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.590140 kubelet[4295]: E0707 01:09:13.590131 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.590324 kubelet[4295]: E0707 01:09:13.590317 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.590344 kubelet[4295]: W0707 01:09:13.590324 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.590344 kubelet[4295]: E0707 01:09:13.590334 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.590543 kubelet[4295]: E0707 01:09:13.590535 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.590568 kubelet[4295]: W0707 01:09:13.590545 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.590568 kubelet[4295]: E0707 01:09:13.590552 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.590684 kubelet[4295]: E0707 01:09:13.590676 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.590707 kubelet[4295]: W0707 01:09:13.590684 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.590707 kubelet[4295]: E0707 01:09:13.590691 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.590851 kubelet[4295]: E0707 01:09:13.590842 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.590874 kubelet[4295]: W0707 01:09:13.590851 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.590874 kubelet[4295]: E0707 01:09:13.590858 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.630399 containerd[2783]: time="2025-07-07T01:09:13.630354006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rkcm,Uid:bc12cff9-bf1c-47a2-bd76-51f3ea987e34,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:13.638452 containerd[2783]: time="2025-07-07T01:09:13.638414350Z" level=info msg="connecting to shim 047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f" address="unix:///run/containerd/s/bc62cd8791eb1bca86c61ad18d61e237337d524d4e7c7a47ab3935ef66d983be" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:13.669618 systemd[1]: Started cri-containerd-047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f.scope - libcontainer container 047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f. 
Jul 7 01:09:13.687065 containerd[2783]: time="2025-07-07T01:09:13.687040654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2rkcm,Uid:bc12cff9-bf1c-47a2-bd76-51f3ea987e34,Namespace:calico-system,Attempt:0,} returns sandbox id \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\"" Jul 7 01:09:13.690150 kubelet[4295]: E0707 01:09:13.690133 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.690196 kubelet[4295]: W0707 01:09:13.690150 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.690196 kubelet[4295]: E0707 01:09:13.690168 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.690387 kubelet[4295]: E0707 01:09:13.690378 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.690408 kubelet[4295]: W0707 01:09:13.690387 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.690408 kubelet[4295]: E0707 01:09:13.690399 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.690628 kubelet[4295]: E0707 01:09:13.690619 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.690695 kubelet[4295]: W0707 01:09:13.690629 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.690695 kubelet[4295]: E0707 01:09:13.690641 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.690879 kubelet[4295]: E0707 01:09:13.690870 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.690900 kubelet[4295]: W0707 01:09:13.690879 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.690900 kubelet[4295]: E0707 01:09:13.690890 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.691070 kubelet[4295]: E0707 01:09:13.691062 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691095 kubelet[4295]: W0707 01:09:13.691070 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691095 kubelet[4295]: E0707 01:09:13.691080 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.691251 kubelet[4295]: E0707 01:09:13.691242 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691276 kubelet[4295]: W0707 01:09:13.691251 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691276 kubelet[4295]: E0707 01:09:13.691263 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.691428 kubelet[4295]: E0707 01:09:13.691420 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691448 kubelet[4295]: W0707 01:09:13.691428 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691470 kubelet[4295]: E0707 01:09:13.691451 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.691601 kubelet[4295]: E0707 01:09:13.691593 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691623 kubelet[4295]: W0707 01:09:13.691601 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691623 kubelet[4295]: E0707 01:09:13.691617 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.691740 kubelet[4295]: E0707 01:09:13.691731 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691740 kubelet[4295]: W0707 01:09:13.691739 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691780 kubelet[4295]: E0707 01:09:13.691754 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.691905 kubelet[4295]: E0707 01:09:13.691897 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.691905 kubelet[4295]: W0707 01:09:13.691904 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.691953 kubelet[4295]: E0707 01:09:13.691915 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.692089 kubelet[4295]: E0707 01:09:13.692081 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.692113 kubelet[4295]: W0707 01:09:13.692089 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.692113 kubelet[4295]: E0707 01:09:13.692098 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.692228 kubelet[4295]: E0707 01:09:13.692220 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.692249 kubelet[4295]: W0707 01:09:13.692227 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.692249 kubelet[4295]: E0707 01:09:13.692241 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.692422 kubelet[4295]: E0707 01:09:13.692414 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.692444 kubelet[4295]: W0707 01:09:13.692422 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.692444 kubelet[4295]: E0707 01:09:13.692438 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.692594 kubelet[4295]: E0707 01:09:13.692586 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.692619 kubelet[4295]: W0707 01:09:13.692594 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.692619 kubelet[4295]: E0707 01:09:13.692605 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.692836 kubelet[4295]: E0707 01:09:13.692827 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.692856 kubelet[4295]: W0707 01:09:13.692836 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.692876 kubelet[4295]: E0707 01:09:13.692854 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.693055 kubelet[4295]: E0707 01:09:13.693047 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.693055 kubelet[4295]: W0707 01:09:13.693054 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.693098 kubelet[4295]: E0707 01:09:13.693069 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.693230 kubelet[4295]: E0707 01:09:13.693222 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.693250 kubelet[4295]: W0707 01:09:13.693231 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.693250 kubelet[4295]: E0707 01:09:13.693245 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.693454 kubelet[4295]: E0707 01:09:13.693446 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.693476 kubelet[4295]: W0707 01:09:13.693454 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.693476 kubelet[4295]: E0707 01:09:13.693465 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.693617 kubelet[4295]: E0707 01:09:13.693609 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.693637 kubelet[4295]: W0707 01:09:13.693617 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.693637 kubelet[4295]: E0707 01:09:13.693628 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.693816 kubelet[4295]: E0707 01:09:13.693808 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.693836 kubelet[4295]: W0707 01:09:13.693816 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.693836 kubelet[4295]: E0707 01:09:13.693825 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.694065 kubelet[4295]: E0707 01:09:13.694056 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.694087 kubelet[4295]: W0707 01:09:13.694065 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.694087 kubelet[4295]: E0707 01:09:13.694076 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.694286 kubelet[4295]: E0707 01:09:13.694278 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.694306 kubelet[4295]: W0707 01:09:13.694286 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.694306 kubelet[4295]: E0707 01:09:13.694296 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.694519 kubelet[4295]: E0707 01:09:13.694511 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.694544 kubelet[4295]: W0707 01:09:13.694520 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.694564 kubelet[4295]: E0707 01:09:13.694541 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.694686 kubelet[4295]: E0707 01:09:13.694679 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.694707 kubelet[4295]: W0707 01:09:13.694686 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.694707 kubelet[4295]: E0707 01:09:13.694695 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:13.695013 kubelet[4295]: E0707 01:09:13.695002 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.695033 kubelet[4295]: W0707 01:09:13.695015 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.695033 kubelet[4295]: E0707 01:09:13.695025 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:13.703080 kubelet[4295]: E0707 01:09:13.703062 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:13.703080 kubelet[4295]: W0707 01:09:13.703075 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:13.703132 kubelet[4295]: E0707 01:09:13.703089 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:14.456006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512525798.mount: Deactivated successfully. Jul 7 01:09:14.861855 containerd[2783]: time="2025-07-07T01:09:14.861816549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:14.862205 containerd[2783]: time="2025-07-07T01:09:14.861849789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 01:09:14.862464 containerd[2783]: time="2025-07-07T01:09:14.862447871Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:14.863858 containerd[2783]: time="2025-07-07T01:09:14.863837755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:14.864409 containerd[2783]: time="2025-07-07T01:09:14.864395116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.416614689s" Jul 7 01:09:14.864442 containerd[2783]: time="2025-07-07T01:09:14.864416236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 01:09:14.865090 containerd[2783]: time="2025-07-07T01:09:14.865071638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 01:09:14.869606 containerd[2783]: time="2025-07-07T01:09:14.869585211Z" level=info msg="CreateContainer within sandbox \"d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 01:09:14.879561 containerd[2783]: time="2025-07-07T01:09:14.879536798Z" level=info msg="Container eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:14.882884 containerd[2783]: time="2025-07-07T01:09:14.882859528Z" level=info msg="CreateContainer within sandbox \"d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406\"" Jul 7 01:09:14.883156 containerd[2783]: time="2025-07-07T01:09:14.883129569Z" level=info msg="StartContainer for \"eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406\"" Jul 7 01:09:14.884109 containerd[2783]: time="2025-07-07T01:09:14.884087611Z" level=info msg="connecting to shim eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406" address="unix:///run/containerd/s/8619ea70683f989ddac15aef4f5357b0d066763be62f981331be21efe5ea5045" protocol=ttrpc version=3 Jul 7 01:09:14.911687 systemd[1]: Started cri-containerd-eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406.scope - libcontainer container eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406. Jul 7 01:09:14.939871 containerd[2783]: time="2025-07-07T01:09:14.939844807Z" level=info msg="StartContainer for \"eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406\" returns successfully" Jul 7 01:09:15.081451 kubelet[4295]: E0707 01:09:15.081399 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-268kj" podUID="850a1808-16bf-4143-8ff8-9afb8ec843dc" Jul 7 01:09:15.126005 kubelet[4295]: I0707 01:09:15.125846 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7988b9b775-p7l59" podStartSLOduration=0.708440377 podStartE2EDuration="2.125830269s" podCreationTimestamp="2025-07-07 01:09:13 +0000 UTC" firstStartedPulling="2025-07-07 01:09:13.447563026 +0000 UTC m=+18.451203382" lastFinishedPulling="2025-07-07 01:09:14.864952918 +0000 UTC m=+19.868593274" observedRunningTime="2025-07-07 01:09:15.125791148 +0000 UTC m=+20.129431504" watchObservedRunningTime="2025-07-07 01:09:15.125830269 +0000 UTC m=+20.129470625" Jul 7 01:09:15.193800 kubelet[4295]: E0707 01:09:15.193768 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.193800 kubelet[4295]: W0707 01:09:15.193788 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.193800 kubelet[4295]: E0707 01:09:15.193805 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.194017 kubelet[4295]: E0707 01:09:15.194007 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.194049 kubelet[4295]: W0707 01:09:15.194015 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.194074 kubelet[4295]: E0707 01:09:15.194051 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.194256 kubelet[4295]: E0707 01:09:15.194248 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.194279 kubelet[4295]: W0707 01:09:15.194256 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.194279 kubelet[4295]: E0707 01:09:15.194263 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.194474 kubelet[4295]: E0707 01:09:15.194466 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.194474 kubelet[4295]: W0707 01:09:15.194473 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.194539 kubelet[4295]: E0707 01:09:15.194482 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.194646 kubelet[4295]: E0707 01:09:15.194637 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.194666 kubelet[4295]: W0707 01:09:15.194645 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.194666 kubelet[4295]: E0707 01:09:15.194652 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.194849 kubelet[4295]: E0707 01:09:15.194841 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.194872 kubelet[4295]: W0707 01:09:15.194849 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.194872 kubelet[4295]: E0707 01:09:15.194857 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.195053 kubelet[4295]: E0707 01:09:15.195046 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195076 kubelet[4295]: W0707 01:09:15.195053 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195076 kubelet[4295]: E0707 01:09:15.195060 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.195254 kubelet[4295]: E0707 01:09:15.195247 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195277 kubelet[4295]: W0707 01:09:15.195254 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195277 kubelet[4295]: E0707 01:09:15.195261 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.195459 kubelet[4295]: E0707 01:09:15.195451 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195479 kubelet[4295]: W0707 01:09:15.195458 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195479 kubelet[4295]: E0707 01:09:15.195466 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.195590 kubelet[4295]: E0707 01:09:15.195583 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195659 kubelet[4295]: W0707 01:09:15.195590 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195659 kubelet[4295]: E0707 01:09:15.195597 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.195711 kubelet[4295]: E0707 01:09:15.195703 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195711 kubelet[4295]: W0707 01:09:15.195711 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195748 kubelet[4295]: E0707 01:09:15.195717 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.195910 kubelet[4295]: E0707 01:09:15.195903 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.195936 kubelet[4295]: W0707 01:09:15.195910 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.195936 kubelet[4295]: E0707 01:09:15.195917 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.196113 kubelet[4295]: E0707 01:09:15.196106 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.196136 kubelet[4295]: W0707 01:09:15.196113 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.196136 kubelet[4295]: E0707 01:09:15.196120 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.196240 kubelet[4295]: E0707 01:09:15.196233 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.196262 kubelet[4295]: W0707 01:09:15.196240 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.196262 kubelet[4295]: E0707 01:09:15.196247 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.196442 kubelet[4295]: E0707 01:09:15.196435 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.196465 kubelet[4295]: W0707 01:09:15.196442 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.196465 kubelet[4295]: E0707 01:09:15.196449 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.203684 kubelet[4295]: E0707 01:09:15.203665 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.203684 kubelet[4295]: W0707 01:09:15.203680 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.203790 kubelet[4295]: E0707 01:09:15.203693 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.203836 kubelet[4295]: E0707 01:09:15.203826 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.203836 kubelet[4295]: W0707 01:09:15.203834 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.203879 kubelet[4295]: E0707 01:09:15.203845 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.204001 kubelet[4295]: E0707 01:09:15.203992 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.204024 kubelet[4295]: W0707 01:09:15.204000 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.204024 kubelet[4295]: E0707 01:09:15.204012 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.204260 kubelet[4295]: E0707 01:09:15.204251 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.204280 kubelet[4295]: W0707 01:09:15.204260 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.204280 kubelet[4295]: E0707 01:09:15.204272 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.204453 kubelet[4295]: E0707 01:09:15.204445 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.204473 kubelet[4295]: W0707 01:09:15.204452 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.204473 kubelet[4295]: E0707 01:09:15.204462 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.204593 kubelet[4295]: E0707 01:09:15.204585 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.204613 kubelet[4295]: W0707 01:09:15.204592 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.204613 kubelet[4295]: E0707 01:09:15.204603 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.204753 kubelet[4295]: E0707 01:09:15.204745 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.204778 kubelet[4295]: W0707 01:09:15.204753 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.204778 kubelet[4295]: E0707 01:09:15.204763 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.205040 kubelet[4295]: E0707 01:09:15.205030 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205060 kubelet[4295]: W0707 01:09:15.205041 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205060 kubelet[4295]: E0707 01:09:15.205054 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.205228 kubelet[4295]: E0707 01:09:15.205219 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205250 kubelet[4295]: W0707 01:09:15.205228 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205250 kubelet[4295]: E0707 01:09:15.205239 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.205441 kubelet[4295]: E0707 01:09:15.205433 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205461 kubelet[4295]: W0707 01:09:15.205441 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205461 kubelet[4295]: E0707 01:09:15.205452 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.205589 kubelet[4295]: E0707 01:09:15.205576 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205589 kubelet[4295]: W0707 01:09:15.205584 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205634 kubelet[4295]: E0707 01:09:15.205593 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.205733 kubelet[4295]: E0707 01:09:15.205724 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205733 kubelet[4295]: W0707 01:09:15.205732 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205774 kubelet[4295]: E0707 01:09:15.205742 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.205927 kubelet[4295]: E0707 01:09:15.205919 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.205950 kubelet[4295]: W0707 01:09:15.205927 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.205950 kubelet[4295]: E0707 01:09:15.205936 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.206145 kubelet[4295]: E0707 01:09:15.206137 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.206165 kubelet[4295]: W0707 01:09:15.206144 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.206165 kubelet[4295]: E0707 01:09:15.206154 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.206305 kubelet[4295]: E0707 01:09:15.206298 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.206328 kubelet[4295]: W0707 01:09:15.206305 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.206328 kubelet[4295]: E0707 01:09:15.206315 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.206502 kubelet[4295]: E0707 01:09:15.206495 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.206523 kubelet[4295]: W0707 01:09:15.206502 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.206523 kubelet[4295]: E0707 01:09:15.206513 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 01:09:15.206748 kubelet[4295]: E0707 01:09:15.206735 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.206748 kubelet[4295]: W0707 01:09:15.206746 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.206794 kubelet[4295]: E0707 01:09:15.206755 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.206910 kubelet[4295]: E0707 01:09:15.206898 4295 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 01:09:15.206930 kubelet[4295]: W0707 01:09:15.206909 4295 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 01:09:15.206930 kubelet[4295]: E0707 01:09:15.206919 4295 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 01:09:15.774501 containerd[2783]: time="2025-07-07T01:09:15.774467266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 01:09:15.774501 containerd[2783]: time="2025-07-07T01:09:15.774480866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:15.775765 containerd[2783]: time="2025-07-07T01:09:15.775731669Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:15.778831 containerd[2783]: time="2025-07-07T01:09:15.778802357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:15.779273 containerd[2783]: time="2025-07-07T01:09:15.779246798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 914.14492ms" Jul 7 01:09:15.779307 containerd[2783]: time="2025-07-07T01:09:15.779276679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 01:09:15.780845 containerd[2783]: time="2025-07-07T01:09:15.780822163Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 01:09:15.785111 containerd[2783]: time="2025-07-07T01:09:15.785084254Z" level=info msg="Container 3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c: CDI devices 
from CRI Config.CDIDevices: []" Jul 7 01:09:15.788888 containerd[2783]: time="2025-07-07T01:09:15.788866824Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\"" Jul 7 01:09:15.789128 containerd[2783]: time="2025-07-07T01:09:15.789113065Z" level=info msg="StartContainer for \"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\"" Jul 7 01:09:15.790376 containerd[2783]: time="2025-07-07T01:09:15.790356028Z" level=info msg="connecting to shim 3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c" address="unix:///run/containerd/s/bc62cd8791eb1bca86c61ad18d61e237337d524d4e7c7a47ab3935ef66d983be" protocol=ttrpc version=3 Jul 7 01:09:15.819659 systemd[1]: Started cri-containerd-3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c.scope - libcontainer container 3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c. Jul 7 01:09:15.845899 containerd[2783]: time="2025-07-07T01:09:15.845866895Z" level=info msg="StartContainer for \"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\" returns successfully" Jul 7 01:09:15.855747 systemd[1]: cri-containerd-3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c.scope: Deactivated successfully. Jul 7 01:09:15.857150 containerd[2783]: time="2025-07-07T01:09:15.857122165Z" level=info msg="received exit event container_id:\"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\" id:\"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\" pid:5335 exited_at:{seconds:1751850555 nanos:856900724}" Jul 7 01:09:15.857223 containerd[2783]: time="2025-07-07T01:09:15.857201525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\" id:\"3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c\" pid:5335 exited_at:{seconds:1751850555 nanos:856900724}" Jul 7 01:09:15.872671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c-rootfs.mount: Deactivated successfully. 
Jul 7 01:09:16.121212 kubelet[4295]: I0707 01:09:16.121186 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:17.080503 kubelet[4295]: E0707 01:09:17.080453 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-268kj" podUID="850a1808-16bf-4143-8ff8-9afb8ec843dc" Jul 7 01:09:17.124110 containerd[2783]: time="2025-07-07T01:09:17.124083906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 01:09:19.079729 kubelet[4295]: E0707 01:09:19.079680 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-268kj" podUID="850a1808-16bf-4143-8ff8-9afb8ec843dc" Jul 7 01:09:20.039170 containerd[2783]: time="2025-07-07T01:09:20.039092660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:20.039170 containerd[2783]: time="2025-07-07T01:09:20.039134220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 01:09:20.039741 containerd[2783]: time="2025-07-07T01:09:20.039720901Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:20.041322 containerd[2783]: time="2025-07-07T01:09:20.041301784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:20.041920 containerd[2783]: time="2025-07-07T01:09:20.041907345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.917793159s" Jul 7 01:09:20.041953 containerd[2783]: time="2025-07-07T01:09:20.041927465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 01:09:20.043583 containerd[2783]: time="2025-07-07T01:09:20.043564949Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 01:09:20.047942 containerd[2783]: time="2025-07-07T01:09:20.047915678Z" level=info msg="Container a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:20.052611 containerd[2783]: time="2025-07-07T01:09:20.052580807Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\"" Jul 7 01:09:20.052934 containerd[2783]: time="2025-07-07T01:09:20.052911928Z" 
level=info msg="StartContainer for \"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\"" Jul 7 01:09:20.054238 containerd[2783]: time="2025-07-07T01:09:20.054216970Z" level=info msg="connecting to shim a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0" address="unix:///run/containerd/s/bc62cd8791eb1bca86c61ad18d61e237337d524d4e7c7a47ab3935ef66d983be" protocol=ttrpc version=3 Jul 7 01:09:20.074687 systemd[1]: Started cri-containerd-a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0.scope - libcontainer container a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0. Jul 7 01:09:20.101727 containerd[2783]: time="2025-07-07T01:09:20.101680107Z" level=info msg="StartContainer for \"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\" returns successfully" Jul 7 01:09:20.400298 kubelet[4295]: I0707 01:09:20.400263 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:20.486881 containerd[2783]: time="2025-07-07T01:09:20.486834849Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 01:09:20.488535 systemd[1]: cri-containerd-a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0.scope: Deactivated successfully. Jul 7 01:09:20.488865 systemd[1]: cri-containerd-a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0.scope: Consumed 1.002s CPU time, 199.2M memory peak, 165.8M written to disk. Jul 7 01:09:20.489775 containerd[2783]: time="2025-07-07T01:09:20.489752735Z" level=info msg="received exit event container_id:\"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\" id:\"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\" pid:5398 exited_at:{seconds:1751850560 nanos:489620255}" Jul 7 01:09:20.489873 containerd[2783]: time="2025-07-07T01:09:20.489850895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\" id:\"a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0\" pid:5398 exited_at:{seconds:1751850560 nanos:489620255}" Jul 7 01:09:20.497953 kubelet[4295]: I0707 01:09:20.497933 4295 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 01:09:20.504856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0-rootfs.mount: Deactivated successfully. Jul 7 01:09:20.518096 systemd[1]: Created slice kubepods-burstable-pod2a78675e_0157_4efe_b211_95cf133f1f55.slice - libcontainer container kubepods-burstable-pod2a78675e_0157_4efe_b211_95cf133f1f55.slice. Jul 7 01:09:20.522764 systemd[1]: Created slice kubepods-besteffort-podec64afa8_a08c_4cee_a39d_b0e2bcc7abd7.slice - libcontainer container kubepods-besteffort-podec64afa8_a08c_4cee_a39d_b0e2bcc7abd7.slice. Jul 7 01:09:20.526886 systemd[1]: Created slice kubepods-besteffort-podd029f356_f508_4fb9_b349_4cfa825d4193.slice - libcontainer container kubepods-besteffort-podd029f356_f508_4fb9_b349_4cfa825d4193.slice. Jul 7 01:09:20.531282 systemd[1]: Created slice kubepods-burstable-pod5bb0abc2_d84a_419a_943f_f21761ed7bef.slice - libcontainer container kubepods-burstable-pod5bb0abc2_d84a_419a_943f_f21761ed7bef.slice. 
Jul 7 01:09:20.535571 systemd[1]: Created slice kubepods-besteffort-podffe33ac8_1739_4114_bd6d_77dcb02988b0.slice - libcontainer container kubepods-besteffort-podffe33ac8_1739_4114_bd6d_77dcb02988b0.slice. Jul 7 01:09:20.536832 kubelet[4295]: I0707 01:09:20.536793 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3fbf6309-7289-4afb-ad59-b1950fe394eb-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lzdth\" (UID: \"3fbf6309-7289-4afb-ad59-b1950fe394eb\") " pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.536832 kubelet[4295]: I0707 01:09:20.536828 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch7jl\" (UniqueName: \"kubernetes.io/projected/ffe33ac8-1739-4114-bd6d-77dcb02988b0-kube-api-access-ch7jl\") pod \"calico-apiserver-dc648bb98-s9htc\" (UID: \"ffe33ac8-1739-4114-bd6d-77dcb02988b0\") " pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" Jul 7 01:09:20.536924 kubelet[4295]: I0707 01:09:20.536849 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7-tigera-ca-bundle\") pod \"calico-kube-controllers-7799fb6bf9-w8x42\" (UID: \"ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7\") " pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" Jul 7 01:09:20.536924 kubelet[4295]: I0707 01:09:20.536867 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4lm8\" (UniqueName: \"kubernetes.io/projected/3fbf6309-7289-4afb-ad59-b1950fe394eb-kube-api-access-t4lm8\") pod \"goldmane-768f4c5c69-lzdth\" (UID: \"3fbf6309-7289-4afb-ad59-b1950fe394eb\") " pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.536924 kubelet[4295]: I0707 01:09:20.536883 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phslp\" (UniqueName: \"kubernetes.io/projected/d029f356-f508-4fb9-b349-4cfa825d4193-kube-api-access-phslp\") pod \"calico-apiserver-dc648bb98-68h7b\" (UID: \"d029f356-f508-4fb9-b349-4cfa825d4193\") " pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" Jul 7 01:09:20.536924 kubelet[4295]: I0707 01:09:20.536901 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjh4\" (UniqueName: \"kubernetes.io/projected/5df43d02-577e-48ea-b71d-12b96dda8e07-kube-api-access-8zjh4\") pod \"whisker-564dd9f995-8nmq6\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " pod="calico-system/whisker-564dd9f995-8nmq6" Jul 7 01:09:20.537008 kubelet[4295]: I0707 01:09:20.536932 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9k2\" (UniqueName: \"kubernetes.io/projected/5bb0abc2-d84a-419a-943f-f21761ed7bef-kube-api-access-jt9k2\") pod \"coredns-668d6bf9bc-tgbsq\" (UID: \"5bb0abc2-d84a-419a-943f-f21761ed7bef\") " pod="kube-system/coredns-668d6bf9bc-tgbsq" Jul 7 01:09:20.537008 kubelet[4295]: I0707 01:09:20.536948 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d029f356-f508-4fb9-b349-4cfa825d4193-calico-apiserver-certs\") pod \"calico-apiserver-dc648bb98-68h7b\" (UID: \"d029f356-f508-4fb9-b349-4cfa825d4193\") " 
pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" Jul 7 01:09:20.537008 kubelet[4295]: I0707 01:09:20.536963 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-backend-key-pair\") pod \"whisker-564dd9f995-8nmq6\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " pod="calico-system/whisker-564dd9f995-8nmq6" Jul 7 01:09:20.537008 kubelet[4295]: I0707 01:09:20.536978 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fbf6309-7289-4afb-ad59-b1950fe394eb-config\") pod \"goldmane-768f4c5c69-lzdth\" (UID: \"3fbf6309-7289-4afb-ad59-b1950fe394eb\") " pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.537008 kubelet[4295]: I0707 01:09:20.536994 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb0abc2-d84a-419a-943f-f21761ed7bef-config-volume\") pod \"coredns-668d6bf9bc-tgbsq\" (UID: \"5bb0abc2-d84a-419a-943f-f21761ed7bef\") " pod="kube-system/coredns-668d6bf9bc-tgbsq" Jul 7 01:09:20.537111 kubelet[4295]: I0707 01:09:20.537010 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fbf6309-7289-4afb-ad59-b1950fe394eb-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lzdth\" (UID: \"3fbf6309-7289-4afb-ad59-b1950fe394eb\") " pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.537111 kubelet[4295]: I0707 01:09:20.537031 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-ca-bundle\") pod \"whisker-564dd9f995-8nmq6\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " pod="calico-system/whisker-564dd9f995-8nmq6" Jul 7 01:09:20.537111 kubelet[4295]: I0707 01:09:20.537048 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffe33ac8-1739-4114-bd6d-77dcb02988b0-calico-apiserver-certs\") pod \"calico-apiserver-dc648bb98-s9htc\" (UID: \"ffe33ac8-1739-4114-bd6d-77dcb02988b0\") " pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" Jul 7 01:09:20.537111 kubelet[4295]: I0707 01:09:20.537066 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz796\" (UniqueName: \"kubernetes.io/projected/ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7-kube-api-access-pz796\") pod \"calico-kube-controllers-7799fb6bf9-w8x42\" (UID: \"ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7\") " pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" Jul 7 01:09:20.537111 kubelet[4295]: I0707 01:09:20.537087 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a78675e-0157-4efe-b211-95cf133f1f55-config-volume\") pod \"coredns-668d6bf9bc-d2whz\" (UID: \"2a78675e-0157-4efe-b211-95cf133f1f55\") " pod="kube-system/coredns-668d6bf9bc-d2whz" Jul 7 01:09:20.537214 kubelet[4295]: I0707 01:09:20.537103 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nrg\" (UniqueName: 
\"kubernetes.io/projected/2a78675e-0157-4efe-b211-95cf133f1f55-kube-api-access-b8nrg\") pod \"coredns-668d6bf9bc-d2whz\" (UID: \"2a78675e-0157-4efe-b211-95cf133f1f55\") " pod="kube-system/coredns-668d6bf9bc-d2whz" Jul 7 01:09:20.539666 systemd[1]: Created slice kubepods-besteffort-pod3fbf6309_7289_4afb_ad59_b1950fe394eb.slice - libcontainer container kubepods-besteffort-pod3fbf6309_7289_4afb_ad59_b1950fe394eb.slice. Jul 7 01:09:20.544881 systemd[1]: Created slice kubepods-besteffort-pod5df43d02_577e_48ea_b71d_12b96dda8e07.slice - libcontainer container kubepods-besteffort-pod5df43d02_577e_48ea_b71d_12b96dda8e07.slice. Jul 7 01:09:20.821810 containerd[2783]: time="2025-07-07T01:09:20.821723169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2whz,Uid:2a78675e-0157-4efe-b211-95cf133f1f55,Namespace:kube-system,Attempt:0,}" Jul 7 01:09:20.825213 containerd[2783]: time="2025-07-07T01:09:20.825185656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7799fb6bf9-w8x42,Uid:ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:20.829770 containerd[2783]: time="2025-07-07T01:09:20.829748345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-68h7b,Uid:d029f356-f508-4fb9-b349-4cfa825d4193,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:09:20.841391 containerd[2783]: time="2025-07-07T01:09:20.841368769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-s9htc,Uid:ffe33ac8-1739-4114-bd6d-77dcb02988b0,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:09:20.841455 containerd[2783]: time="2025-07-07T01:09:20.841431849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lzdth,Uid:3fbf6309-7289-4afb-ad59-b1950fe394eb,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:20.841516 containerd[2783]: time="2025-07-07T01:09:20.841435889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgbsq,Uid:5bb0abc2-d84a-419a-943f-f21761ed7bef,Namespace:kube-system,Attempt:0,}" Jul 7 01:09:20.847223 containerd[2783]: time="2025-07-07T01:09:20.847191181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564dd9f995-8nmq6,Uid:5df43d02-577e-48ea-b71d-12b96dda8e07,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:20.876999 containerd[2783]: time="2025-07-07T01:09:20.876956881Z" level=error msg="Failed to destroy network for sandbox \"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877374 containerd[2783]: time="2025-07-07T01:09:20.877299402Z" level=error msg="Failed to destroy network for sandbox \"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877433 containerd[2783]: time="2025-07-07T01:09:20.877386442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7799fb6bf9-w8x42,Uid:ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877630 kubelet[4295]: E0707 01:09:20.877587 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877711 kubelet[4295]: E0707 01:09:20.877664 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" Jul 7 01:09:20.877711 kubelet[4295]: E0707 01:09:20.877683 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" Jul 7 01:09:20.877757 kubelet[4295]: E0707 01:09:20.877724 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7799fb6bf9-w8x42_calico-system(ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7799fb6bf9-w8x42_calico-system(ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d7eb76f83fecc0d5fedb98624a139634cb0db0c8f03a295eaf8f87b66f349d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" podUID="ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7" Jul 7 01:09:20.877802 containerd[2783]: time="2025-07-07T01:09:20.877662243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2whz,Uid:2a78675e-0157-4efe-b211-95cf133f1f55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877802 containerd[2783]: time="2025-07-07T01:09:20.877685683Z" level=error msg="Failed to destroy network for sandbox \"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877886 kubelet[4295]: E0707 01:09:20.877855 4295 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.877917 kubelet[4295]: E0707 01:09:20.877905 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2whz" Jul 7 01:09:20.877943 kubelet[4295]: E0707 01:09:20.877922 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d2whz" Jul 7 01:09:20.877976 kubelet[4295]: E0707 01:09:20.877956 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d2whz_kube-system(2a78675e-0157-4efe-b211-95cf133f1f55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d2whz_kube-system(2a78675e-0157-4efe-b211-95cf133f1f55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c3a5354e7241478f87cca6460c5be5d123606d5ba2118295ef6a3b8a10ba76e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d2whz" podUID="2a78675e-0157-4efe-b211-95cf133f1f55" Jul 7 01:09:20.878120 containerd[2783]: time="2025-07-07T01:09:20.878095203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-68h7b,Uid:d029f356-f508-4fb9-b349-4cfa825d4193,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.878259 kubelet[4295]: E0707 01:09:20.878237 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.878286 kubelet[4295]: E0707 01:09:20.878273 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" Jul 7 01:09:20.878307 kubelet[4295]: E0707 01:09:20.878290 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" Jul 7 01:09:20.878332 kubelet[4295]: E0707 01:09:20.878318 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc648bb98-68h7b_calico-apiserver(d029f356-f508-4fb9-b349-4cfa825d4193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc648bb98-68h7b_calico-apiserver(d029f356-f508-4fb9-b349-4cfa825d4193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"722778d9212ad0f4277db5002c3034593e7321535d7320d3e77f23c618472912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" podUID="d029f356-f508-4fb9-b349-4cfa825d4193" Jul 7 01:09:20.883324 containerd[2783]: time="2025-07-07T01:09:20.883294774Z" level=error msg="Failed to destroy network for sandbox \"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.883700 containerd[2783]: time="2025-07-07T01:09:20.883675615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lzdth,Uid:3fbf6309-7289-4afb-ad59-b1950fe394eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.883835 kubelet[4295]: E0707 01:09:20.883817 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.883874 kubelet[4295]: E0707 01:09:20.883842 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.883874 kubelet[4295]: E0707 01:09:20.883858 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lzdth" Jul 7 01:09:20.883923 kubelet[4295]: E0707 01:09:20.883883 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lzdth_calico-system(3fbf6309-7289-4afb-ad59-b1950fe394eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lzdth_calico-system(3fbf6309-7289-4afb-ad59-b1950fe394eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61c6fc45347ef8133206a326ec5271d6f0ed437923a6beb9b75efb24f47dc25e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lzdth" podUID="3fbf6309-7289-4afb-ad59-b1950fe394eb" Jul 7 01:09:20.884315 containerd[2783]: time="2025-07-07T01:09:20.884289336Z" level=error msg="Failed to destroy network for sandbox \"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.884681 containerd[2783]: time="2025-07-07T01:09:20.884656577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-s9htc,Uid:ffe33ac8-1739-4114-bd6d-77dcb02988b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.884813 kubelet[4295]: E0707 01:09:20.884790 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.884845 kubelet[4295]: E0707 01:09:20.884825 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" Jul 7 01:09:20.884845 kubelet[4295]: E0707 01:09:20.884839 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" Jul 7 01:09:20.884886 
kubelet[4295]: E0707 01:09:20.884869 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc648bb98-s9htc_calico-apiserver(ffe33ac8-1739-4114-bd6d-77dcb02988b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc648bb98-s9htc_calico-apiserver(ffe33ac8-1739-4114-bd6d-77dcb02988b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d70f118ee26c82a4ecf6440347b4a40d5b5087d6434752aa3819965230c8e930\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" podUID="ffe33ac8-1739-4114-bd6d-77dcb02988b0" Jul 7 01:09:20.885139 containerd[2783]: time="2025-07-07T01:09:20.885113858Z" level=error msg="Failed to destroy network for sandbox \"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.885429 containerd[2783]: time="2025-07-07T01:09:20.885403898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgbsq,Uid:5bb0abc2-d84a-419a-943f-f21761ed7bef,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.885547 kubelet[4295]: E0707 01:09:20.885518 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.885585 kubelet[4295]: E0707 01:09:20.885564 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tgbsq" Jul 7 01:09:20.885585 kubelet[4295]: E0707 01:09:20.885580 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tgbsq" Jul 7 01:09:20.885644 kubelet[4295]: E0707 01:09:20.885612 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tgbsq_kube-system(5bb0abc2-d84a-419a-943f-f21761ed7bef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-tgbsq_kube-system(5bb0abc2-d84a-419a-943f-f21761ed7bef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7780338acc83ac4e04b83294c410fcbe4751e8917f5aef00556cc2485a891f65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tgbsq" podUID="5bb0abc2-d84a-419a-943f-f21761ed7bef" Jul 7 01:09:20.887311 containerd[2783]: time="2025-07-07T01:09:20.887282702Z" level=error msg="Failed to destroy network for sandbox \"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.887625 containerd[2783]: time="2025-07-07T01:09:20.887599503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564dd9f995-8nmq6,Uid:5df43d02-577e-48ea-b71d-12b96dda8e07,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.887733 kubelet[4295]: E0707 01:09:20.887710 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:20.887765 kubelet[4295]: E0707 01:09:20.887745 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564dd9f995-8nmq6" Jul 7 01:09:20.887765 kubelet[4295]: E0707 01:09:20.887760 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-564dd9f995-8nmq6" Jul 7 01:09:20.887813 kubelet[4295]: E0707 01:09:20.887788 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-564dd9f995-8nmq6_calico-system(5df43d02-577e-48ea-b71d-12b96dda8e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-564dd9f995-8nmq6_calico-system(5df43d02-577e-48ea-b71d-12b96dda8e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c21cf1d257090ff140c5ac15e4727d559acd2abd71ffbdf6ac50cb026ddd85dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-564dd9f995-8nmq6" podUID="5df43d02-577e-48ea-b71d-12b96dda8e07" Jul 7 01:09:21.084805 systemd[1]: Created slice kubepods-besteffort-pod850a1808_16bf_4143_8ff8_9afb8ec843dc.slice - libcontainer container kubepods-besteffort-pod850a1808_16bf_4143_8ff8_9afb8ec843dc.slice. Jul 7 01:09:21.086441 containerd[2783]: time="2025-07-07T01:09:21.086417618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-268kj,Uid:850a1808-16bf-4143-8ff8-9afb8ec843dc,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:21.126423 containerd[2783]: time="2025-07-07T01:09:21.126385055Z" level=error msg="Failed to destroy network for sandbox \"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:21.126764 containerd[2783]: time="2025-07-07T01:09:21.126736536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-268kj,Uid:850a1808-16bf-4143-8ff8-9afb8ec843dc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:21.126920 kubelet[4295]: E0707 01:09:21.126895 4295 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 01:09:21.126959 kubelet[4295]: E0707 01:09:21.126941 4295 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:21.126995 kubelet[4295]: E0707 01:09:21.126955 4295 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-268kj" Jul 7 01:09:21.127032 kubelet[4295]: E0707 01:09:21.127007 4295 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-268kj_calico-system(850a1808-16bf-4143-8ff8-9afb8ec843dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-268kj_calico-system(850a1808-16bf-4143-8ff8-9afb8ec843dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b8969d1d043a22839f4a02de2b8c5e719617d60f051ec527a3d62cbfe97644d\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-268kj" podUID="850a1808-16bf-4143-8ff8-9afb8ec843dc" Jul 7 01:09:21.128016 systemd[1]: run-netns-cni\x2de18ec401\x2d2831\x2d8f5a\x2d5ec3\x2d5221ba3c868b.mount: Deactivated successfully. Jul 7 01:09:21.133224 containerd[2783]: time="2025-07-07T01:09:21.133205068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 01:09:24.538620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987266412.mount: Deactivated successfully. Jul 7 01:09:24.555081 containerd[2783]: time="2025-07-07T01:09:24.555034084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 01:09:24.555282 containerd[2783]: time="2025-07-07T01:09:24.555043764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:24.555710 containerd[2783]: time="2025-07-07T01:09:24.555692525Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:24.557093 containerd[2783]: time="2025-07-07T01:09:24.557069247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:24.557610 containerd[2783]: time="2025-07-07T01:09:24.557591928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.4243595s" Jul 7 01:09:24.557637 containerd[2783]: time="2025-07-07T01:09:24.557616128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 01:09:24.563907 containerd[2783]: time="2025-07-07T01:09:24.563883779Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 01:09:24.573519 containerd[2783]: time="2025-07-07T01:09:24.573482955Z" level=info msg="Container 862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:24.579355 containerd[2783]: time="2025-07-07T01:09:24.579324404Z" level=info msg="CreateContainer within sandbox \"047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\"" Jul 7 01:09:24.579752 containerd[2783]: time="2025-07-07T01:09:24.579727525Z" level=info msg="StartContainer for \"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\"" Jul 7 01:09:24.581127 containerd[2783]: time="2025-07-07T01:09:24.581103967Z" level=info msg="connecting to shim 862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327" address="unix:///run/containerd/s/bc62cd8791eb1bca86c61ad18d61e237337d524d4e7c7a47ab3935ef66d983be" protocol=ttrpc version=3 Jul 
7 01:09:24.613669 systemd[1]: Started cri-containerd-862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327.scope - libcontainer container 862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327. Jul 7 01:09:24.643069 containerd[2783]: time="2025-07-07T01:09:24.643041030Z" level=info msg="StartContainer for \"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" returns successfully" Jul 7 01:09:24.762575 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 01:09:24.762691 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 01:09:24.959419 kubelet[4295]: I0707 01:09:24.959376 4295 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-ca-bundle\") pod \"5df43d02-577e-48ea-b71d-12b96dda8e07\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " Jul 7 01:09:24.959419 kubelet[4295]: I0707 01:09:24.959425 4295 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-backend-key-pair\") pod \"5df43d02-577e-48ea-b71d-12b96dda8e07\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " Jul 7 01:09:24.959748 kubelet[4295]: I0707 01:09:24.959453 4295 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjh4\" (UniqueName: \"kubernetes.io/projected/5df43d02-577e-48ea-b71d-12b96dda8e07-kube-api-access-8zjh4\") pod \"5df43d02-577e-48ea-b71d-12b96dda8e07\" (UID: \"5df43d02-577e-48ea-b71d-12b96dda8e07\") " Jul 7 01:09:24.959772 kubelet[4295]: I0707 01:09:24.959739 4295 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5df43d02-577e-48ea-b71d-12b96dda8e07" (UID: "5df43d02-577e-48ea-b71d-12b96dda8e07"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 01:09:24.961601 kubelet[4295]: I0707 01:09:24.961577 4295 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df43d02-577e-48ea-b71d-12b96dda8e07-kube-api-access-8zjh4" (OuterVolumeSpecName: "kube-api-access-8zjh4") pod "5df43d02-577e-48ea-b71d-12b96dda8e07" (UID: "5df43d02-577e-48ea-b71d-12b96dda8e07"). InnerVolumeSpecName "kube-api-access-8zjh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 01:09:24.961652 kubelet[4295]: I0707 01:09:24.961629 4295 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5df43d02-577e-48ea-b71d-12b96dda8e07" (UID: "5df43d02-577e-48ea-b71d-12b96dda8e07"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 01:09:25.060352 kubelet[4295]: I0707 01:09:25.060322 4295 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8zjh4\" (UniqueName: \"kubernetes.io/projected/5df43d02-577e-48ea-b71d-12b96dda8e07-kube-api-access-8zjh4\") on node \"ci-4344.1.1-a-a5852c4667\" DevicePath \"\"" Jul 7 01:09:25.060352 kubelet[4295]: I0707 01:09:25.060342 4295 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-ca-bundle\") on node \"ci-4344.1.1-a-a5852c4667\" DevicePath \"\"" Jul 7 01:09:25.060352 kubelet[4295]: I0707 01:09:25.060352 4295 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5df43d02-577e-48ea-b71d-12b96dda8e07-whisker-backend-key-pair\") on node \"ci-4344.1.1-a-a5852c4667\" DevicePath \"\"" Jul 7 01:09:25.084984 systemd[1]: Removed slice kubepods-besteffort-pod5df43d02_577e_48ea_b71d_12b96dda8e07.slice - libcontainer container kubepods-besteffort-pod5df43d02_577e_48ea_b71d_12b96dda8e07.slice. Jul 7 01:09:25.152887 kubelet[4295]: I0707 01:09:25.152832 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2rkcm" podStartSLOduration=1.282554193 podStartE2EDuration="12.152818306s" podCreationTimestamp="2025-07-07 01:09:13 +0000 UTC" firstStartedPulling="2025-07-07 01:09:13.687815656 +0000 UTC m=+18.691456012" lastFinishedPulling="2025-07-07 01:09:24.558079809 +0000 UTC m=+29.561720125" observedRunningTime="2025-07-07 01:09:25.152817226 +0000 UTC m=+30.156457582" watchObservedRunningTime="2025-07-07 01:09:25.152818306 +0000 UTC m=+30.156458662" Jul 7 01:09:25.179202 systemd[1]: Created slice kubepods-besteffort-pod3f9d3db7_73e9_45e4_afee_c428bc7dd63a.slice - libcontainer container kubepods-besteffort-pod3f9d3db7_73e9_45e4_afee_c428bc7dd63a.slice. 
Jul 7 01:09:25.261125 kubelet[4295]: I0707 01:09:25.261056 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f9d3db7-73e9-45e4-afee-c428bc7dd63a-whisker-backend-key-pair\") pod \"whisker-6d99d84ccc-cl9xw\" (UID: \"3f9d3db7-73e9-45e4-afee-c428bc7dd63a\") " pod="calico-system/whisker-6d99d84ccc-cl9xw" Jul 7 01:09:25.261125 kubelet[4295]: I0707 01:09:25.261092 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f9d3db7-73e9-45e4-afee-c428bc7dd63a-whisker-ca-bundle\") pod \"whisker-6d99d84ccc-cl9xw\" (UID: \"3f9d3db7-73e9-45e4-afee-c428bc7dd63a\") " pod="calico-system/whisker-6d99d84ccc-cl9xw" Jul 7 01:09:25.261125 kubelet[4295]: I0707 01:09:25.261115 4295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgp2\" (UniqueName: \"kubernetes.io/projected/3f9d3db7-73e9-45e4-afee-c428bc7dd63a-kube-api-access-jwgp2\") pod \"whisker-6d99d84ccc-cl9xw\" (UID: \"3f9d3db7-73e9-45e4-afee-c428bc7dd63a\") " pod="calico-system/whisker-6d99d84ccc-cl9xw" Jul 7 01:09:25.481670 containerd[2783]: time="2025-07-07T01:09:25.481590307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d99d84ccc-cl9xw,Uid:3f9d3db7-73e9-45e4-afee-c428bc7dd63a,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:25.540639 systemd[1]: var-lib-kubelet-pods-5df43d02\x2d577e\x2d48ea\x2db71d\x2d12b96dda8e07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zjh4.mount: Deactivated successfully. Jul 7 01:09:25.540719 systemd[1]: var-lib-kubelet-pods-5df43d02\x2d577e\x2d48ea\x2db71d\x2d12b96dda8e07-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 01:09:25.597717 systemd-networkd[2695]: cali1e645bd4a19: Link UP Jul 7 01:09:25.597900 systemd-networkd[2695]: cali1e645bd4a19: Gained carrier Jul 7 01:09:25.605074 containerd[2783]: 2025-07-07 01:09:25.513 [INFO][6001] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 01:09:25.605074 containerd[2783]: 2025-07-07 01:09:25.529 [INFO][6001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0 whisker-6d99d84ccc- calico-system 3f9d3db7-73e9-45e4-afee-c428bc7dd63a 885 0 2025-07-07 01:09:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d99d84ccc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 whisker-6d99d84ccc-cl9xw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1e645bd4a19 [] [] }} ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-" Jul 7 01:09:25.605074 containerd[2783]: 2025-07-07 01:09:25.529 [INFO][6001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605074 containerd[2783]: 2025-07-07 01:09:25.565 [INFO][6026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" HandleID="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Workload="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.565 [INFO][6026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" HandleID="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Workload="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003620e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-a5852c4667", "pod":"whisker-6d99d84ccc-cl9xw", "timestamp":"2025-07-07 01:09:25.5656046 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.565 [INFO][6026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.565 [INFO][6026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.565 [INFO][6026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.574 [INFO][6026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.577 [INFO][6026] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.580 [INFO][6026] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.581 [INFO][6026] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605424 containerd[2783]: 2025-07-07 01:09:25.583 [INFO][6026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.583 [INFO][6026] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.584 [INFO][6026] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67 Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.586 [INFO][6026] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.589 [INFO][6026] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.129/26] block=192.168.70.128/26 handle="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.589 [INFO][6026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.129/26] handle="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.589 [INFO][6026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:25.605620 containerd[2783]: 2025-07-07 01:09:25.589 [INFO][6026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.129/26] IPv6=[] ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" HandleID="k8s-pod-network.79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Workload="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605741 containerd[2783]: 2025-07-07 01:09:25.592 [INFO][6001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0", GenerateName:"whisker-6d99d84ccc-", Namespace:"calico-system", SelfLink:"", UID:"3f9d3db7-73e9-45e4-afee-c428bc7dd63a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d99d84ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"whisker-6d99d84ccc-cl9xw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1e645bd4a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:25.605741 containerd[2783]: 2025-07-07 01:09:25.592 [INFO][6001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.129/32] ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605804 containerd[2783]: 2025-07-07 01:09:25.592 [INFO][6001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e645bd4a19 ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605804 containerd[2783]: 2025-07-07 01:09:25.598 [INFO][6001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.605840 containerd[2783]: 2025-07-07 01:09:25.598 [INFO][6001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" 
Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0", GenerateName:"whisker-6d99d84ccc-", Namespace:"calico-system", SelfLink:"", UID:"3f9d3db7-73e9-45e4-afee-c428bc7dd63a", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d99d84ccc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67", Pod:"whisker-6d99d84ccc-cl9xw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.70.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1e645bd4a19", MAC:"32:23:f5:34:63:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:25.605889 containerd[2783]: 2025-07-07 01:09:25.603 [INFO][6001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" Namespace="calico-system" Pod="whisker-6d99d84ccc-cl9xw" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-whisker--6d99d84ccc--cl9xw-eth0" Jul 7 01:09:25.615444 containerd[2783]: time="2025-07-07T01:09:25.615414679Z" level=info msg="connecting to shim 79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67" address="unix:///run/containerd/s/321420dafb63ab4836edd5ab638ca6dc45f88f8046ff08a0727c98b0758f7eab" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:25.644654 systemd[1]: Started cri-containerd-79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67.scope - libcontainer container 79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67. 
Jul 7 01:09:25.670517 containerd[2783]: time="2025-07-07T01:09:25.670475246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d99d84ccc-cl9xw,Uid:3f9d3db7-73e9-45e4-afee-c428bc7dd63a,Namespace:calico-system,Attempt:0,} returns sandbox id \"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67\"" Jul 7 01:09:25.671510 containerd[2783]: time="2025-07-07T01:09:25.671492288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 01:09:26.144445 kubelet[4295]: I0707 01:09:26.144401 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:26.244008 systemd-networkd[2695]: vxlan.calico: Link UP Jul 7 01:09:26.244012 systemd-networkd[2695]: vxlan.calico: Gained carrier Jul 7 01:09:26.586049 containerd[2783]: time="2025-07-07T01:09:26.585934893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 01:09:26.586049 containerd[2783]: time="2025-07-07T01:09:26.585935853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:26.586683 containerd[2783]: time="2025-07-07T01:09:26.586657614Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:26.588292 containerd[2783]: time="2025-07-07T01:09:26.588267817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:26.588938 containerd[2783]: time="2025-07-07T01:09:26.588909058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 917.38645ms" Jul 7 01:09:26.588988 containerd[2783]: time="2025-07-07T01:09:26.588938098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 01:09:26.590405 containerd[2783]: time="2025-07-07T01:09:26.590383820Z" level=info msg="CreateContainer within sandbox \"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 01:09:26.593735 containerd[2783]: time="2025-07-07T01:09:26.593706185Z" level=info msg="Container 8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:26.596962 containerd[2783]: time="2025-07-07T01:09:26.596932910Z" level=info msg="CreateContainer within sandbox \"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb\"" Jul 7 01:09:26.597238 containerd[2783]: time="2025-07-07T01:09:26.597217030Z" level=info msg="StartContainer for \"8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb\"" Jul 7 01:09:26.598171 containerd[2783]: time="2025-07-07T01:09:26.598150552Z" level=info msg="connecting to shim 
8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb" address="unix:///run/containerd/s/321420dafb63ab4836edd5ab638ca6dc45f88f8046ff08a0727c98b0758f7eab" protocol=ttrpc version=3 Jul 7 01:09:26.623656 systemd[1]: Started cri-containerd-8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb.scope - libcontainer container 8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb. Jul 7 01:09:26.651186 containerd[2783]: time="2025-07-07T01:09:26.651155272Z" level=info msg="StartContainer for \"8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb\" returns successfully" Jul 7 01:09:26.651932 containerd[2783]: time="2025-07-07T01:09:26.651912073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 01:09:27.082279 kubelet[4295]: I0707 01:09:27.082243 4295 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df43d02-577e-48ea-b71d-12b96dda8e07" path="/var/lib/kubelet/pods/5df43d02-577e-48ea-b71d-12b96dda8e07/volumes" Jul 7 01:09:27.200605 systemd-networkd[2695]: cali1e645bd4a19: Gained IPv6LL Jul 7 01:09:27.648597 systemd-networkd[2695]: vxlan.calico: Gained IPv6LL Jul 7 01:09:28.004845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433712856.mount: Deactivated successfully. Jul 7 01:09:28.007957 containerd[2783]: time="2025-07-07T01:09:28.007910652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 01:09:28.008150 containerd[2783]: time="2025-07-07T01:09:28.007961812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:28.008708 containerd[2783]: time="2025-07-07T01:09:28.008685653Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:28.010355 containerd[2783]: time="2025-07-07T01:09:28.010328416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:28.011081 containerd[2783]: time="2025-07-07T01:09:28.011059737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.359115824s" Jul 7 01:09:28.011115 containerd[2783]: time="2025-07-07T01:09:28.011088417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 01:09:28.012734 containerd[2783]: time="2025-07-07T01:09:28.012716899Z" level=info msg="CreateContainer within sandbox \"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 01:09:28.016373 containerd[2783]: time="2025-07-07T01:09:28.016341424Z" level=info msg="Container 90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:28.020002 containerd[2783]: time="2025-07-07T01:09:28.019976749Z" 
level=info msg="CreateContainer within sandbox \"79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f\"" Jul 7 01:09:28.020338 containerd[2783]: time="2025-07-07T01:09:28.020318629Z" level=info msg="StartContainer for \"90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f\"" Jul 7 01:09:28.021277 containerd[2783]: time="2025-07-07T01:09:28.021254711Z" level=info msg="connecting to shim 90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f" address="unix:///run/containerd/s/321420dafb63ab4836edd5ab638ca6dc45f88f8046ff08a0727c98b0758f7eab" protocol=ttrpc version=3 Jul 7 01:09:28.051608 systemd[1]: Started cri-containerd-90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f.scope - libcontainer container 90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f. Jul 7 01:09:28.080186 containerd[2783]: time="2025-07-07T01:09:28.080160392Z" level=info msg="StartContainer for \"90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f\" returns successfully" Jul 7 01:09:28.158572 kubelet[4295]: I0707 01:09:28.158516 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d99d84ccc-cl9xw" podStartSLOduration=0.81815977 podStartE2EDuration="3.15850102s" podCreationTimestamp="2025-07-07 01:09:25 +0000 UTC" firstStartedPulling="2025-07-07 01:09:25.671291447 +0000 UTC m=+30.674931763" lastFinishedPulling="2025-07-07 01:09:28.011632697 +0000 UTC m=+33.015273013" observedRunningTime="2025-07-07 01:09:28.158136619 +0000 UTC m=+33.161776975" watchObservedRunningTime="2025-07-07 01:09:28.15850102 +0000 UTC m=+33.162141416" Jul 7 01:09:32.080688 containerd[2783]: time="2025-07-07T01:09:32.080635579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-s9htc,Uid:ffe33ac8-1739-4114-bd6d-77dcb02988b0,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:09:32.170462 systemd-networkd[2695]: calid99b8cdbeb9: Link UP Jul 7 01:09:32.170668 systemd-networkd[2695]: calid99b8cdbeb9: Gained carrier Jul 7 01:09:32.178643 containerd[2783]: 2025-07-07 01:09:32.109 [INFO][6724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0 calico-apiserver-dc648bb98- calico-apiserver ffe33ac8-1739-4114-bd6d-77dcb02988b0 825 0 2025-07-07 01:09:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc648bb98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 calico-apiserver-dc648bb98-s9htc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid99b8cdbeb9 [] [] }} ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-" Jul 7 01:09:32.178643 containerd[2783]: 2025-07-07 01:09:32.109 [INFO][6724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" 
WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.178643 containerd[2783]: 2025-07-07 01:09:32.130 [INFO][6750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" HandleID="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.130 [INFO][6750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" HandleID="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006167b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-a5852c4667", "pod":"calico-apiserver-dc648bb98-s9htc", "timestamp":"2025-07-07 01:09:32.130338356 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.130 [INFO][6750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.130 [INFO][6750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.130 [INFO][6750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.138 [INFO][6750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.141 [INFO][6750] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.144 [INFO][6750] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.146 [INFO][6750] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.178864 containerd[2783]: 2025-07-07 01:09:32.147 [INFO][6750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.148 [INFO][6750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.149 [INFO][6750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83 Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.163 [INFO][6750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" host="ci-4344.1.1-a-a5852c4667" Jul 7 
01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.167 [INFO][6750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.130/26] block=192.168.70.128/26 handle="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.167 [INFO][6750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.130/26] handle="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.167 [INFO][6750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:09:32.179040 containerd[2783]: 2025-07-07 01:09:32.167 [INFO][6750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.130/26] IPv6=[] ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" HandleID="k8s-pod-network.30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.179169 containerd[2783]: 2025-07-07 01:09:32.169 [INFO][6724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0", GenerateName:"calico-apiserver-dc648bb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffe33ac8-1739-4114-bd6d-77dcb02988b0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc648bb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"calico-apiserver-dc648bb98-s9htc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid99b8cdbeb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:32.179215 containerd[2783]: 2025-07-07 01:09:32.169 [INFO][6724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.130/32] ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.179215 containerd[2783]: 2025-07-07 01:09:32.169 [INFO][6724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid99b8cdbeb9 
ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.179215 containerd[2783]: 2025-07-07 01:09:32.171 [INFO][6724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.179336 containerd[2783]: 2025-07-07 01:09:32.171 [INFO][6724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0", GenerateName:"calico-apiserver-dc648bb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffe33ac8-1739-4114-bd6d-77dcb02988b0", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc648bb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83", Pod:"calico-apiserver-dc648bb98-s9htc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid99b8cdbeb9", MAC:"b6:5b:59:59:20:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:32.179386 containerd[2783]: 2025-07-07 01:09:32.177 [INFO][6724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-s9htc" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--s9htc-eth0" Jul 7 01:09:32.188987 containerd[2783]: time="2025-07-07T01:09:32.188952624Z" level=info msg="connecting to shim 30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83" address="unix:///run/containerd/s/8e43c74d522d5e9265396e94c73bd724c9d3ce1109f0fff74cf80242a3c7ac0a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:32.211613 systemd[1]: Started cri-containerd-30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83.scope - libcontainer container 30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83. 
Jul 7 01:09:32.237437 containerd[2783]: time="2025-07-07T01:09:32.237409120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-s9htc,Uid:ffe33ac8-1739-4114-bd6d-77dcb02988b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83\"" Jul 7 01:09:32.238381 containerd[2783]: time="2025-07-07T01:09:32.238361961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 01:09:33.080267 containerd[2783]: time="2025-07-07T01:09:33.080229372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-68h7b,Uid:d029f356-f508-4fb9-b349-4cfa825d4193,Namespace:calico-apiserver,Attempt:0,}" Jul 7 01:09:33.080370 containerd[2783]: time="2025-07-07T01:09:33.080312293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2whz,Uid:2a78675e-0157-4efe-b211-95cf133f1f55,Namespace:kube-system,Attempt:0,}" Jul 7 01:09:33.170992 systemd-networkd[2695]: cali64610dee04c: Link UP Jul 7 01:09:33.171272 systemd-networkd[2695]: cali64610dee04c: Gained carrier Jul 7 01:09:33.179654 containerd[2783]: 2025-07-07 01:09:33.122 [INFO][6841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0 coredns-668d6bf9bc- kube-system 2a78675e-0157-4efe-b211-95cf133f1f55 818 0 2025-07-07 01:09:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 coredns-668d6bf9bc-d2whz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64610dee04c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-" Jul 7 01:09:33.179654 containerd[2783]: 2025-07-07 01:09:33.122 [INFO][6841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.179654 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" HandleID="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6888] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" HandleID="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000455440), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4344.1.1-a-a5852c4667", "pod":"coredns-668d6bf9bc-d2whz", "timestamp":"2025-07-07 01:09:33.142385922 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.150 [INFO][6888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.153 [INFO][6888] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.157 [INFO][6888] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.159 [INFO][6888] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.179969 containerd[2783]: 2025-07-07 01:09:33.160 [INFO][6888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.160 [INFO][6888] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.161 [INFO][6888] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.164 [INFO][6888] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6888] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.131/26] block=192.168.70.128/26 handle="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.131/26] handle="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:33.180155 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.131/26] IPv6=[] ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" HandleID="k8s-pod-network.fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.169 [INFO][6841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a78675e-0157-4efe-b211-95cf133f1f55", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"coredns-668d6bf9bc-d2whz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64610dee04c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.169 [INFO][6841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.131/32] ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.169 [INFO][6841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64610dee04c ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.171 [INFO][6841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" 
WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.172 [INFO][6841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2a78675e-0157-4efe-b211-95cf133f1f55", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e", Pod:"coredns-668d6bf9bc-d2whz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64610dee04c", MAC:"72:19:1d:f5:d5:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:33.180285 containerd[2783]: 2025-07-07 01:09:33.178 [INFO][6841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" Namespace="kube-system" Pod="coredns-668d6bf9bc-d2whz" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--d2whz-eth0" Jul 7 01:09:33.189199 containerd[2783]: time="2025-07-07T01:09:33.189171134Z" level=info msg="connecting to shim fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e" address="unix:///run/containerd/s/0451a334ae5a96bfd8dca9e60f14e825565b49ae4e5c28e5432182a7b401ee16" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:33.212678 systemd[1]: Started cri-containerd-fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e.scope - libcontainer container fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e. 
Jul 7 01:09:33.238380 containerd[2783]: time="2025-07-07T01:09:33.238353748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d2whz,Uid:2a78675e-0157-4efe-b211-95cf133f1f55,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e\"" Jul 7 01:09:33.240190 containerd[2783]: time="2025-07-07T01:09:33.240169750Z" level=info msg="CreateContainer within sandbox \"fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:09:33.244513 containerd[2783]: time="2025-07-07T01:09:33.244477195Z" level=info msg="Container 456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:33.247993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254581247.mount: Deactivated successfully. Jul 7 01:09:33.248827 containerd[2783]: time="2025-07-07T01:09:33.248800040Z" level=info msg="CreateContainer within sandbox \"fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98\"" Jul 7 01:09:33.249251 containerd[2783]: time="2025-07-07T01:09:33.249236960Z" level=info msg="StartContainer for \"456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98\"" Jul 7 01:09:33.249960 containerd[2783]: time="2025-07-07T01:09:33.249940761Z" level=info msg="connecting to shim 456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98" address="unix:///run/containerd/s/0451a334ae5a96bfd8dca9e60f14e825565b49ae4e5c28e5432182a7b401ee16" protocol=ttrpc version=3 Jul 7 01:09:33.271565 systemd-networkd[2695]: cali3ab5b345d8d: Link UP Jul 7 01:09:33.271776 systemd-networkd[2695]: cali3ab5b345d8d: Gained carrier Jul 7 01:09:33.279683 systemd[1]: Started cri-containerd-456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98.scope - libcontainer container 456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98. 
Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.122 [INFO][6835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0 calico-apiserver-dc648bb98- calico-apiserver d029f356-f508-4fb9-b349-4cfa825d4193 822 0 2025-07-07 01:09:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc648bb98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 calico-apiserver-dc648bb98-68h7b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3ab5b345d8d [] [] }} ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.122 [INFO][6835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" HandleID="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" HandleID="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40007a2840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4344.1.1-a-a5852c4667", "pod":"calico-apiserver-dc648bb98-68h7b", "timestamp":"2025-07-07 01:09:33.142392522 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.142 [INFO][6886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.168 [INFO][6886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.250 [INFO][6886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.253 [INFO][6886] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.257 [INFO][6886] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.259 [INFO][6886] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.260 [INFO][6886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.260 [INFO][6886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.261 [INFO][6886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950 Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.264 [INFO][6886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.267 [INFO][6886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.132/26] block=192.168.70.128/26 handle="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.267 [INFO][6886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.132/26] handle="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.267 [INFO][6886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:33.280436 containerd[2783]: 2025-07-07 01:09:33.267 [INFO][6886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.132/26] IPv6=[] ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" HandleID="k8s-pod-network.4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.269 [INFO][6835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0", GenerateName:"calico-apiserver-dc648bb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"d029f356-f508-4fb9-b349-4cfa825d4193", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc648bb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"calico-apiserver-dc648bb98-68h7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ab5b345d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.269 [INFO][6835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.132/32] ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.269 [INFO][6835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ab5b345d8d ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.272 [INFO][6835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.273 [INFO][6835] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0", GenerateName:"calico-apiserver-dc648bb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"d029f356-f508-4fb9-b349-4cfa825d4193", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc648bb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950", Pod:"calico-apiserver-dc648bb98-68h7b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.70.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ab5b345d8d", MAC:"2e:f3:f5:f2:c6:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:33.280859 containerd[2783]: 2025-07-07 01:09:33.278 [INFO][6835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" Namespace="calico-apiserver" Pod="calico-apiserver-dc648bb98-68h7b" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--apiserver--dc648bb98--68h7b-eth0" Jul 7 01:09:33.290900 containerd[2783]: time="2025-07-07T01:09:33.290862847Z" level=info msg="connecting to shim 4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950" address="unix:///run/containerd/s/78dc1075fabed05db5270e550beef16e6a5d1e0d3c15dc31521d39801b4bf384" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:33.300074 containerd[2783]: time="2025-07-07T01:09:33.300034977Z" level=info msg="StartContainer for \"456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98\" returns successfully" Jul 7 01:09:33.322635 systemd[1]: Started cri-containerd-4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950.scope - libcontainer container 4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950. 
Jul 7 01:09:33.348945 containerd[2783]: time="2025-07-07T01:09:33.348917831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc648bb98-68h7b,Uid:d029f356-f508-4fb9-b349-4cfa825d4193,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950\"" Jul 7 01:09:33.472599 systemd-networkd[2695]: calid99b8cdbeb9: Gained IPv6LL Jul 7 01:09:33.829499 containerd[2783]: time="2025-07-07T01:09:33.829451485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:33.829499 containerd[2783]: time="2025-07-07T01:09:33.829496605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 01:09:33.830077 containerd[2783]: time="2025-07-07T01:09:33.830055366Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:33.831716 containerd[2783]: time="2025-07-07T01:09:33.831690528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:33.832300 containerd[2783]: time="2025-07-07T01:09:33.832272288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.593882327s" Jul 7 01:09:33.832324 containerd[2783]: time="2025-07-07T01:09:33.832305648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 01:09:33.833126 containerd[2783]: time="2025-07-07T01:09:33.833060929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 01:09:33.833869 containerd[2783]: time="2025-07-07T01:09:33.833849290Z" level=info msg="CreateContainer within sandbox \"30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 01:09:33.837275 containerd[2783]: time="2025-07-07T01:09:33.837250774Z" level=info msg="Container 3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:33.840387 containerd[2783]: time="2025-07-07T01:09:33.840362857Z" level=info msg="CreateContainer within sandbox \"30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678\"" Jul 7 01:09:33.840726 containerd[2783]: time="2025-07-07T01:09:33.840705418Z" level=info msg="StartContainer for \"3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678\"" Jul 7 01:09:33.841619 containerd[2783]: time="2025-07-07T01:09:33.841598179Z" level=info msg="connecting to shim 3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678" address="unix:///run/containerd/s/8e43c74d522d5e9265396e94c73bd724c9d3ce1109f0fff74cf80242a3c7ac0a" protocol=ttrpc version=3 Jul 7 01:09:33.866664 
systemd[1]: Started cri-containerd-3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678.scope - libcontainer container 3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678. Jul 7 01:09:33.895375 containerd[2783]: time="2025-07-07T01:09:33.895349278Z" level=info msg="StartContainer for \"3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678\" returns successfully" Jul 7 01:09:34.056388 containerd[2783]: time="2025-07-07T01:09:34.056347135Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:34.056514 containerd[2783]: time="2025-07-07T01:09:34.056347535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 01:09:34.058513 containerd[2783]: time="2025-07-07T01:09:34.058475857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 225.388928ms" Jul 7 01:09:34.058552 containerd[2783]: time="2025-07-07T01:09:34.058515377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 01:09:34.060100 containerd[2783]: time="2025-07-07T01:09:34.060076859Z" level=info msg="CreateContainer within sandbox \"4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 01:09:34.063605 containerd[2783]: time="2025-07-07T01:09:34.063579463Z" level=info msg="Container 7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:34.066820 containerd[2783]: time="2025-07-07T01:09:34.066795546Z" level=info msg="CreateContainer within sandbox \"4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206\"" Jul 7 01:09:34.067164 containerd[2783]: time="2025-07-07T01:09:34.067142866Z" level=info msg="StartContainer for \"7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206\"" Jul 7 01:09:34.068127 containerd[2783]: time="2025-07-07T01:09:34.068105787Z" level=info msg="connecting to shim 7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206" address="unix:///run/containerd/s/78dc1075fabed05db5270e550beef16e6a5d1e0d3c15dc31521d39801b4bf384" protocol=ttrpc version=3 Jul 7 01:09:34.094612 systemd[1]: Started cri-containerd-7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206.scope - libcontainer container 7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206. 
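Note: both calico/apiserver pulls above resolve to the same image ID. The first reads 44,517,149 bytes and completes in 1.593882327s; the second finishes in 225.388928ms after reading only 77 bytes, consistent with the image already being present locally so that only the manifest is re-checked (hence the ImageUpdate rather than ImageCreate event). A back-of-the-envelope Go check of the first pull's effective throughput (illustrative arithmetic only; "bytes read" is likely the compressed transfer size, so this is a rough figure):

package main

import "fmt"

func main() {
	const bytesRead = 44517149.0 // bytes read for the first apiserver pull
	const seconds = 1.593882327  // reported pull duration
	fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~26.6 MiB/s
}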
Jul 7 01:09:34.124338 containerd[2783]: time="2025-07-07T01:09:34.124310567Z" level=info msg="StartContainer for \"7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206\" returns successfully" Jul 7 01:09:34.170359 kubelet[4295]: I0707 01:09:34.170310 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d2whz" podStartSLOduration=33.170292976 podStartE2EDuration="33.170292976s" podCreationTimestamp="2025-07-07 01:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:09:34.169746376 +0000 UTC m=+39.173386732" watchObservedRunningTime="2025-07-07 01:09:34.170292976 +0000 UTC m=+39.173933332" Jul 7 01:09:34.176851 kubelet[4295]: I0707 01:09:34.176808 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dc648bb98-s9htc" podStartSLOduration=24.582066975 podStartE2EDuration="26.176797063s" podCreationTimestamp="2025-07-07 01:09:08 +0000 UTC" firstStartedPulling="2025-07-07 01:09:32.238198961 +0000 UTC m=+37.241839277" lastFinishedPulling="2025-07-07 01:09:33.832929009 +0000 UTC m=+38.836569365" observedRunningTime="2025-07-07 01:09:34.176400703 +0000 UTC m=+39.180041059" watchObservedRunningTime="2025-07-07 01:09:34.176797063 +0000 UTC m=+39.180437419" Jul 7 01:09:34.688606 systemd-networkd[2695]: cali3ab5b345d8d: Gained IPv6LL Jul 7 01:09:35.080900 containerd[2783]: time="2025-07-07T01:09:35.080813265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-268kj,Uid:850a1808-16bf-4143-8ff8-9afb8ec843dc,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:35.136660 systemd-networkd[2695]: cali64610dee04c: Gained IPv6LL Jul 7 01:09:35.165950 kubelet[4295]: I0707 01:09:35.165927 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:35.165998 kubelet[4295]: I0707 01:09:35.165926 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:35.172529 systemd-networkd[2695]: calied2c12635cd: Link UP Jul 7 01:09:35.172754 systemd-networkd[2695]: calied2c12635cd: Gained carrier Jul 7 01:09:35.179405 kubelet[4295]: I0707 01:09:35.179358 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dc648bb98-68h7b" podStartSLOduration=26.47008478 podStartE2EDuration="27.179340366s" podCreationTimestamp="2025-07-07 01:09:08 +0000 UTC" firstStartedPulling="2025-07-07 01:09:33.349815312 +0000 UTC m=+38.353455668" lastFinishedPulling="2025-07-07 01:09:34.059070898 +0000 UTC m=+39.062711254" observedRunningTime="2025-07-07 01:09:34.202579971 +0000 UTC m=+39.206220327" watchObservedRunningTime="2025-07-07 01:09:35.179340366 +0000 UTC m=+40.182980722" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.109 [INFO][7232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0 csi-node-driver- calico-system 850a1808-16bf-4143-8ff8-9afb8ec843dc 691 0 2025-07-07 01:09:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 csi-node-driver-268kj eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] calied2c12635cd [] [] }} ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.109 [INFO][7232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.129 [INFO][7260] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" HandleID="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Workload="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.129 [INFO][7260] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" HandleID="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Workload="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042fd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-a5852c4667", "pod":"csi-node-driver-268kj", "timestamp":"2025-07-07 01:09:35.129755756 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.129 [INFO][7260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.129 [INFO][7260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.129 [INFO][7260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.137 [INFO][7260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.155 [INFO][7260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.158 [INFO][7260] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.160 [INFO][7260] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.161 [INFO][7260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.161 [INFO][7260] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.162 [INFO][7260] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.165 [INFO][7260] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.168 [INFO][7260] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.133/26] block=192.168.70.128/26 handle="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.168 [INFO][7260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.133/26] handle="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.168 [INFO][7260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:35.180504 containerd[2783]: 2025-07-07 01:09:35.168 [INFO][7260] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.133/26] IPv6=[] ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" HandleID="k8s-pod-network.d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Workload="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.170 [INFO][7232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"850a1808-16bf-4143-8ff8-9afb8ec843dc", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"csi-node-driver-268kj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2c12635cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.170 [INFO][7232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.133/32] ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.170 [INFO][7232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied2c12635cd ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.172 [INFO][7232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.173 [INFO][7232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"850a1808-16bf-4143-8ff8-9afb8ec843dc", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa", Pod:"csi-node-driver-268kj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calied2c12635cd", MAC:"6a:25:f2:20:68:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:35.180957 containerd[2783]: 2025-07-07 01:09:35.178 [INFO][7232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" Namespace="calico-system" Pod="csi-node-driver-268kj" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-csi--node--driver--268kj-eth0" Jul 7 01:09:35.191543 containerd[2783]: time="2025-07-07T01:09:35.191508979Z" level=info msg="connecting to shim d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa" address="unix:///run/containerd/s/7a8704c135a5656dc6bf259ce35a02af0d040c66a8479acd2713ecee1a96774a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:35.225615 systemd[1]: Started cri-containerd-d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa.scope - libcontainer container d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa. 
Jul 7 01:09:35.243933 containerd[2783]: time="2025-07-07T01:09:35.243903713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-268kj,Uid:850a1808-16bf-4143-8ff8-9afb8ec843dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa\"" Jul 7 01:09:35.245033 containerd[2783]: time="2025-07-07T01:09:35.245004954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 01:09:36.080787 containerd[2783]: time="2025-07-07T01:09:36.080748729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgbsq,Uid:5bb0abc2-d84a-419a-943f-f21761ed7bef,Namespace:kube-system,Attempt:0,}" Jul 7 01:09:36.080889 containerd[2783]: time="2025-07-07T01:09:36.080804009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lzdth,Uid:3fbf6309-7289-4afb-ad59-b1950fe394eb,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:36.080936 containerd[2783]: time="2025-07-07T01:09:36.080749209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7799fb6bf9-w8x42,Uid:ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7,Namespace:calico-system,Attempt:0,}" Jul 7 01:09:36.164104 systemd-networkd[2695]: cali39f1868b691: Link UP Jul 7 01:09:36.164356 systemd-networkd[2695]: cali39f1868b691: Gained carrier Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.115 [INFO][7384] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0 coredns-668d6bf9bc- kube-system 5bb0abc2-d84a-419a-943f-f21761ed7bef 826 0 2025-07-07 01:09:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 coredns-668d6bf9bc-tgbsq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali39f1868b691 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.115 [INFO][7384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.135 [INFO][7461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" HandleID="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.135 [INFO][7461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" HandleID="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042f330), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4344.1.1-a-a5852c4667", "pod":"coredns-668d6bf9bc-tgbsq", "timestamp":"2025-07-07 01:09:36.134990342 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.135 [INFO][7461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.135 [INFO][7461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.135 [INFO][7461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.143 [INFO][7461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.146 [INFO][7461] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.149 [INFO][7461] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.150 [INFO][7461] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.152 [INFO][7461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.152 [INFO][7461] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.154 [INFO][7461] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6 Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.156 [INFO][7461] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.160 [INFO][7461] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.134/26] block=192.168.70.128/26 handle="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.160 [INFO][7461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.134/26] handle="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.160 [INFO][7461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:36.172187 containerd[2783]: 2025-07-07 01:09:36.161 [INFO][7461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.134/26] IPv6=[] ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" HandleID="k8s-pod-network.22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Workload="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.162 [INFO][7384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bb0abc2-d84a-419a-943f-f21761ed7bef", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"coredns-668d6bf9bc-tgbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f1868b691", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.162 [INFO][7384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.134/32] ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.162 [INFO][7384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39f1868b691 ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.164 [INFO][7384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" 
WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.164 [INFO][7384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5bb0abc2-d84a-419a-943f-f21761ed7bef", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6", Pod:"coredns-668d6bf9bc-tgbsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.70.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39f1868b691", MAC:"aa:7d:4d:d2:b5:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.172645 containerd[2783]: 2025-07-07 01:09:36.171 [INFO][7384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-tgbsq" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-coredns--668d6bf9bc--tgbsq-eth0" Jul 7 01:09:36.182264 containerd[2783]: time="2025-07-07T01:09:36.182230109Z" level=info msg="connecting to shim 22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6" address="unix:///run/containerd/s/42cf08b79a3c92a8f62e82089eee96fa52dbf137d8be4eb6b561972638542615" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:36.212687 systemd[1]: Started cri-containerd-22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6.scope - libcontainer container 22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6. 
Jul 7 01:09:36.239265 containerd[2783]: time="2025-07-07T01:09:36.239234485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tgbsq,Uid:5bb0abc2-d84a-419a-943f-f21761ed7bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6\"" Jul 7 01:09:36.241105 containerd[2783]: time="2025-07-07T01:09:36.241066807Z" level=info msg="CreateContainer within sandbox \"22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 01:09:36.245379 containerd[2783]: time="2025-07-07T01:09:36.245351571Z" level=info msg="Container adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:36.247983 containerd[2783]: time="2025-07-07T01:09:36.247958494Z" level=info msg="CreateContainer within sandbox \"22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df\"" Jul 7 01:09:36.248312 containerd[2783]: time="2025-07-07T01:09:36.248293534Z" level=info msg="StartContainer for \"adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df\"" Jul 7 01:09:36.249036 containerd[2783]: time="2025-07-07T01:09:36.249015055Z" level=info msg="connecting to shim adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df" address="unix:///run/containerd/s/42cf08b79a3c92a8f62e82089eee96fa52dbf137d8be4eb6b561972638542615" protocol=ttrpc version=3 Jul 7 01:09:36.249037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204503105.mount: Deactivated successfully. Jul 7 01:09:36.260485 containerd[2783]: time="2025-07-07T01:09:36.260458906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:36.260547 containerd[2783]: time="2025-07-07T01:09:36.260494386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 01:09:36.261143 containerd[2783]: time="2025-07-07T01:09:36.261121867Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:36.262653 containerd[2783]: time="2025-07-07T01:09:36.262630268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:36.263296 containerd[2783]: time="2025-07-07T01:09:36.263268149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.018224715s" Jul 7 01:09:36.263330 containerd[2783]: time="2025-07-07T01:09:36.263303549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 01:09:36.263745 systemd-networkd[2695]: cali275dc8980b9: Link UP Jul 7 01:09:36.264031 systemd-networkd[2695]: cali275dc8980b9: Gained carrier Jul 7 01:09:36.264794 containerd[2783]: 
time="2025-07-07T01:09:36.264769591Z" level=info msg="CreateContainer within sandbox \"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 01:09:36.269810 containerd[2783]: time="2025-07-07T01:09:36.269781476Z" level=info msg="Container f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:36.274187 containerd[2783]: time="2025-07-07T01:09:36.274114880Z" level=info msg="CreateContainer within sandbox \"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645\"" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.118 [INFO][7403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0 calico-kube-controllers-7799fb6bf9- calico-system ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7 824 0 2025-07-07 01:09:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7799fb6bf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 calico-kube-controllers-7799fb6bf9-w8x42 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali275dc8980b9 [] [] }} ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.118 [INFO][7403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.137 [INFO][7468] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" HandleID="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.137 [INFO][7468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" HandleID="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363e20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-a5852c4667", "pod":"calico-kube-controllers-7799fb6bf9-w8x42", "timestamp":"2025-07-07 01:09:36.137218945 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.137 [INFO][7468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.161 [INFO][7468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.161 [INFO][7468] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.243 [INFO][7468] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.246 [INFO][7468] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.249 [INFO][7468] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.251 [INFO][7468] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.253 [INFO][7468] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.253 [INFO][7468] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.254 [INFO][7468] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.256 [INFO][7468] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7468] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.135/26] block=192.168.70.128/26 handle="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7468] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.135/26] handle="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 01:09:36.274447 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.135/26] IPv6=[] ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" HandleID="k8s-pod-network.f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Workload="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.262 [INFO][7403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0", GenerateName:"calico-kube-controllers-7799fb6bf9-", Namespace:"calico-system", SelfLink:"", UID:"ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7799fb6bf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"calico-kube-controllers-7799fb6bf9-w8x42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali275dc8980b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.262 [INFO][7403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.135/32] ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.262 [INFO][7403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali275dc8980b9 ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.264 [INFO][7403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" 
Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.264 [INFO][7403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0", GenerateName:"calico-kube-controllers-7799fb6bf9-", Namespace:"calico-system", SelfLink:"", UID:"ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7799fb6bf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb", Pod:"calico-kube-controllers-7799fb6bf9-w8x42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.70.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali275dc8980b9", MAC:"c2:06:8e:25:a9:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.274949 containerd[2783]: 2025-07-07 01:09:36.273 [INFO][7403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" Namespace="calico-system" Pod="calico-kube-controllers-7799fb6bf9-w8x42" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-calico--kube--controllers--7799fb6bf9--w8x42-eth0" Jul 7 01:09:36.274949 containerd[2783]: time="2025-07-07T01:09:36.274781120Z" level=info msg="StartContainer for \"f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645\"" Jul 7 01:09:36.274637 systemd[1]: Started cri-containerd-adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df.scope - libcontainer container adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df. 
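The ipam walks above ([7260], [7461] and now [7468]) all land on the same host-affine block 192.168.70.128/26 and claim 192.168.70.133, .134 and .135 in order. A minimal sketch of that "first free address in the affine block" step, with a hypothetical nextFree helper and the assumption that the block's lower addresses were already taken by endpoints networked earlier in the boot; the real logic lives in Calico's ipam package.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree is a hypothetical stand-in for the block-allocation step the
// ipam.go lines describe: scan the affine block and return the first
// address not yet recorded as claimed.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.70.128/26")
	used := map[netip.Addr]bool{}
	// Assumed already claimed: the block's lower addresses plus the .133 and
	// .134 assignments logged earlier in this section.
	for _, s := range []string{
		"192.168.70.128", "192.168.70.129", "192.168.70.130",
		"192.168.70.131", "192.168.70.132", "192.168.70.133",
		"192.168.70.134",
	} {
		used[netip.MustParseAddr(s)] = true
	}
	ip, ok := nextFree(block, used)
	fmt.Println(ip, ok) // 192.168.70.135 true, matching the [7468] claim above
}
```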
Jul 7 01:09:36.277183 containerd[2783]: time="2025-07-07T01:09:36.277151163Z" level=info msg="connecting to shim f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645" address="unix:///run/containerd/s/7a8704c135a5656dc6bf259ce35a02af0d040c66a8479acd2713ecee1a96774a" protocol=ttrpc version=3 Jul 7 01:09:36.285705 containerd[2783]: time="2025-07-07T01:09:36.285669131Z" level=info msg="connecting to shim f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb" address="unix:///run/containerd/s/f4a4281ed5b3d7ced3a284808b895929f3082b915820deb0aaa7381b0781a0ab" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:36.287521 systemd[1]: Started cri-containerd-f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645.scope - libcontainer container f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645. Jul 7 01:09:36.295349 containerd[2783]: time="2025-07-07T01:09:36.295316181Z" level=info msg="StartContainer for \"adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df\" returns successfully" Jul 7 01:09:36.297848 systemd[1]: Started cri-containerd-f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb.scope - libcontainer container f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb. Jul 7 01:09:36.314958 containerd[2783]: time="2025-07-07T01:09:36.314927240Z" level=info msg="StartContainer for \"f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645\" returns successfully" Jul 7 01:09:36.315796 containerd[2783]: time="2025-07-07T01:09:36.315775441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 01:09:36.323826 containerd[2783]: time="2025-07-07T01:09:36.323801769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7799fb6bf9-w8x42,Uid:ec64afa8-a08c-4cee-a39d-b0e2bcc7abd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb\"" Jul 7 01:09:36.364843 systemd-networkd[2695]: caliaf31edae620: Link UP Jul 7 01:09:36.365024 systemd-networkd[2695]: caliaf31edae620: Gained carrier Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.118 [INFO][7386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0 goldmane-768f4c5c69- calico-system 3fbf6309-7289-4afb-ad59-b1950fe394eb 823 0 2025-07-07 01:09:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4344.1.1-a-a5852c4667 goldmane-768f4c5c69-lzdth eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaf31edae620 [] [] }} ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.118 [INFO][7386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.138 [INFO][7469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" HandleID="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Workload="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.138 [INFO][7469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" HandleID="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Workload="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b6e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4344.1.1-a-a5852c4667", "pod":"goldmane-768f4c5c69-lzdth", "timestamp":"2025-07-07 01:09:36.138154225 +0000 UTC"}, Hostname:"ci-4344.1.1-a-a5852c4667", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.138 [INFO][7469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.260 [INFO][7469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4344.1.1-a-a5852c4667' Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.344 [INFO][7469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.347 [INFO][7469] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.350 [INFO][7469] ipam/ipam.go 511: Trying affinity for 192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.352 [INFO][7469] ipam/ipam.go 158: Attempting to load block cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.353 [INFO][7469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.70.128/26 host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.353 [INFO][7469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.70.128/26 handle="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.355 [INFO][7469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9 Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.357 [INFO][7469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.70.128/26 handle="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.361 [INFO][7469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.70.136/26] block=192.168.70.128/26 handle="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" 
host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.361 [INFO][7469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.70.136/26] handle="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" host="ci-4344.1.1-a-a5852c4667" Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.361 [INFO][7469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 01:09:36.380476 containerd[2783]: 2025-07-07 01:09:36.361 [INFO][7469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.136/26] IPv6=[] ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" HandleID="k8s-pod-network.b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Workload="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.363 [INFO][7386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3fbf6309-7289-4afb-ad59-b1950fe394eb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"", Pod:"goldmane-768f4c5c69-lzdth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf31edae620", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.363 [INFO][7386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.70.136/32] ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.363 [INFO][7386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf31edae620 ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.365 [INFO][7386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.365 [INFO][7386] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"3fbf6309-7289-4afb-ad59-b1950fe394eb", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 1, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4344.1.1-a-a5852c4667", ContainerID:"b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9", Pod:"goldmane-768f4c5c69-lzdth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.70.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf31edae620", MAC:"96:97:95:60:89:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 01:09:36.380892 containerd[2783]: 2025-07-07 01:09:36.378 [INFO][7386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-lzdth" WorkloadEndpoint="ci--4344.1.1--a--a5852c4667-k8s-goldmane--768f4c5c69--lzdth-eth0" Jul 7 01:09:36.390584 containerd[2783]: time="2025-07-07T01:09:36.390554675Z" level=info msg="connecting to shim b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9" address="unix:///run/containerd/s/1be253aedae0eac62bf0db1fc9c5301100f958dc57b1bc13a83d85be59f7497b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 01:09:36.416588 systemd-networkd[2695]: calied2c12635cd: Gained IPv6LL Jul 7 01:09:36.422628 systemd[1]: Started cri-containerd-b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9.scope - libcontainer container b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9. 
Jul 7 01:09:36.448543 containerd[2783]: time="2025-07-07T01:09:36.448511252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lzdth,Uid:3fbf6309-7289-4afb-ad59-b1950fe394eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9\"" Jul 7 01:09:37.196171 kubelet[4295]: I0707 01:09:37.196118 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tgbsq" podStartSLOduration=36.196102904 podStartE2EDuration="36.196102904s" podCreationTimestamp="2025-07-07 01:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 01:09:37.195762744 +0000 UTC m=+42.199403100" watchObservedRunningTime="2025-07-07 01:09:37.196102904 +0000 UTC m=+42.199743220" Jul 7 01:09:37.398832 containerd[2783]: time="2025-07-07T01:09:37.398794577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:37.399235 containerd[2783]: time="2025-07-07T01:09:37.398811577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 01:09:37.399492 containerd[2783]: time="2025-07-07T01:09:37.399466658Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:37.401035 containerd[2783]: time="2025-07-07T01:09:37.401015979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:37.401653 containerd[2783]: time="2025-07-07T01:09:37.401628780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.085824459s" Jul 7 01:09:37.401677 containerd[2783]: time="2025-07-07T01:09:37.401662220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 01:09:37.402367 containerd[2783]: time="2025-07-07T01:09:37.402350340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 01:09:37.403331 containerd[2783]: time="2025-07-07T01:09:37.403311261Z" level=info msg="CreateContainer within sandbox \"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 01:09:37.407866 containerd[2783]: time="2025-07-07T01:09:37.407843865Z" level=info msg="Container e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:37.412202 containerd[2783]: time="2025-07-07T01:09:37.412179910Z" level=info msg="CreateContainer within sandbox \"d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns 
container id \"e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1\"" Jul 7 01:09:37.412596 containerd[2783]: time="2025-07-07T01:09:37.412570110Z" level=info msg="StartContainer for \"e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1\"" Jul 7 01:09:37.413965 containerd[2783]: time="2025-07-07T01:09:37.413942711Z" level=info msg="connecting to shim e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1" address="unix:///run/containerd/s/7a8704c135a5656dc6bf259ce35a02af0d040c66a8479acd2713ecee1a96774a" protocol=ttrpc version=3 Jul 7 01:09:37.439611 systemd[1]: Started cri-containerd-e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1.scope - libcontainer container e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1. Jul 7 01:09:37.470526 containerd[2783]: time="2025-07-07T01:09:37.470445685Z" level=info msg="StartContainer for \"e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1\" returns successfully" Jul 7 01:09:37.504604 systemd-networkd[2695]: cali39f1868b691: Gained IPv6LL Jul 7 01:09:37.632531 systemd-networkd[2695]: cali275dc8980b9: Gained IPv6LL Jul 7 01:09:38.080548 systemd-networkd[2695]: caliaf31edae620: Gained IPv6LL Jul 7 01:09:38.128845 kubelet[4295]: I0707 01:09:38.128820 4295 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 01:09:38.128845 kubelet[4295]: I0707 01:09:38.128847 4295 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 01:09:38.185239 kubelet[4295]: I0707 01:09:38.185187 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-268kj" podStartSLOduration=23.027727573 podStartE2EDuration="25.185172439s" podCreationTimestamp="2025-07-07 01:09:13 +0000 UTC" firstStartedPulling="2025-07-07 01:09:35.244796154 +0000 UTC m=+40.248436510" lastFinishedPulling="2025-07-07 01:09:37.40224106 +0000 UTC m=+42.405881376" observedRunningTime="2025-07-07 01:09:38.184819079 +0000 UTC m=+43.188459435" watchObservedRunningTime="2025-07-07 01:09:38.185172439 +0000 UTC m=+43.188812795" Jul 7 01:09:39.032870 containerd[2783]: time="2025-07-07T01:09:39.032819377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:39.033288 containerd[2783]: time="2025-07-07T01:09:39.032834337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 01:09:39.033577 containerd[2783]: time="2025-07-07T01:09:39.033531297Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:39.035018 containerd[2783]: time="2025-07-07T01:09:39.034994459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:39.035654 containerd[2783]: time="2025-07-07T01:09:39.035630739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.633254279s" Jul 7 01:09:39.035692 containerd[2783]: time="2025-07-07T01:09:39.035661339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 01:09:39.036425 containerd[2783]: time="2025-07-07T01:09:39.036402380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 01:09:39.041124 containerd[2783]: time="2025-07-07T01:09:39.041101344Z" level=info msg="CreateContainer within sandbox \"f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 01:09:39.045101 containerd[2783]: time="2025-07-07T01:09:39.045078788Z" level=info msg="Container ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:39.048708 containerd[2783]: time="2025-07-07T01:09:39.048677711Z" level=info msg="CreateContainer within sandbox \"f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\"" Jul 7 01:09:39.049000 containerd[2783]: time="2025-07-07T01:09:39.048979791Z" level=info msg="StartContainer for \"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\"" Jul 7 01:09:39.049959 containerd[2783]: time="2025-07-07T01:09:39.049930912Z" level=info msg="connecting to shim ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a" address="unix:///run/containerd/s/f4a4281ed5b3d7ced3a284808b895929f3082b915820deb0aaa7381b0781a0ab" protocol=ttrpc version=3 Jul 7 01:09:39.073673 systemd[1]: Started cri-containerd-ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a.scope - libcontainer container ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a. 
Jul 7 01:09:39.102407 containerd[2783]: time="2025-07-07T01:09:39.102374598Z" level=info msg="StartContainer for \"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" returns successfully" Jul 7 01:09:39.187643 kubelet[4295]: I0707 01:09:39.187589 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7799fb6bf9-w8x42" podStartSLOduration=23.475982904 podStartE2EDuration="26.187575314s" podCreationTimestamp="2025-07-07 01:09:13 +0000 UTC" firstStartedPulling="2025-07-07 01:09:36.32468461 +0000 UTC m=+41.328324966" lastFinishedPulling="2025-07-07 01:09:39.03627706 +0000 UTC m=+44.039917376" observedRunningTime="2025-07-07 01:09:39.187264194 +0000 UTC m=+44.190904550" watchObservedRunningTime="2025-07-07 01:09:39.187575314 +0000 UTC m=+44.191215670" Jul 7 01:09:39.216555 containerd[2783]: time="2025-07-07T01:09:39.216524380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"ef605b5ec47119ae9572ff38fcf047c83f7e61c596ccd58b0d561b88fb0cccd1\" pid:7907 exited_at:{seconds:1751850579 nanos:216260019}" Jul 7 01:09:39.301239 kubelet[4295]: I0707 01:09:39.301145 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:39.375743 containerd[2783]: time="2025-07-07T01:09:39.375714721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"731d4652c2b81e1d5045da49bc677fe9842214c33e1b07764793d38c5b6520f1\" pid:7928 exited_at:{seconds:1751850579 nanos:375509041}" Jul 7 01:09:39.436452 containerd[2783]: time="2025-07-07T01:09:39.436415255Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"10e2fe95824e593e3c05482abca4bd60c7827e875dfdace7a3078cb61660e140\" pid:7963 exited_at:{seconds:1751850579 nanos:436178054}" Jul 7 01:09:40.323674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249308256.mount: Deactivated successfully. 
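
[Editor's note] The kubelet pod_startup_latency_tracker entries above report two figures per pod: podStartE2EDuration (observed running time minus podCreationTimestamp) and podStartSLOduration (the same interval minus the image-pull window between firstStartedPulling and lastFinishedPulling). The sketch below redoes that arithmetic with the calico-kube-controllers timestamps from the entry above; the formula is inferred from the logged values, not taken from kubelet source, so treat it as an assumption.

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the calico-kube-controllers latency entry above.
	created := mustParse("2025-07-07 01:09:13 +0000 UTC")
	firstPull := mustParse("2025-07-07 01:09:36.32468461 +0000 UTC")
	lastPull := mustParse("2025-07-07 01:09:39.03627706 +0000 UTC")
	running := mustParse("2025-07-07 01:09:39.187575314 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 26.187575314s, matching the log
	slo := e2e - lastPull.Sub(firstPull) // minus the pull window: ~23.476s
	fmt.Println("E2E:", e2e)
	// The log's podStartSLOduration (23.475982904s) differs only in the last
	// digits, presumably because kubelet subtracts monotonic clock readings.
	fmt.Println("SLO:", slo)
}
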
Jul 7 01:09:40.516592 containerd[2783]: time="2025-07-07T01:09:40.516536117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:40.516902 containerd[2783]: time="2025-07-07T01:09:40.516612917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 01:09:40.517217 containerd[2783]: time="2025-07-07T01:09:40.517199838Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:40.519094 containerd[2783]: time="2025-07-07T01:09:40.519067399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 01:09:40.519863 containerd[2783]: time="2025-07-07T01:09:40.519841240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.48340734s" Jul 7 01:09:40.519894 containerd[2783]: time="2025-07-07T01:09:40.519869960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 01:09:40.521394 containerd[2783]: time="2025-07-07T01:09:40.521375161Z" level=info msg="CreateContainer within sandbox \"b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 01:09:40.525473 containerd[2783]: time="2025-07-07T01:09:40.525450365Z" level=info msg="Container 314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282: CDI devices from CRI Config.CDIDevices: []" Jul 7 01:09:40.528862 containerd[2783]: time="2025-07-07T01:09:40.528837408Z" level=info msg="CreateContainer within sandbox \"b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\"" Jul 7 01:09:40.529158 containerd[2783]: time="2025-07-07T01:09:40.529140568Z" level=info msg="StartContainer for \"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\"" Jul 7 01:09:40.530096 containerd[2783]: time="2025-07-07T01:09:40.530076809Z" level=info msg="connecting to shim 314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282" address="unix:///run/containerd/s/1be253aedae0eac62bf0db1fc9c5301100f958dc57b1bc13a83d85be59f7497b" protocol=ttrpc version=3 Jul 7 01:09:40.548606 systemd[1]: Started cri-containerd-314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282.scope - libcontainer container 314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282. 
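
[Editor's note] The TaskExit entries that recur from here on carry exited_at as a split seconds/nanos pair rather than a formatted timestamp. A small sketch decoding the pair from the 01:09:39 TaskExit entry above back into the wall-clock time shown at the front of that same journal line.

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at fields copied from the TaskExit entry logged at 01:09:39 above.
	const seconds, nanos = 1751850579, 216260019

	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-07-07T01:09:39.216260019Z
}
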
Jul 7 01:09:40.577186 containerd[2783]: time="2025-07-07T01:09:40.577125049Z" level=info msg="StartContainer for \"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" returns successfully" Jul 7 01:09:41.191851 kubelet[4295]: I0707 01:09:41.191796 4295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lzdth" podStartSLOduration=23.120735944 podStartE2EDuration="27.191779051s" podCreationTimestamp="2025-07-07 01:09:14 +0000 UTC" firstStartedPulling="2025-07-07 01:09:36.449344973 +0000 UTC m=+41.452985289" lastFinishedPulling="2025-07-07 01:09:40.52038808 +0000 UTC m=+45.524028396" observedRunningTime="2025-07-07 01:09:41.19145289 +0000 UTC m=+46.195093246" watchObservedRunningTime="2025-07-07 01:09:41.191779051 +0000 UTC m=+46.195419367" Jul 7 01:09:42.259707 containerd[2783]: time="2025-07-07T01:09:42.259662969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"a22a2f50e65283ceb368b1d0e4639c423b64c021a924c01049e7db1f1d182d5f\" pid:8064 exit_status:1 exited_at:{seconds:1751850582 nanos:259381289}" Jul 7 01:09:43.250972 containerd[2783]: time="2025-07-07T01:09:43.250926439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"bf7bf8187995c1d39b241a215a3806d633fdd8466a038809c9d14ae3b3cced8a\" pid:8102 exit_status:1 exited_at:{seconds:1751850583 nanos:250749399}" Jul 7 01:09:43.768743 kubelet[4295]: I0707 01:09:43.768703 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:09:53.991403 kubelet[4295]: I0707 01:09:53.991303 4295 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 01:10:09.212355 containerd[2783]: time="2025-07-07T01:10:09.212316304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"7aa0d6788fc32ee796e6b3456c9a5534ecd82c0ab8588fb037345fb287590bbb\" pid:8197 exited_at:{seconds:1751850609 nanos:212085104}" Jul 7 01:10:09.445875 containerd[2783]: time="2025-07-07T01:10:09.445835856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"cdca2599b8c241636b6107e538884524fc5f73e9abffbd782701fd4c1cd68926\" pid:8218 exited_at:{seconds:1751850609 nanos:445599416}" Jul 7 01:10:13.246423 containerd[2783]: time="2025-07-07T01:10:13.246391170Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"cd974b72a8125b47bd13031b82c67718e00d5839876d8c83cfaa719421ab8a58\" pid:8253 exited_at:{seconds:1751850613 nanos:246104047}" Jul 7 01:10:24.596903 containerd[2783]: time="2025-07-07T01:10:24.596845045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"8383bb31d6c37f994375b7145c96f4837d521a95e1e8a536a11161b458d619a0\" pid:8302 exited_at:{seconds:1751850624 nanos:596616043}" Jul 7 01:10:28.740211 containerd[2783]: time="2025-07-07T01:10:28.740170281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"4fc72b62b34da1ed17389127c494ec66fa4162be9b389a7f8f578267f252dc34\" pid:8341 exited_at:{seconds:1751850628 nanos:740047560}" Jul 7 01:10:39.218425 
containerd[2783]: time="2025-07-07T01:10:39.218388682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"eefa2c0245cb11f0e6195c165221e76c034e7706617ef3b94405a657a5417ea1\" pid:8369 exited_at:{seconds:1751850639 nanos:218207121}" Jul 7 01:10:39.440047 containerd[2783]: time="2025-07-07T01:10:39.440011086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"5b1051f85cc7386a3e92654f52f5d61efb65aa875a8aa6dff0d9fd1957d108e6\" pid:8390 exited_at:{seconds:1751850639 nanos:439780205}" Jul 7 01:10:43.251091 containerd[2783]: time="2025-07-07T01:10:43.251051153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"a22445f4b65f57b9a41005bbde689db0daa87e52dd543445fbbc1441350dfc3c\" pid:8425 exited_at:{seconds:1751850643 nanos:250871152}" Jul 7 01:11:09.220510 containerd[2783]: time="2025-07-07T01:11:09.220454304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"0fe2546a7a115a97735e21d440d9f310c23a65b498980fc70fbd16a976a05d4d\" pid:8500 exited_at:{seconds:1751850669 nanos:220291063}" Jul 7 01:11:09.447236 containerd[2783]: time="2025-07-07T01:11:09.447186754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"eef65fc51e43cc1088f6b6f46f02a576709a44da46191e64bb15a7642734502c\" pid:8521 exited_at:{seconds:1751850669 nanos:446966354}" Jul 7 01:11:13.248621 containerd[2783]: time="2025-07-07T01:11:13.248585298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"b83d59418e23b1ce40aac8ea2a57a57bd7ae86dbe0c2b02854218e2ebab7db64\" pid:8556 exited_at:{seconds:1751850673 nanos:248375217}" Jul 7 01:11:24.592581 containerd[2783]: time="2025-07-07T01:11:24.592496264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"cf38b7e2fa31179be6a1b5006ed3833582c0a684aa68af84bb80d30c396df444\" pid:8598 exited_at:{seconds:1751850684 nanos:592266143}" Jul 7 01:11:28.739569 containerd[2783]: time="2025-07-07T01:11:28.739528814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"6596645359f59be937af61dbb700715e49d43c2eb89afcf7f6aa8dd064071c0f\" pid:8661 exited_at:{seconds:1751850688 nanos:739352414}" Jul 7 01:11:39.220265 containerd[2783]: time="2025-07-07T01:11:39.220228032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"5fc881fe7390d78b4b1b523b1edc19f97abd40ade58151ea8903101dbb3c3569\" pid:8693 exited_at:{seconds:1751850699 nanos:220046551}" Jul 7 01:11:39.445794 containerd[2783]: time="2025-07-07T01:11:39.445754494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"6c59ad0ab00a4c1757f945cb48f45f8d7dc60e60b733c72c5a3f3366114af686\" pid:8715 exited_at:{seconds:1751850699 nanos:445524334}" Jul 7 01:11:43.254148 containerd[2783]: time="2025-07-07T01:11:43.254086213Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"2f2854f1186ddcececbfcad7c19701f9cdb4f93bdb1c1ce6e943449568a5b709\" pid:8753 exited_at:{seconds:1751850703 nanos:253843292}" Jul 7 01:12:09.211741 containerd[2783]: time="2025-07-07T01:12:09.211686056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"11f4e30f5397177ca7392b2524f3dcd594437f48088d537a3214509444453a72\" pid:8802 exited_at:{seconds:1751850729 nanos:211491976}" Jul 7 01:12:09.449146 containerd[2783]: time="2025-07-07T01:12:09.449088375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"f48c1d31de0b531bb6a937c32935b4a9859d4cc9ee29dbae3d1c2723ecd653d0\" pid:8823 exited_at:{seconds:1751850729 nanos:448890815}" Jul 7 01:12:13.252044 containerd[2783]: time="2025-07-07T01:12:13.251992547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"8a8dc3cbf1e866c63d3ac8908409da4519d24a819e2f64e3f697594da3cd8bd0\" pid:8865 exited_at:{seconds:1751850733 nanos:251809107}" Jul 7 01:12:24.592753 containerd[2783]: time="2025-07-07T01:12:24.592690599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"9c05f1185320c817a0d6c59e61243aebe4a129e44f4739faddd34adbe1b06d2e\" pid:8902 exited_at:{seconds:1751850744 nanos:592468640}" Jul 7 01:12:28.738357 containerd[2783]: time="2025-07-07T01:12:28.738311202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"138dad1456e333ed40bf9969a82227c75c14402d125ef3cb62d8aab025dc913b\" pid:8942 exited_at:{seconds:1751850748 nanos:738156283}" Jul 7 01:12:39.217374 containerd[2783]: time="2025-07-07T01:12:39.217338475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"5c2c05c4209dfbf3b0af3e68e8a83a821cf57ca58d7ddf722739a5e954f06bf2\" pid:8989 exited_at:{seconds:1751850759 nanos:217159596}" Jul 7 01:12:39.438883 containerd[2783]: time="2025-07-07T01:12:39.438849279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"a1459d82988b4b3dfedc8ef65b46bac14f384ef92117c71f516548a47b48ca64\" pid:9010 exited_at:{seconds:1751850759 nanos:438619839}" Jul 7 01:12:43.250305 containerd[2783]: time="2025-07-07T01:12:43.250274123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"196384f6919c0d3a8a62152761dc5bd67d6396daae43c744a817766af6965535\" pid:9052 exited_at:{seconds:1751850763 nanos:250093563}" Jul 7 01:13:09.224537 containerd[2783]: time="2025-07-07T01:13:09.224460656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"b064f12277373a6c36f85740ab78680e67439ca2bad5a33b6b1eb89b31ef8dca\" pid:9099 exited_at:{seconds:1751850789 nanos:224269536}" Jul 7 01:13:09.439685 containerd[2783]: time="2025-07-07T01:13:09.439644093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" 
id:\"0a14b1070974bd26e1e2b12fb79611091d1a8bf334910a0ff35d4c6de1def8ac\" pid:9121 exited_at:{seconds:1751850789 nanos:439425853}" Jul 7 01:13:13.254194 containerd[2783]: time="2025-07-07T01:13:13.254150382Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"6ca7a6eff12d48df8d8d5814897e59bde55990b2b743a4f070b3daea0ac1ff59\" pid:9157 exited_at:{seconds:1751850793 nanos:253940502}" Jul 7 01:13:24.590052 containerd[2783]: time="2025-07-07T01:13:24.590021217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"e4c51792c70a2c0abe6a22827350e38cabb33881291d7d40e1c474a60dd9a6b4\" pid:9201 exited_at:{seconds:1751850804 nanos:589829017}" Jul 7 01:13:28.741321 containerd[2783]: time="2025-07-07T01:13:28.741280337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"ac12ec77e03f4bd98396155cc10011ad4f7e052f1985fe10995cd2dd625e3423\" pid:9256 exited_at:{seconds:1751850808 nanos:741098297}" Jul 7 01:13:39.219359 containerd[2783]: time="2025-07-07T01:13:39.219320865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"2fe6f9b54a3322ee871fc432c2af32cd239556d35d6c58a189f9ae71a4124351\" pid:9285 exited_at:{seconds:1751850819 nanos:219168385}" Jul 7 01:13:39.447008 containerd[2783]: time="2025-07-07T01:13:39.446971923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"424615a1778dbf97ef8228c6803425b322408f57420a41d832d65047df974f70\" pid:9307 exited_at:{seconds:1751850819 nanos:446623523}" Jul 7 01:13:43.188411 systemd[1]: Started sshd@7-147.28.143.214:22-113.44.161.187:38028.service - OpenSSH per-connection server daemon (113.44.161.187:38028). Jul 7 01:13:43.235592 containerd[2783]: time="2025-07-07T01:13:43.235560528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"7915bbc4fbb64d3193ad63d68cbbe747a37c24342a2da073cea4d1592cf30ac2\" pid:9344 exited_at:{seconds:1751850823 nanos:235371448}" Jul 7 01:13:43.422615 sshd[9332]: Connection reset by 113.44.161.187 port 38028 [preauth] Jul 7 01:13:43.424135 systemd[1]: sshd@7-147.28.143.214:22-113.44.161.187:38028.service: Deactivated successfully. 
Jul 7 01:13:51.271643 containerd[2783]: time="2025-07-07T01:13:51.271539369Z" level=warning msg="container event discarded" container=e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61 type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.281755 containerd[2783]: time="2025-07-07T01:13:51.281721612Z" level=warning msg="container event discarded" container=e11bb4cbebe6d13433a6cc91516b70d6ff67077afe3991b59f53ddd7602fde61 type=CONTAINER_STARTED_EVENT Jul 7 01:13:51.281802 containerd[2783]: time="2025-07-07T01:13:51.281768652Z" level=warning msg="container event discarded" container=57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.281802 containerd[2783]: time="2025-07-07T01:13:51.281783652Z" level=warning msg="container event discarded" container=c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.281802 containerd[2783]: time="2025-07-07T01:13:51.281796132Z" level=warning msg="container event discarded" container=c9812da8435807419d4d506a1294628c74758096f354d662675ac9bee85c5b2c type=CONTAINER_STARTED_EVENT Jul 7 01:13:51.281871 containerd[2783]: time="2025-07-07T01:13:51.281810132Z" level=warning msg="container event discarded" container=deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76 type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.281871 containerd[2783]: time="2025-07-07T01:13:51.281822572Z" level=warning msg="container event discarded" container=deb49431debd9581c43b9c6b1ad691bbfbdeb5c959df58527014f719aa65fb76 type=CONTAINER_STARTED_EVENT Jul 7 01:13:51.300025 containerd[2783]: time="2025-07-07T01:13:51.299990777Z" level=warning msg="container event discarded" container=c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.300025 containerd[2783]: time="2025-07-07T01:13:51.300015177Z" level=warning msg="container event discarded" container=813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178 type=CONTAINER_CREATED_EVENT Jul 7 01:13:51.348198 containerd[2783]: time="2025-07-07T01:13:51.348176949Z" level=warning msg="container event discarded" container=57a187dbdeac4b242db056ca4b56c44ca083f32c9c604d6317b248fed14a711c type=CONTAINER_STARTED_EVENT Jul 7 01:13:51.348198 containerd[2783]: time="2025-07-07T01:13:51.348189869Z" level=warning msg="container event discarded" container=c8ba2726ac72af1c75dd150ba3622348b419acd79f3d07674f49f191ca9915dd type=CONTAINER_STARTED_EVENT Jul 7 01:13:51.348198 containerd[2783]: time="2025-07-07T01:13:51.348196069Z" level=warning msg="container event discarded" container=813ce4762e10ab202e29fa2d32c73be55993e30ab684bb28b1aa1ff1b6c52178 type=CONTAINER_STARTED_EVENT Jul 7 01:14:01.482110 containerd[2783]: time="2025-07-07T01:14:01.482054978Z" level=warning msg="container event discarded" container=9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235 type=CONTAINER_CREATED_EVENT Jul 7 01:14:01.482110 containerd[2783]: time="2025-07-07T01:14:01.482089818Z" level=warning msg="container event discarded" container=9b1bf8258fb2237db897e0c3a5c02900e3f80b0eed883c083218843a51b30235 type=CONTAINER_STARTED_EVENT Jul 7 01:14:01.493278 containerd[2783]: time="2025-07-07T01:14:01.493252742Z" level=warning msg="container event discarded" container=574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71 type=CONTAINER_CREATED_EVENT Jul 7 01:14:01.561461 containerd[2783]: time="2025-07-07T01:14:01.561439129Z" level=warning msg="container event discarded" 
container=574a02dbc8cf98c6a09b72b2db1ca52d5539a6e2699d7157621c342254ac3f71 type=CONTAINER_STARTED_EVENT Jul 7 01:14:01.912751 containerd[2783]: time="2025-07-07T01:14:01.912714306Z" level=warning msg="container event discarded" container=1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83 type=CONTAINER_CREATED_EVENT Jul 7 01:14:01.912751 containerd[2783]: time="2025-07-07T01:14:01.912742506Z" level=warning msg="container event discarded" container=1216dfbfc4cef5128887b9e05dafdedf63ac6cce18b48aa879d235a90675bb83 type=CONTAINER_STARTED_EVENT Jul 7 01:14:03.086710 containerd[2783]: time="2025-07-07T01:14:03.086672659Z" level=warning msg="container event discarded" container=9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595 type=CONTAINER_CREATED_EVENT Jul 7 01:14:03.140886 containerd[2783]: time="2025-07-07T01:14:03.140854321Z" level=warning msg="container event discarded" container=9a23f375e5a5d6f549b7301557675e028e142869ebcd71959257090332aa2595 type=CONTAINER_STARTED_EVENT Jul 7 01:14:09.222390 containerd[2783]: time="2025-07-07T01:14:09.222353023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"1020dcba90afa28394304c74b1bef0c333e8d92918fbccbb306f6bc65e4788b0\" pid:9410 exited_at:{seconds:1751850849 nanos:222163063}" Jul 7 01:14:09.449878 containerd[2783]: time="2025-07-07T01:14:09.449846732Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"cc258eb827d8a8762da3e8f6e12eec58bff451b78b9c7ccbd1300440f29ce2b6\" pid:9431 exited_at:{seconds:1751850849 nanos:449633092}" Jul 7 01:14:13.254566 containerd[2783]: time="2025-07-07T01:14:13.254512792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"aae45d03bf09265246f3f7deb41d6a9fb7310922b89ea289197253161274291d\" pid:9468 exited_at:{seconds:1751850853 nanos:254257472}" Jul 7 01:14:13.457660 containerd[2783]: time="2025-07-07T01:14:13.457590738Z" level=warning msg="container event discarded" container=d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8 type=CONTAINER_CREATED_EVENT Jul 7 01:14:13.457660 containerd[2783]: time="2025-07-07T01:14:13.457633898Z" level=warning msg="container event discarded" container=d40285d344640bd5855d94ef17d724cbcbc0150846e8ac6fafb68b57633512c8 type=CONTAINER_STARTED_EVENT Jul 7 01:14:13.697482 containerd[2783]: time="2025-07-07T01:14:13.697445743Z" level=warning msg="container event discarded" container=047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f type=CONTAINER_CREATED_EVENT Jul 7 01:14:13.697482 containerd[2783]: time="2025-07-07T01:14:13.697482463Z" level=warning msg="container event discarded" container=047a2d283f84704ee95c32615deb67230fd1861d1555958da72d167aada0c10f type=CONTAINER_STARTED_EVENT Jul 7 01:14:14.893089 containerd[2783]: time="2025-07-07T01:14:14.893053535Z" level=warning msg="container event discarded" container=eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406 type=CONTAINER_CREATED_EVENT Jul 7 01:14:14.949262 containerd[2783]: time="2025-07-07T01:14:14.949234885Z" level=warning msg="container event discarded" container=eb2d3d8fc376d5671c33a142fbef046cce1d4b9367fb0c25c905bb8ae9b47406 type=CONTAINER_STARTED_EVENT Jul 7 01:14:15.798499 containerd[2783]: time="2025-07-07T01:14:15.798435863Z" level=warning msg="container event discarded" 
container=3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c type=CONTAINER_CREATED_EVENT Jul 7 01:14:15.855691 containerd[2783]: time="2025-07-07T01:14:15.855659294Z" level=warning msg="container event discarded" container=3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c type=CONTAINER_STARTED_EVENT Jul 7 01:14:16.186912 containerd[2783]: time="2025-07-07T01:14:16.186883435Z" level=warning msg="container event discarded" container=3e9ff6360597654655b373bcbc0dca6819df86c39a5464f91c2e35d67dcab18c type=CONTAINER_STOPPED_EVENT Jul 7 01:14:20.062662 containerd[2783]: time="2025-07-07T01:14:20.062621145Z" level=warning msg="container event discarded" container=a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0 type=CONTAINER_CREATED_EVENT Jul 7 01:14:20.111838 containerd[2783]: time="2025-07-07T01:14:20.111810294Z" level=warning msg="container event discarded" container=a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0 type=CONTAINER_STARTED_EVENT Jul 7 01:14:20.642866 containerd[2783]: time="2025-07-07T01:14:20.642838245Z" level=warning msg="container event discarded" container=a84b330cd968006b73378f404657b1ffbde6f20c8e9fa9de9cf6740506add9d0 type=CONTAINER_STOPPED_EVENT Jul 7 01:14:24.588829 containerd[2783]: time="2025-07-07T01:14:24.588732312Z" level=warning msg="container event discarded" container=862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327 type=CONTAINER_CREATED_EVENT Jul 7 01:14:24.589533 containerd[2783]: time="2025-07-07T01:14:24.589512313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"967361654d13335b9b934bccd67c3c98256e6a622b249490a137df042184386f\" pid:9517 exited_at:{seconds:1751850864 nanos:589314593}" Jul 7 01:14:24.653148 containerd[2783]: time="2025-07-07T01:14:24.653128832Z" level=warning msg="container event discarded" container=862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327 type=CONTAINER_STARTED_EVENT Jul 7 01:14:25.680791 containerd[2783]: time="2025-07-07T01:14:25.680753676Z" level=warning msg="container event discarded" container=79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67 type=CONTAINER_CREATED_EVENT Jul 7 01:14:25.680791 containerd[2783]: time="2025-07-07T01:14:25.680784676Z" level=warning msg="container event discarded" container=79589141401fb5d5c069198c7604bb554f0da244167dcb95f4faab8a550a7b67 type=CONTAINER_STARTED_EVENT Jul 7 01:14:26.607201 containerd[2783]: time="2025-07-07T01:14:26.607149424Z" level=warning msg="container event discarded" container=8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb type=CONTAINER_CREATED_EVENT Jul 7 01:14:26.660602 containerd[2783]: time="2025-07-07T01:14:26.660571698Z" level=warning msg="container event discarded" container=8c0f864d7e8f2a0c0e4cd8f92683b4dab06c76090264e0de9b957fd0bea6b3bb type=CONTAINER_STARTED_EVENT Jul 7 01:14:28.029428 containerd[2783]: time="2025-07-07T01:14:28.029392139Z" level=warning msg="container event discarded" container=90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f type=CONTAINER_CREATED_EVENT Jul 7 01:14:28.089606 containerd[2783]: time="2025-07-07T01:14:28.089581658Z" level=warning msg="container event discarded" container=90cae2c8a3d884a67ddccf98ebfc6976d7bd07e9e3a9aa800956980adacc4b5f type=CONTAINER_STARTED_EVENT Jul 7 01:14:28.738392 containerd[2783]: time="2025-07-07T01:14:28.738365682Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"44d34dd6ad7c407356a01e06650b8bdaf23b98349e97c24da2ca9a6714ba5dd8\" pid:9555 exited_at:{seconds:1751850868 nanos:738207722}" Jul 7 01:14:32.247778 containerd[2783]: time="2025-07-07T01:14:32.247720468Z" level=warning msg="container event discarded" container=30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83 type=CONTAINER_CREATED_EVENT Jul 7 01:14:32.247778 containerd[2783]: time="2025-07-07T01:14:32.247763668Z" level=warning msg="container event discarded" container=30392fec3f907b54c314a0e03c6820d56fd4d6875071f6eaf0c57d8395fafc83 type=CONTAINER_STARTED_EVENT Jul 7 01:14:33.249367 containerd[2783]: time="2025-07-07T01:14:33.249333634Z" level=warning msg="container event discarded" container=fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e type=CONTAINER_CREATED_EVENT Jul 7 01:14:33.249367 containerd[2783]: time="2025-07-07T01:14:33.249358034Z" level=warning msg="container event discarded" container=fbab45eb4ff015e28f6940db7a5e86bd05ed33b4316ffb33f5e02b2f61c5bd6e type=CONTAINER_STARTED_EVENT Jul 7 01:14:33.249367 containerd[2783]: time="2025-07-07T01:14:33.249365274Z" level=warning msg="container event discarded" container=456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98 type=CONTAINER_CREATED_EVENT Jul 7 01:14:33.310002 containerd[2783]: time="2025-07-07T01:14:33.309958596Z" level=warning msg="container event discarded" container=456c7749e9c3665f1d8633ff6fc8611eba1debeb39765fe0bd6353534e1dfe98 type=CONTAINER_STARTED_EVENT Jul 7 01:14:33.359182 containerd[2783]: time="2025-07-07T01:14:33.359133910Z" level=warning msg="container event discarded" container=4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950 type=CONTAINER_CREATED_EVENT Jul 7 01:14:33.359182 containerd[2783]: time="2025-07-07T01:14:33.359154230Z" level=warning msg="container event discarded" container=4655bb1820cbb5dafd5bd43921ef392901968fa364f18212ba473bf7fad2d950 type=CONTAINER_STARTED_EVENT Jul 7 01:14:33.850713 containerd[2783]: time="2025-07-07T01:14:33.850680490Z" level=warning msg="container event discarded" container=3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678 type=CONTAINER_CREATED_EVENT Jul 7 01:14:33.905002 containerd[2783]: time="2025-07-07T01:14:33.904965287Z" level=warning msg="container event discarded" container=3b7cc72de20ccf28ebb171fcc138b7dd8b6ac275d11cd0db7d14e144168f5678 type=CONTAINER_STARTED_EVENT Jul 7 01:14:34.076244 containerd[2783]: time="2025-07-07T01:14:34.076212206Z" level=warning msg="container event discarded" container=7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206 type=CONTAINER_CREATED_EVENT Jul 7 01:14:34.134573 containerd[2783]: time="2025-07-07T01:14:34.134506927Z" level=warning msg="container event discarded" container=7f60a27c0d2c3c1a3271b4faf2cafb56522beee42c8b2fe2110661f25b34b206 type=CONTAINER_STARTED_EVENT Jul 7 01:14:35.254036 containerd[2783]: time="2025-07-07T01:14:35.253981150Z" level=warning msg="container event discarded" container=d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa type=CONTAINER_CREATED_EVENT Jul 7 01:14:35.254036 containerd[2783]: time="2025-07-07T01:14:35.254030350Z" level=warning msg="container event discarded" container=d7de967afbd4fe363a0d9cada7b3ad53d8106977e1e597fc1e94f214bfc906aa type=CONTAINER_STARTED_EVENT Jul 7 01:14:36.250260 containerd[2783]: time="2025-07-07T01:14:36.250206774Z" level=warning msg="container event discarded" 
container=22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6 type=CONTAINER_CREATED_EVENT Jul 7 01:14:36.250450 containerd[2783]: time="2025-07-07T01:14:36.250412214Z" level=warning msg="container event discarded" container=22ea0d1892cf591773f731729ed2b0ff11c116a396c617e0b46f6c4b5c3cd3a6 type=CONTAINER_STARTED_EVENT Jul 7 01:14:36.250450 containerd[2783]: time="2025-07-07T01:14:36.250427654Z" level=warning msg="container event discarded" container=adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df type=CONTAINER_CREATED_EVENT Jul 7 01:14:36.283626 containerd[2783]: time="2025-07-07T01:14:36.283597237Z" level=warning msg="container event discarded" container=f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645 type=CONTAINER_CREATED_EVENT Jul 7 01:14:36.305070 containerd[2783]: time="2025-07-07T01:14:36.305034853Z" level=warning msg="container event discarded" container=adf14ff1e19b0d7f3da804d284cb62f21503a8b109f9f5805bf43ba66ef978df type=CONTAINER_STARTED_EVENT Jul 7 01:14:36.324361 containerd[2783]: time="2025-07-07T01:14:36.324321306Z" level=warning msg="container event discarded" container=f5d12ff1cb291db4324bfe0f8979c769023793274f738dff8638d5cc98b75645 type=CONTAINER_STARTED_EVENT Jul 7 01:14:36.324361 containerd[2783]: time="2025-07-07T01:14:36.324348586Z" level=warning msg="container event discarded" container=f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb type=CONTAINER_CREATED_EVENT Jul 7 01:14:36.324361 containerd[2783]: time="2025-07-07T01:14:36.324355986Z" level=warning msg="container event discarded" container=f78242cc7b0bbb2026243a441ca8dac96c8871256d2d13eb65838f5d4145a4eb type=CONTAINER_STARTED_EVENT Jul 7 01:14:36.458610 containerd[2783]: time="2025-07-07T01:14:36.458568682Z" level=warning msg="container event discarded" container=b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9 type=CONTAINER_CREATED_EVENT Jul 7 01:14:36.458610 containerd[2783]: time="2025-07-07T01:14:36.458604722Z" level=warning msg="container event discarded" container=b31c1d332e6e023757bf9265ff82e36c9ed88acc0ee02128b2fe8fcda29491c9 type=CONTAINER_STARTED_EVENT Jul 7 01:14:37.421906 containerd[2783]: time="2025-07-07T01:14:37.421866331Z" level=warning msg="container event discarded" container=e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1 type=CONTAINER_CREATED_EVENT Jul 7 01:14:37.480081 containerd[2783]: time="2025-07-07T01:14:37.480053732Z" level=warning msg="container event discarded" container=e80342076eabfb487d4c340743921c258f71bee8bde76169c3e7a92ae6f83ea1 type=CONTAINER_STARTED_EVENT Jul 7 01:14:39.058615 containerd[2783]: time="2025-07-07T01:14:39.058572594Z" level=warning msg="container event discarded" container=ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a type=CONTAINER_CREATED_EVENT Jul 7 01:14:39.111750 containerd[2783]: time="2025-07-07T01:14:39.111732713Z" level=warning msg="container event discarded" container=ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a type=CONTAINER_STARTED_EVENT Jul 7 01:14:39.218362 containerd[2783]: time="2025-07-07T01:14:39.218334871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"80337e890df1ebf89235f552f3a3af4e6a15c89b7820dbca9d9d91447193db39\" pid:9586 exited_at:{seconds:1751850879 nanos:218162151}" Jul 7 01:14:39.449466 containerd[2783]: time="2025-07-07T01:14:39.449439920Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"8bebb64d715d68214cd8380944b6d0c25ab062ebca40124aa6dc75ac869ad425\" pid:9607 exited_at:{seconds:1751850879 nanos:449233640}" Jul 7 01:14:40.538700 containerd[2783]: time="2025-07-07T01:14:40.538662001Z" level=warning msg="container event discarded" container=314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282 type=CONTAINER_CREATED_EVENT Jul 7 01:14:40.586871 containerd[2783]: time="2025-07-07T01:14:40.586846597Z" level=warning msg="container event discarded" container=314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282 type=CONTAINER_STARTED_EVENT Jul 7 01:14:43.249689 containerd[2783]: time="2025-07-07T01:14:43.249653827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"6ebb4eeea1ab10c4b2ef4970e880e20a54d2bd3c56f12a1aceb5a07069f58330\" pid:9642 exited_at:{seconds:1751850883 nanos:249480667}" Jul 7 01:15:09.218529 containerd[2783]: time="2025-07-07T01:15:09.218493646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"a7e2ed4bd12266fb2b76dcd7704bd7e90b144bc4c70f621f909c2d8fa9e7a4ce\" pid:9706 exited_at:{seconds:1751850909 nanos:218315526}" Jul 7 01:15:09.441408 containerd[2783]: time="2025-07-07T01:15:09.441344524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"2ab6747a2e7fd4705539f8158191e531b2ea923d95fd4c0e1d9fbe91276eed8c\" pid:9729 exited_at:{seconds:1751850909 nanos:440944364}" Jul 7 01:15:13.253736 containerd[2783]: time="2025-07-07T01:15:13.253675061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"beaab76222840767e5c4bf44776ca3cf4082abd1a850409837bf3c97a353cc8a\" pid:9764 exited_at:{seconds:1751850913 nanos:253462261}" Jul 7 01:15:19.440129 update_engine[2778]: I20250707 01:15:19.440068 2778 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 01:15:19.440129 update_engine[2778]: I20250707 01:15:19.440122 2778 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 01:15:19.440606 update_engine[2778]: I20250707 01:15:19.440346 2778 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 01:15:19.440697 update_engine[2778]: I20250707 01:15:19.440680 2778 omaha_request_params.cc:62] Current group set to beta Jul 7 01:15:19.440773 update_engine[2778]: I20250707 01:15:19.440762 2778 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 01:15:19.440773 update_engine[2778]: I20250707 01:15:19.440770 2778 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 01:15:19.440811 update_engine[2778]: I20250707 01:15:19.440783 2778 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 01:15:19.440833 update_engine[2778]: I20250707 01:15:19.440809 2778 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 01:15:19.440867 update_engine[2778]: I20250707 01:15:19.440856 2778 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 01:15:19.440888 update_engine[2778]: I20250707 01:15:19.440865 2778 omaha_request_action.cc:272] Request: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: Jul 7 01:15:19.440888 update_engine[2778]: I20250707 01:15:19.440871 2778 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:15:19.441148 locksmithd[2812]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 01:15:19.441831 update_engine[2778]: I20250707 01:15:19.441813 2778 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:15:19.442137 update_engine[2778]: I20250707 01:15:19.442108 2778 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 01:15:19.442520 update_engine[2778]: E20250707 01:15:19.442503 2778 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:15:19.442562 update_engine[2778]: I20250707 01:15:19.442551 2778 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 01:15:24.594209 containerd[2783]: time="2025-07-07T01:15:24.594170346Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"3467c55eb529cb9fc13f6e0120b7c3db8613b1236645a353f77dbd8e9e6ab75a\" pid:9804 exited_at:{seconds:1751850924 nanos:593962545}" Jul 7 01:15:28.743600 containerd[2783]: time="2025-07-07T01:15:28.743561729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"26c5e19e5024610b730e34a2aaf6c5fc8311510d1334b7988efd1d51f816353d\" pid:9847 exited_at:{seconds:1751850928 nanos:743362369}" Jul 7 01:15:29.394970 update_engine[2778]: I20250707 01:15:29.394464 2778 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:15:29.394970 update_engine[2778]: I20250707 01:15:29.394721 2778 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:15:29.394970 update_engine[2778]: I20250707 01:15:29.394929 2778 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 01:15:29.395378 update_engine[2778]: E20250707 01:15:29.395356 2778 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:15:29.395477 update_engine[2778]: I20250707 01:15:29.395462 2778 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 01:15:39.219542 containerd[2783]: time="2025-07-07T01:15:39.219471071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"21350f754ad8003870523063f723b3a239596de4bb9beb7a89168e7d722b88a5\" pid:9872 exited_at:{seconds:1751850939 nanos:219295790}" Jul 7 01:15:39.394571 update_engine[2778]: I20250707 01:15:39.394512 2778 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:15:39.394933 update_engine[2778]: I20250707 01:15:39.394766 2778 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:15:39.394995 update_engine[2778]: I20250707 01:15:39.394977 2778 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 01:15:39.395385 update_engine[2778]: E20250707 01:15:39.395371 2778 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:15:39.395412 update_engine[2778]: I20250707 01:15:39.395402 2778 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 01:15:39.445552 containerd[2783]: time="2025-07-07T01:15:39.445524334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"8b14249054ec799d0fb19ba4f44fe1eb55223d4929ae20cdc063243f8d0c2576\" pid:9894 exited_at:{seconds:1751850939 nanos:445186733}" Jul 7 01:15:43.253621 containerd[2783]: time="2025-07-07T01:15:43.253586509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"c5480f096a5bb9d31ff4d33222d518a5ae234f2253ee6e2877330a46fbf1caa4\" pid:9929 exited_at:{seconds:1751850943 nanos:253393589}" Jul 7 01:15:49.394579 update_engine[2778]: I20250707 01:15:49.394512 2778 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:15:49.395419 update_engine[2778]: I20250707 01:15:49.395155 2778 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:15:49.395419 update_engine[2778]: I20250707 01:15:49.395375 2778 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 01:15:49.396137 update_engine[2778]: E20250707 01:15:49.395728 2778 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395766 2778 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395773 2778 omaha_request_action.cc:617] Omaha request response: Jul 7 01:15:49.396137 update_engine[2778]: E20250707 01:15:49.395844 2778 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395859 2778 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395864 2778 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395869 2778 update_attempter.cc:306] Processing Done. 
Jul 7 01:15:49.396137 update_engine[2778]: E20250707 01:15:49.395881 2778 update_attempter.cc:619] Update failed. Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395886 2778 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395890 2778 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395895 2778 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395951 2778 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395969 2778 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 01:15:49.396137 update_engine[2778]: I20250707 01:15:49.395974 2778 omaha_request_action.cc:272] Request: Jul 7 01:15:49.396137 update_engine[2778]: Jul 7 01:15:49.396137 update_engine[2778]: Jul 7 01:15:49.396482 update_engine[2778]: Jul 7 01:15:49.396482 update_engine[2778]: Jul 7 01:15:49.396482 update_engine[2778]: Jul 7 01:15:49.396482 update_engine[2778]: Jul 7 01:15:49.396482 update_engine[2778]: I20250707 01:15:49.395979 2778 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 01:15:49.396482 update_engine[2778]: I20250707 01:15:49.396093 2778 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 01:15:49.396482 update_engine[2778]: I20250707 01:15:49.396262 2778 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 01:15:49.396616 locksmithd[2812]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 01:15:49.396827 update_engine[2778]: E20250707 01:15:49.396653 2778 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396683 2778 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396688 2778 omaha_request_action.cc:617] Omaha request response: Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396693 2778 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396697 2778 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396701 2778 update_attempter.cc:306] Processing Done. Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396706 2778 update_attempter.cc:310] Error event sent. 
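
[Editor's note] The update_engine passage above shows an Omaha update check posted to the placeholder host "disabled": every transfer fails with "Could not resolve host: disabled", is retried roughly every 10 seconds (01:15:19, :29, :39, :49), and after retry 3 the attempt is abandoned, the error is mapped to kActionCodeOmahaErrorInHTTPResponse, and the next check is scheduled 42m24s out. Below is a minimal sketch of that fixed-interval retry shape; the interval and retry count are read off the timestamps in this log rather than taken from update_engine's source, so they are assumptions.

package main

import (
	"errors"
	"fmt"
	"time"
)

// fetch stands in for the HTTP transfer; it always fails here, mirroring
// "Could not resolve host: disabled" in the log above.
func fetch(url string) error {
	return errors.New("could not resolve host: " + url)
}

func main() {
	const (
		url      = "disabled"       // update server is disabled on this image
		retries  = 3                // the log gives up after "retry 3"
		interval = 10 * time.Second // attempts land ~10s apart in the log
	)

	var err error
	for attempt := 0; attempt <= retries; attempt++ {
		if attempt > 0 {
			fmt.Printf("no HTTP response, retry %d\n", attempt)
			time.Sleep(interval)
		}
		if err = fetch(url); err == nil {
			break
		}
	}
	if err != nil {
		fmt.Println("update check failed:", err)
		fmt.Println("next update check in", 42*time.Minute+24*time.Second)
	}
}
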
Jul 7 01:15:49.396827 update_engine[2778]: I20250707 01:15:49.396713 2778 update_check_scheduler.cc:74] Next update check in 42m24s Jul 7 01:15:49.396976 locksmithd[2812]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 01:16:09.219460 containerd[2783]: time="2025-07-07T01:16:09.219374084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"991491e9b638898529a72693aa671798d56457fab7eda5596d11b52cbea3acad\" pid:10008 exited_at:{seconds:1751850969 nanos:219191043}" Jul 7 01:16:09.439937 containerd[2783]: time="2025-07-07T01:16:09.439908035Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"11c69fcebbe4fd3ca7994c2c748ae9371bb31fd5d404f4b3f86c753e174eb508\" pid:10031 exited_at:{seconds:1751850969 nanos:439714314}" Jul 7 01:16:13.253823 containerd[2783]: time="2025-07-07T01:16:13.253792442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"c55477c925112a2ccb31514728ce4f1112d62b0583a3a7b84151026076eb3797\" pid:10067 exited_at:{seconds:1751850973 nanos:253598362}" Jul 7 01:16:24.587836 containerd[2783]: time="2025-07-07T01:16:24.587795754Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"d71e8b19ce3d7583ce9570ac832a921aa496fbc0ee9482fdaa42b57e0005ae21\" pid:10111 exited_at:{seconds:1751850984 nanos:587539874}" Jul 7 01:16:28.733207 containerd[2783]: time="2025-07-07T01:16:28.733168516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"3a9b034db3b70e05d979d78b4ae054263f79ba181db9bde7e344f2708579fb82\" pid:10148 exited_at:{seconds:1751850988 nanos:733034276}" Jul 7 01:16:39.219972 containerd[2783]: time="2025-07-07T01:16:39.219931531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"3fa122068f0312fd8bd4be4a50d425189a74a72dbe48a2b08fe8fa95568e3d63\" pid:10172 exited_at:{seconds:1751850999 nanos:219735171}" Jul 7 01:16:39.451284 containerd[2783]: time="2025-07-07T01:16:39.451250334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"fc9c43f2a859e7e1861f541804c84227418ecba3fe75e841e52520efcf281441\" pid:10194 exited_at:{seconds:1751850999 nanos:450965054}" Jul 7 01:16:43.249240 containerd[2783]: time="2025-07-07T01:16:43.249204272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"8704fb96f9ea26b94552b5dccea530c3b10137611db19be543992cf1fb318615\" pid:10229 exited_at:{seconds:1751851003 nanos:249063432}" Jul 7 01:17:09.218486 containerd[2783]: time="2025-07-07T01:17:09.218448320Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"8ec0155813386f0b1e99c3ab3e09cc564da848ddc1ce8446670fd6dbc3160fa8\" pid:10271 exited_at:{seconds:1751851029 nanos:218252200}" Jul 7 01:17:09.450577 containerd[2783]: time="2025-07-07T01:17:09.450525285Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"29c14b58d7a96d5957d0525b1393af0284502cc22b3acf5e504539540d8b0efd\" pid:10293 exited_at:{seconds:1751851029 nanos:449608526}" Jul 7 01:17:13.253280 containerd[2783]: time="2025-07-07T01:17:13.253233296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"695cb683f3b73d05114412a162eba6228dcb2d792e9d28c328d7597bc91af4f5\" pid:10329 exited_at:{seconds:1751851033 nanos:253004176}" Jul 7 01:17:13.639498 systemd[1]: Started sshd@8-147.28.143.214:22-147.75.109.163:33040.service - OpenSSH per-connection server daemon (147.75.109.163:33040). Jul 7 01:17:13.942860 sshd[10356]: Accepted publickey for core from 147.75.109.163 port 33040 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:17:13.944038 sshd-session[10356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:17:13.947656 systemd-logind[2767]: New session 10 of user core. Jul 7 01:17:13.969652 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 01:17:14.229067 sshd[10358]: Connection closed by 147.75.109.163 port 33040 Jul 7 01:17:14.228891 sshd-session[10356]: pam_unix(sshd:session): session closed for user core Jul 7 01:17:14.232294 systemd[1]: sshd@8-147.28.143.214:22-147.75.109.163:33040.service: Deactivated successfully. Jul 7 01:17:14.235087 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 01:17:14.235680 systemd-logind[2767]: Session 10 logged out. Waiting for processes to exit. Jul 7 01:17:14.236512 systemd-logind[2767]: Removed session 10. Jul 7 01:17:19.294365 systemd[1]: Started sshd@9-147.28.143.214:22-147.75.109.163:51664.service - OpenSSH per-connection server daemon (147.75.109.163:51664). Jul 7 01:17:19.598570 sshd[10400]: Accepted publickey for core from 147.75.109.163 port 51664 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c Jul 7 01:17:19.599797 sshd-session[10400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 01:17:19.605334 systemd-logind[2767]: New session 11 of user core. Jul 7 01:17:19.619684 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 01:17:19.871241 sshd[10402]: Connection closed by 147.75.109.163 port 51664 Jul 7 01:17:19.871596 sshd-session[10400]: pam_unix(sshd:session): session closed for user core Jul 7 01:17:19.874527 systemd[1]: sshd@9-147.28.143.214:22-147.75.109.163:51664.service: Deactivated successfully. Jul 7 01:17:19.876764 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 01:17:19.877343 systemd-logind[2767]: Session 11 logged out. Waiting for processes to exit. Jul 7 01:17:19.878182 systemd-logind[2767]: Removed session 11. Jul 7 01:17:24.597594 containerd[2783]: time="2025-07-07T01:17:24.597518934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"1e534c7be79afe508a78f2ccdd86b05e1b530576d1bfc625c1b083a93dce1ae0\" pid:10464 exited_at:{seconds:1751851044 nanos:597276294}" Jul 7 01:17:24.929365 systemd[1]: Started sshd@10-147.28.143.214:22-147.75.109.163:51672.service - OpenSSH per-connection server daemon (147.75.109.163:51672). 
Jul 7 01:17:25.230222 sshd[10492]: Accepted publickey for core from 147.75.109.163 port 51672 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:25.231388 sshd-session[10492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:25.234676 systemd-logind[2767]: New session 12 of user core.
Jul 7 01:17:25.251651 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 01:17:25.502508 sshd[10497]: Connection closed by 147.75.109.163 port 51672
Jul 7 01:17:25.502812 sshd-session[10492]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:25.505815 systemd[1]: sshd@10-147.28.143.214:22-147.75.109.163:51672.service: Deactivated successfully.
Jul 7 01:17:25.508018 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 01:17:25.508586 systemd-logind[2767]: Session 12 logged out. Waiting for processes to exit.
Jul 7 01:17:25.509379 systemd-logind[2767]: Removed session 12.
Jul 7 01:17:25.570271 systemd[1]: Started sshd@11-147.28.143.214:22-147.75.109.163:51684.service - OpenSSH per-connection server daemon (147.75.109.163:51684).
Jul 7 01:17:25.873841 sshd[10528]: Accepted publickey for core from 147.75.109.163 port 51684 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:25.875038 sshd-session[10528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:25.878385 systemd-logind[2767]: New session 13 of user core.
Jul 7 01:17:25.898596 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 01:17:26.170702 sshd[10530]: Connection closed by 147.75.109.163 port 51684
Jul 7 01:17:26.170982 sshd-session[10528]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:26.174080 systemd[1]: sshd@11-147.28.143.214:22-147.75.109.163:51684.service: Deactivated successfully.
Jul 7 01:17:26.175733 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 01:17:26.176281 systemd-logind[2767]: Session 13 logged out. Waiting for processes to exit.
Jul 7 01:17:26.177071 systemd-logind[2767]: Removed session 13.
Jul 7 01:17:26.230199 systemd[1]: Started sshd@12-147.28.143.214:22-147.75.109.163:48478.service - OpenSSH per-connection server daemon (147.75.109.163:48478).
Jul 7 01:17:26.533875 sshd[10569]: Accepted publickey for core from 147.75.109.163 port 48478 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:26.535021 sshd-session[10569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:26.538201 systemd-logind[2767]: New session 14 of user core.
Jul 7 01:17:26.549605 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 01:17:26.806288 sshd[10571]: Connection closed by 147.75.109.163 port 48478
Jul 7 01:17:26.806600 sshd-session[10569]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:26.809569 systemd[1]: sshd@12-147.28.143.214:22-147.75.109.163:48478.service: Deactivated successfully.
Jul 7 01:17:26.811179 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 01:17:26.811751 systemd-logind[2767]: Session 14 logged out. Waiting for processes to exit.
Jul 7 01:17:26.812536 systemd-logind[2767]: Removed session 14.
Jul 7 01:17:28.741732 containerd[2783]: time="2025-07-07T01:17:28.741687880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"48d2c172e1589ee408fe0b5c9696bfcca91f8039dea5bb69046380839b336507\" pid:10623 exited_at:{seconds:1751851048 nanos:741481280}"
Jul 7 01:17:31.859254 systemd[1]: Started sshd@13-147.28.143.214:22-147.75.109.163:48480.service - OpenSSH per-connection server daemon (147.75.109.163:48480).
Jul 7 01:17:32.133770 sshd[10637]: Accepted publickey for core from 147.75.109.163 port 48480 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:32.134882 sshd-session[10637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:32.137977 systemd-logind[2767]: New session 15 of user core.
Jul 7 01:17:32.159586 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 01:17:32.390929 sshd[10639]: Connection closed by 147.75.109.163 port 48480
Jul 7 01:17:32.391191 sshd-session[10637]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:32.394058 systemd[1]: sshd@13-147.28.143.214:22-147.75.109.163:48480.service: Deactivated successfully.
Jul 7 01:17:32.396510 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 01:17:32.397183 systemd-logind[2767]: Session 15 logged out. Waiting for processes to exit.
Jul 7 01:17:32.398083 systemd-logind[2767]: Removed session 15.
Jul 7 01:17:32.452348 systemd[1]: Started sshd@14-147.28.143.214:22-147.75.109.163:48494.service - OpenSSH per-connection server daemon (147.75.109.163:48494).
Jul 7 01:17:32.758733 sshd[10677]: Accepted publickey for core from 147.75.109.163 port 48494 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:32.759819 sshd-session[10677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:32.762896 systemd-logind[2767]: New session 16 of user core.
Jul 7 01:17:32.781665 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 01:17:33.090597 sshd[10679]: Connection closed by 147.75.109.163 port 48494
Jul 7 01:17:33.090880 sshd-session[10677]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:33.093937 systemd[1]: sshd@14-147.28.143.214:22-147.75.109.163:48494.service: Deactivated successfully.
Jul 7 01:17:33.096990 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 01:17:33.097554 systemd-logind[2767]: Session 16 logged out. Waiting for processes to exit.
Jul 7 01:17:33.098397 systemd-logind[2767]: Removed session 16.
Jul 7 01:17:33.156346 systemd[1]: Started sshd@15-147.28.143.214:22-147.75.109.163:48506.service - OpenSSH per-connection server daemon (147.75.109.163:48506).
Jul 7 01:17:33.457137 sshd[10708]: Accepted publickey for core from 147.75.109.163 port 48506 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:33.458290 sshd-session[10708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:33.461470 systemd-logind[2767]: New session 17 of user core.
Jul 7 01:17:33.483650 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 01:17:34.292174 sshd[10710]: Connection closed by 147.75.109.163 port 48506
Jul 7 01:17:34.292495 sshd-session[10708]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:34.295635 systemd[1]: sshd@15-147.28.143.214:22-147.75.109.163:48506.service: Deactivated successfully.
Jul 7 01:17:34.297665 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 01:17:34.298251 systemd-logind[2767]: Session 17 logged out. Waiting for processes to exit.
Jul 7 01:17:34.299078 systemd-logind[2767]: Removed session 17.
Jul 7 01:17:34.359113 systemd[1]: Started sshd@16-147.28.143.214:22-147.75.109.163:48510.service - OpenSSH per-connection server daemon (147.75.109.163:48510).
Jul 7 01:17:34.662598 sshd[10770]: Accepted publickey for core from 147.75.109.163 port 48510 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:34.663801 sshd-session[10770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:34.667019 systemd-logind[2767]: New session 18 of user core.
Jul 7 01:17:34.692590 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 01:17:35.021684 sshd[10772]: Connection closed by 147.75.109.163 port 48510
Jul 7 01:17:35.022005 sshd-session[10770]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:35.025054 systemd[1]: sshd@16-147.28.143.214:22-147.75.109.163:48510.service: Deactivated successfully.
Jul 7 01:17:35.026688 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 01:17:35.027256 systemd-logind[2767]: Session 18 logged out. Waiting for processes to exit.
Jul 7 01:17:35.028084 systemd-logind[2767]: Removed session 18.
Jul 7 01:17:35.084194 systemd[1]: Started sshd@17-147.28.143.214:22-147.75.109.163:48526.service - OpenSSH per-connection server daemon (147.75.109.163:48526).
Jul 7 01:17:35.384566 sshd[10820]: Accepted publickey for core from 147.75.109.163 port 48526 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:35.385728 sshd-session[10820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:35.388958 systemd-logind[2767]: New session 19 of user core.
Jul 7 01:17:35.410591 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 01:17:35.654732 sshd[10822]: Connection closed by 147.75.109.163 port 48526
Jul 7 01:17:35.655046 sshd-session[10820]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:35.657885 systemd[1]: sshd@17-147.28.143.214:22-147.75.109.163:48526.service: Deactivated successfully.
Jul 7 01:17:35.660146 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 01:17:35.660750 systemd-logind[2767]: Session 19 logged out. Waiting for processes to exit.
Jul 7 01:17:35.661604 systemd-logind[2767]: Removed session 19.
Jul 7 01:17:39.218246 containerd[2783]: time="2025-07-07T01:17:39.218209161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee7fdf0fe4c9efad6db258c7adcce1b416d8a3210eb1a586b249dbaf8930bd4a\" id:\"e0667c86330620945774365fb53925b76fe2aff6c71f437e7093bb149cc70ee9\" pid:10878 exited_at:{seconds:1751851059 nanos:218056281}"
Jul 7 01:17:39.447275 containerd[2783]: time="2025-07-07T01:17:39.447245667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"862cc82d1fb4566d8d120d2daf4420deaec54c7077af1f9d340ded5125424327\" id:\"ff4299215bb375df2b9eb4915b458271b0061bd46666eb75de99b14635325867\" pid:10901 exited_at:{seconds:1751851059 nanos:447056667}"
Jul 7 01:17:40.712386 systemd[1]: Started sshd@18-147.28.143.214:22-147.75.109.163:42812.service - OpenSSH per-connection server daemon (147.75.109.163:42812).
Jul 7 01:17:41.012900 sshd[10922]: Accepted publickey for core from 147.75.109.163 port 42812 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:41.014170 sshd-session[10922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:41.017412 systemd-logind[2767]: New session 20 of user core.
Jul 7 01:17:41.036671 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 01:17:41.282675 sshd[10924]: Connection closed by 147.75.109.163 port 42812
Jul 7 01:17:41.282961 sshd-session[10922]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:41.286053 systemd[1]: sshd@18-147.28.143.214:22-147.75.109.163:42812.service: Deactivated successfully.
Jul 7 01:17:41.287697 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 01:17:41.288258 systemd-logind[2767]: Session 20 logged out. Waiting for processes to exit.
Jul 7 01:17:41.289098 systemd-logind[2767]: Removed session 20.
Jul 7 01:17:43.252930 containerd[2783]: time="2025-07-07T01:17:43.252880376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"314fad818c0dc23095dbd4890f133f045c39520ebf168e44c548700369303282\" id:\"0852ba0d908515b9114f54760f44c81927c9788f4ca80c6ec228b167910b982e\" pid:10969 exited_at:{seconds:1751851063 nanos:252658856}"
Jul 7 01:17:46.349391 systemd[1]: Started sshd@19-147.28.143.214:22-147.75.109.163:41100.service - OpenSSH per-connection server daemon (147.75.109.163:41100).
Jul 7 01:17:46.660115 sshd[10997]: Accepted publickey for core from 147.75.109.163 port 41100 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:46.661257 sshd-session[10997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:46.664254 systemd-logind[2767]: New session 21 of user core.
Jul 7 01:17:46.684595 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 01:17:46.932003 sshd[11005]: Connection closed by 147.75.109.163 port 41100
Jul 7 01:17:46.932309 sshd-session[10997]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:46.935300 systemd[1]: sshd@19-147.28.143.214:22-147.75.109.163:41100.service: Deactivated successfully.
Jul 7 01:17:46.937471 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 01:17:46.938038 systemd-logind[2767]: Session 21 logged out. Waiting for processes to exit.
Jul 7 01:17:46.938845 systemd-logind[2767]: Removed session 21.
Jul 7 01:17:51.993240 systemd[1]: Started sshd@20-147.28.143.214:22-147.75.109.163:41116.service - OpenSSH per-connection server daemon (147.75.109.163:41116).
Jul 7 01:17:52.267244 sshd[11045]: Accepted publickey for core from 147.75.109.163 port 41116 ssh2: RSA SHA256:kVUNe2Hxe22J6X2IemypgXA/rfUZemrSaRw55p8Kj+c
Jul 7 01:17:52.268353 sshd-session[11045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 01:17:52.271266 systemd-logind[2767]: New session 22 of user core.
Jul 7 01:17:52.291636 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 01:17:52.521127 sshd[11047]: Connection closed by 147.75.109.163 port 41116
Jul 7 01:17:52.521413 sshd-session[11045]: pam_unix(sshd:session): session closed for user core
Jul 7 01:17:52.524331 systemd[1]: sshd@20-147.28.143.214:22-147.75.109.163:41116.service: Deactivated successfully.
Jul 7 01:17:52.526474 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 01:17:52.527797 systemd-logind[2767]: Session 22 logged out. Waiting for processes to exit.
Jul 7 01:17:52.528903 systemd-logind[2767]: Removed session 22.